Application Development Discussions
Join the discussions or start your own on all things application development, including tools and APIs, programming models, and keeping your skills sharp.

Bulk-Insert for native SQL INSERT statement

0 Kudos

Hi.

We have run into a big performance problem when inserting data sets into an external Oracle database.

The problem we identified is the native SQL INSERT statement itself.

It seems there is no way to execute a bulk insert with native SQL.

I have tried several ways to implement the native SQL logic, including dynamically generated subroutine pools, generated reports, ADBC classes, inline EXEC SQL statements, etc. All variants work, but the performance of the INSERT statement itself is the problem.

We need to export millions of data sets, and it is not practicable to execute one INSERT statement per data set.

Is there any way to boost the performance of inserting data sets?

Open SQL is not possible because the target DB is not the SAP DB (multi-database connection).
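To illustrate the current state: the row-by-row pattern, sketched here with ADBC (the DBCO connection name 'ORA_EXT', the table, and the columns are placeholders, not our real objects):

TYPES: BEGIN OF ty_data,
         matnr  TYPE c LENGTH 18,
         matart TYPE c LENGTH 4,
       END OF ty_data.
DATA: lt_data TYPE STANDARD TABLE OF ty_data,
      ls_data TYPE ty_data,
      lo_con  TYPE REF TO cl_sql_connection,
      lo_stmt TYPE REF TO cl_sql_statement,
      lv_sql  TYPE string.

lo_con  = cl_sql_connection=>get_connection( con_name = 'ORA_EXT' ).
lo_stmt = lo_con->create_statement( ).

* One INSERT execution, and one network round trip, per data set.
LOOP AT lt_data INTO ls_data.
  CONCATENATE `INSERT INTO MARA (MATNR, MATART) VALUES ('`
              ls_data-matnr `', '` ls_data-matart `')`
         INTO lv_sql.
  lo_stmt->execute_update( lv_sql ).
ENDLOOP.

lo_con->commit( ).
lo_con->close( ).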

Thanks in advance

20 REPLIES

Former Member
0 Kudos

Have you tried to run everything in Oracle with a single statement like


INSERT INTO target (column1, column2) SELECT column1, column2 FROM source;

?

Edited by: Carsten Grafflage on Nov 19, 2010 1:56 PM

0 Kudos

Thanks for the reply.

No, I didn't try that because the data source and the target DB are not the same.

I want to read from the SAP DB (DDIC) and write that data to a second, additionally connected database.

The multi database connection was configured with transaction DBCO.

0 Kudos

Well, shouldn't this work with an Oracle database link?


INSERT INTO target@target_system (column1, column2) SELECT column1, column2 FROM source@sap_system

You can access all SAP tables from Oracle. If you need to read the data through some SAP API function (or whatever), you might still select everything on the SAP side into a temporary table, and then use an INSERT with a subquery over the database link? I have to say, I never tried this myself, but this is what I would try at least.

That's a good guess. But the main problem is that one data set is written with one statement.

And if I understand your suggestion correctly, that would be no different.

I need something like "INSERT INTO [schema].[table] FROM TABLE [SAP-DDIC Table]."

Unfortunately, the CONNECTION addition of ABAP Open SQL only works if the external database table is an exact replica of the SAP DDIC table. E.g. the MARA table must be named "MARA" with all fields included.

But it was a good idea.

Thanks.

volker_borowski2
Active Contributor
0 Kudos

Hi Thomas,

can you explain what a data set is in your description?

Is it a set of multiple rows (internal table) with an insert target for a single table in the remote DB?

Is it a single row that needs to be distributed to several tables?

From your description I'd assume it is a single row for a single table, and you won't be able to do bulk inserts in that case.

... or did I get it wrong?

Volker

0 Kudos

Hi Volker.

With "data-set" I mean a single result line. E.g. I read the whole MARA than a data set is one line of MARA table with all columns. If I use a join statement the data set consists of the selected columns.

I will "copy" a DDIC table to an external oracel database. So, not the main database used by SAP system.

I tried the CONNECTION addition of the Open SQL INSERT statement. The problem is that this causes a short dump with the error message "SAPSQL_WA_TOO_SMALL", although the target table includes all fields of the table and the fields have the data types that the function module "DB_MAP_DDFIELDS" returns. There is a difference of 4 bytes between the two work areas.
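For reference, roughly the variant that dumps for me (the DBCO connection name 'ORA_EXT' is a placeholder):

DATA: lt_mara TYPE STANDARD TABLE OF mara.

SELECT * FROM mara INTO TABLE lt_mara.

* Works only if the remote table is an exact replica of the DDIC table.
INSERT mara CONNECTION ('ORA_EXT') FROM TABLE lt_mara.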

former_member192616
Active Contributor
0 Kudos

Hi Thomas,

so writing to the ORACLE database costs time because you do it row by row?

You said you tried ADBC. With ADBC it should be possible to have bulk processing with native SQL. What was your issue with that?

Kind regards,

Hermann

0 Kudos

Hi Hermann,

"...so writing to the ORACLE database costs time because you do it row by row?"

>>Yes. So it is.

"You said you tried ADBC. With ADBC it should be possible to have bulk processing

with native SQL. What was your issue with that?"

>> My issue with that is, that it is not possible to do a bulk insert with ADBC .

Kind Regards

Thomas

Edited by: Thomas Steier on Nov 22, 2010 4:44 PM

0 Kudos

Hi again,

I'm still not sure whether you have "stored" your result somewhere (itab, DB table), or whether you have a cursor open (either select or join) and are doing the inserts inside this loop...

As for the first option, this thread might help:

As for the second: you will surely have a single INSERT in that case that needs to be faster.

This usually only goes faster by providing more resources (i.e. a faster network, or bigger TCP windows: there are SDU / TDU parameters in tnsnames.ora; in general you should not touch them, but in this case you might check their sizes, depending on the release on the target. TCP.NODELAY is another network option that might either help or hurt).

Fewer checks on the target help as well, such as fewer secondary indexes and fewer constraints, ...
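For illustration only, an SDU override in tnsnames.ora could look like this (alias, host, and size are placeholders, not recommendations):

TARGET.WORLD =
  (DESCRIPTION =
    (SDU = 32768)
    (ADDRESS = (PROTOCOL = TCP)(HOST = targethost)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = TARGET))
  )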

Volker

0 Kudos

Hi Thomas,

> >> My issue with that is, that it is not possible to do a bulk insert with ADBC .

i must admit i never tried it, i thougth it was possible when i read the documentation

but maybe i mixed it up with array fetch.

Kind regards,

Hermann

0 Kudos

Hi,

in addition to Volker's comments on network and indexes, I would like to add more options that might help.

Measure the time for the inserts on the client (MaxDB) side, e.g. with ST05, and compare it with the server (Oracle) side, e.g. the times in V$SQL. If there is a significant difference, the network tips from Volker might help a lot.

If the time is mostly spent on the server, check out indexes and constraints as Volker mentioned. Additionally, "nologging" might be an option, but it requires a full backup afterwards.

Another question is whether parallelization is a way to go... (?)

Kind regards,

Hermann

0 Kudos

... oops, did I get something wrong?

I thought both DBs were Oracle?!? With one of them not belonging to an SAP system, but both being Oracle?

NOLOGGING unfortunately does not have the desired effect for single INSERTs.

I need to hunt for a reference, but I seem to remember that the effect of the LOGGING attribute only applies to array operations.

Volker

0 Kudos

Hi Volker,

>

> ... oops, did I get something wrong ?

> I thought both DBs were oracle?!?!?!

> With one of them not belonging to a SAP system, but both beinig oracle?

it might be me who got it wrong. I assumed we talk about a ORACLE and a

MAXDB... . With both DB's on ORACLE your previous post should be enough

to help out.

> NOLOGGING unfortunately does not have the desired effect for single INSERTS.

> Need to hunt out for a reference, but I mind to remember, that the effect of

> the LOGGING attribute only applies for array operations.

and your memory is right (mine was wrong). On AskTom I found this nice matrix:


Table Mode    Insert Mode     ArchiveLog mode      result
-----------   -------------   -----------------    ----------
LOGGING       APPEND          ARCHIVE LOG          redo generated
NOLOGGING     APPEND          ARCHIVE LOG          no redo
LOGGING       no append       ""                   redo generated
NOLOGGING     no append       ""                   redo generated
LOGGING       APPEND          noarchive log mode   no redo
NOLOGGING     APPEND          noarchive log mode   no redo
LOGGING       no append       noarchive log mode   redo generated
NOLOGGING     no append       noarchive log mode   redo generated

So NOLOGGING only works with APPEND (which works only with array operations).

So I think with your previous post everything is said.

Kind regards,

Hermann

0 Kudos

Hi @all.

First of all, many thanks for your help. I think this is a special problem, and I have spent days researching this topic.

Now, thanks to your help, I have found a way to do a bulk insert with native SQL, and I would like to share this knowledge with you.

@Carsten: The DB link hint was great, but unfortunately you need permissions on the database that you will never get on a productive system because of security concerns. So this would be the easiest but most risky way.

@Volker: Never spend time working with the CONNECTION addition of the INSERT statement. I tried it several ways at the expense of my nerves ;). The result is that you will have more issues than results. It is a good hint if someone wants to transfer data from one SAP DDIC to another SAP DDIC, but it is no solution for transferring data from SAP DDIC to a plain Oracle database table.

The hint about tnsnames.ora and its configuration is great.

@Hermann: OK, the ADBC framework currently does not support bulk inserts. BUT, ta-daa...

THE SOLUTION:

Assemble the INSERT statements without the INSERT part, e.g. "INTO MARA (MATNR, MATART, MATKL) VALUES ('000000000000000001', 'ABC', null)". Collect the assembled clauses in an internal table, and in portions of not more than 200 INTO clauses assemble the SQL statement as follows:

INSERT ALL

INTO MARA (MATNR, MATART, MATKL) VALUES ('000000000000000001', 'ABC', null)

INTO MARA (MATNR, MATART, MATKL) VALUES ('000000000000000002', 'DEF', 'YYY')

INTO MARA (MATNR, MATART, MATKL) VALUES ('000000000000000003', 'GHJ', null)

.....

SELECT * FROM dual

Now you can use the ADBC framework to execute this statement (e.g. with EXECUTE_UPDATE), and it will insert all your values into your table.

I found out that how fast the INSERT statement is processed depends entirely on network traffic and system performance.

At one moment the insert was processed in 16 seconds, and at another it took 59 seconds for the same insert.

The batch size of 200 can vary.
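A rough sketch of the whole thing in ABAP; the DBCO connection name 'ORA_EXT' is a placeholder, and the INTO clauses are shown with literal example values:

DATA: lo_con  TYPE REF TO cl_sql_connection,
      lo_stmt TYPE REF TO cl_sql_statement,
      lt_into TYPE TABLE OF string,
      lv_into TYPE string,
      lv_sql  TYPE string.

* Collect the assembled INTO clauses (in real code generated from the
* source data), at most ~200 per statement.
APPEND `INTO MARA (MATNR, MATART, MATKL) VALUES ('000000000000000001', 'ABC', null)` TO lt_into.
APPEND `INTO MARA (MATNR, MATART, MATKL) VALUES ('000000000000000002', 'DEF', 'YYY')` TO lt_into.

* Build one multi-row statement from the collected clauses.
lv_sql = `INSERT ALL`.
LOOP AT lt_into INTO lv_into.
  CONCATENATE lv_sql lv_into INTO lv_sql SEPARATED BY space.
ENDLOOP.
CONCATENATE lv_sql `SELECT * FROM dual` INTO lv_sql SEPARATED BY space.

* Execute it once on the secondary connection: one round trip for the
* whole batch instead of one per row.
TRY.
    lo_con  = cl_sql_connection=>get_connection( con_name = 'ORA_EXT' ).
    lo_stmt = lo_con->create_statement( ).
    lo_stmt->execute_update( lv_sql ).
    lo_con->commit( ).
    lo_con->close( ).
  CATCH cx_sql_exception.
    " error handling omitted in this sketch
ENDTRY.

With 200 INTO clauses per statement, this reduces the number of statement executions (and round trips) by a factor of up to 200 compared to row-by-row inserts.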

0 Kudos

Hi Thomas,

thank you very much for sharing. I don't think of such options often since I rarely use native SQL.

Very interesting.

>I found out that it is absolutly dependent on network traffic and system performance how fast the insert statement will processed.

Just out of interest: How did you find out or what have you been looking at?

On the database side it should / could be possible to speed up your statement with the

append (direct-path insert),

nologging (needs a backup afterwards since it will omit redo),

parallel (run it in parallel)

hints. It would be interesting to see if e.g. append parallel makes a difference... (see the sketch below)
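For illustration, the hinted variant of the generated statement could look like the following. Whether direct path and parallel really kick in for a multi-table INSERT ALL depends on the Oracle release and settings, so treat it as something to test, not as a given:

INSERT /*+ APPEND PARALLEL */ ALL

INTO MARA (MATNR, MATART, MATKL) VALUES ('000000000000000001', 'ABC', null)

.....

SELECT * FROM dual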

Kind regards,

Hermann

0 Kudos

By the way, is it possible to supply an internal table to a stored procedure on the Oracle DB?

0 Kudos

> At one moment the insert was processed in 16 seconds, and at another it took 59 seconds for the same insert.

> The batch size of 200 can vary.

Hi,

is that 16 secs against 59 secs for ALL your INSERTs, or for a block of 200 records?

For 200 records, even 16 seconds is WAY TOO SLOW.

And when I think of the required parsing overhead to dynamically build all those different INSERT statements from these large text strings, I think the problem must be somewhere else.

I simply cannot believe that parsing this text

> INSERT ALL

> INTO (MATNR, MATART, MATKL) VALUES ('000000000000000001', 'ABC', null)

> INTO (MATNR, MATART, MATKL) VALUES ('000000000000000002', 'DEF', 'YYY')

> INTO (MATNR, MATART, MATKL) VALUES ('000000000000000003', 'GHJ', null)

:

:

which will have a statement length of 200*70+stuff (13k+ for the dynamic SQL text), PLUS the time to really execute that code, is four times faster than doing 200 executions of a precompiled, reusable single INSERT statement from the shared pool which just gets the data structures fed (counting '000000000000000002', 'DEF', 'YYY' as 35 bytes + overhead per record).

I assume something else on the configuration side of your target DB is responsible.
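Just to illustrate what I mean, a sketch (not your code) of a constant, reusable INSERT with host variables; connection name, table, and field lengths are placeholders:

TYPES: BEGIN OF ty_row,
         matnr  TYPE c LENGTH 18,
         matart TYPE c LENGTH 4,
         matkl  TYPE c LENGTH 9,
       END OF ty_row.
DATA: lt_rows TYPE STANDARD TABLE OF ty_row,
      ls_row  TYPE ty_row.

EXEC SQL.
  SET CONNECTION 'ORA_EXT'
ENDEXEC.

* The statement text never changes, so Oracle parses it once and
* serves every further execution from the shared pool.
LOOP AT lt_rows INTO ls_row.
  EXEC SQL.
    INSERT INTO MARA (MATNR, MATART, MATKL)
    VALUES (:ls_row-matnr, :ls_row-matart, :ls_row-matkl)
  ENDEXEC.
ENDLOOP.

EXEC SQL.
  COMMIT
ENDEXEC.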

Volker

0 Kudos

Hi Thomas,

Happy to know that your problem is solved. I have a similar requirement for updating an external database table from SAP.

In the current code, the native SQL UPDATE is written inside a LOOP over at most 5000 records, so the database is hit that many times.

I want to know how exactly you have written the code to insert records in bulk into the database, so that I can use a similar method.

Thanks a million,

Bharati

0 Kudos

Hi Thomas,

I have a similar problem updating an external database table from SAP in bulk.

In the current code, the native SQL UPDATE is written inside a LOOP AT ITAB, where the ITAB contains between 4000 and 5000 records. The performance of this code during the UPDATE statement is the problem. I want to know how exactly you have written the code for the bulk INSERT.

Thanks a million,

Bharati

0 Kudos

Hi.

I have written the SQL command as described in this thread above.

Execute the INSERT ALL statement as a string with the ADBC framework.

But my further experience is that you don't get any meaningful benefit from it. It is not faster than precompiled, reusable statements.

This is probably because the ADBC framework already optimizes the SQL requests.

However, the only way I know to extract data with acceptable performance is to write the values that were read into a CSV file and push it with FTP to the target system. On the target system (where the database is located) you can read and insert the data in seconds.

I analysed two applications that specialize in data extraction, and both extract into a file in cases where a lot of data has to be transferred.

By the way, 5000 entries is not so many that you have to consider performance improvements. I was talking about millions of entries.
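A minimal sketch of the file part, assuming a simple three-column structure; the file name is a placeholder, and the FTP transfer and the load on the target side (e.g. with a native bulk loader) are not shown:

TYPES: BEGIN OF ty_row,
         matnr  TYPE c LENGTH 18,
         matart TYPE c LENGTH 4,
         matkl  TYPE c LENGTH 9,
       END OF ty_row.
DATA: lt_rows TYPE STANDARD TABLE OF ty_row,
      ls_row  TYPE ty_row,
      lv_file TYPE string VALUE '/tmp/mara_export.csv',
      lv_line TYPE string.

* Write one semicolon-separated line per data set to the app server.
OPEN DATASET lv_file FOR OUTPUT IN TEXT MODE ENCODING UTF-8.
LOOP AT lt_rows INTO ls_row.
  CONCATENATE ls_row-matnr ls_row-matart ls_row-matkl
         INTO lv_line SEPARATED BY ';'.
  TRANSFER lv_line TO lv_file.
ENDLOOP.
CLOSE DATASET lv_file.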