Application Development Discussions
Join the discussions or start your own on all things application development, including tools and APIs, programming models, and keeping your skills sharp.

commit work when uploading large files

former_member226519
Active Contributor
0 Kudos

We are uploading large files (15 × 10^6 records) into a database table.

Is there any suggestion on whether or not to perform a COMMIT WORK?

If yes: how do we calculate the number of records after which to perform the commit?

1 ACCEPTED SOLUTION

Former Member
0 Kudos

>

> We are uploading large files (15 × 10^6 records) into a database table.

> Is there any suggestion on whether or not to perform a COMMIT WORK?

> If yes: how do we calculate the number of records after which to perform the commit?

Hi,

Committing after every n bytes or so is more accurate for the database than committing after a fixed number of records, because record widths differ: an internal table of 1000 records can be small or large in terms of bytes, so be aware of that.

My proposal:

1. Read through the file package-wise:

A package is an internal table of a specific size, which you can calculate from the byte width of the internal table's structure and a given fixed commit size:

COMMIT_SIZE : the fixed commit size in bytes (e.g. 524288 bytes = 512 KB)

TAB_WIDTH : the byte size of the internal table's structure (the sum of its fields' lengths)

PACKAGE_SIZE : the number of internal table entries (the number of loops until the internal table is refreshed)

PACKAGE_SIZE = COMMIT_SIZE / TAB_WIDTH

2. Use this internal table for an array insert into your database table, and commit after the array insert:

INSERT dbtable FROM TABLE int_table.

COMMIT WORK.

3. Example:

COMMIT_SIZE = 524288 bytes

TAB_WIDTH = 143 bytes

PACKAGE_SIZE = 524288 / 143 = 3666 records

Here you would insert 3666 records and commit.

4. You can adjust the commit size as you like, resulting in bigger (or smaller) packages.

Either way, you have an approximate measure of how many bytes are written to the database per commit, which is important with regard to the database undo area if the transaction hits an error.

bye

yk
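
The proposal above could be sketched in ABAP roughly as follows. This is a minimal sketch, not from the original post: lv_filename, dbtable and the other names are illustrative placeholders.

```abap
CONSTANTS gc_commit_size TYPE i VALUE 524288.          " fixed commit size in bytes (512 KB)

DATA: lv_filename     TYPE string,
      ls_record       TYPE dbtable,                    " work area for one record
      gt_package      TYPE STANDARD TABLE OF dbtable,  " one package of records
      lv_tab_width    TYPE i,
      lv_package_size TYPE i.

" TAB_WIDTH: byte size of the internal table's line structure
DESCRIBE FIELD ls_record LENGTH lv_tab_width IN BYTE MODE.
lv_package_size = gc_commit_size / lv_tab_width.

OPEN DATASET lv_filename FOR INPUT IN BINARY MODE.
DO.
  READ DATASET lv_filename INTO ls_record.
  IF sy-subrc <> 0.
    EXIT.                                              " end of file
  ENDIF.
  APPEND ls_record TO gt_package.
  IF lines( gt_package ) >= lv_package_size.
    INSERT dbtable FROM TABLE gt_package.              " array insert of one package
    COMMIT WORK.
    REFRESH gt_package.
  ENDIF.
ENDDO.
CLOSE DATASET lv_filename.

" flush the last, partially filled package
IF gt_package IS NOT INITIAL.
  INSERT dbtable FROM TABLE gt_package.
  COMMIT WORK.
ENDIF.
```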

8 REPLIES

raymond_giuseppi
Active Contributor
0 Kudos

You may also use the FM [DB_COMMIT|https://www.sdn.sap.com/irj/sdn/advancedsearch?cat=sdn_all&query=db_commit&adv=false&sortby=cm_rnd_rankvalue#] to ensure the writing of data to the database, and use [COMMIT WORK|https://www.sdn.sap.com/irj/sdn/advancedsearch?cat=sdn_forums_rm&query=commitWORKtimeoutluw&adv=false&sortby=cm_rnd_rankvalue#] to avoid time-outs and memory consumption (also free internal tables whose data is no longer required).

Regards
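
As a small illustration of the difference between the two statements mentioned above (DB_COMMIT is a standard function module with no parameters; dbtable and int_table are placeholder names, not from the posts):

```abap
" Array insert followed by a database commit only - the SAP LUW continues:
INSERT dbtable FROM TABLE int_table.
CALL FUNCTION 'DB_COMMIT'.

" ... versus ending the SAP LUW, which also triggers a database commit:
COMMIT WORK.

" Free internal tables whose data is no longer required:
FREE int_table.
```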

0 Kudos

Hi,

COMMIT WORK ends an SAP logical unit of work (LUW), which can consist of several database sessions; these sessions are ended with the mentioned database commits, so there is no need to call the function.

You would call the function if you want more control over the current database session you are in,

without affecting other sessions.

Bye

yk


0 Kudos

>

> COMMIT_SIZE = 524288 bytes

> TAB_WIDTH = 143 bytes

> PACKAGE_SIZE = 524288 / 143 = 3666 records

YK - I like this approach, but how do you determine COMMIT_SIZE?

Rob

0 Kudos

Hi Rob,

OK - I tried to keep it simple. Volker was asking for a number of records, so I set a fixed commit size of 512 KB (or 1024 KB, ...).

" define the TAB_WIDTH

describe field int_tab_struc length TAB_WIDTH in byte mode.

PACKAGE_SIZE = KB512 / TAB_WIDTH

...

or

PACKAGE_SIZE = KB1024 / TAB_WIDTH

...

Well, the problem is that you can never really know whether your database undo area is large enough to handle the complete transaction (ever tried to insert zillions of rows into Oracle and got the famous ORA-01555 because the DB admins want to save space, since it costs so much?). Even with enough undo space,

I would try to limit the size of each commit as much as possible.

So divide (your resources) and conquer (the transaction).

I think SAP BW has something similar (a data package size given as a number of records or as a size in KB;

whichever limit is hit first fires a commit).

That is the only way to survive in a heavily loaded system ...

bye

yk

0 Kudos

If I'm reading you correctly, you set it arbitrarily large and then try it? If it works - fine (or increase it); if it doesn't - decrease it.

Rob

former_member226519
Active Contributor
0 Kudos

thanks a lot.

I had it coded like this:

describe table gt_in.

gv_maxrecs = trunc( gc_maxmem / sy-tleng ).

and I wanted a second opinion to calm my customer, because the report runs pretty long.

thanks a lot

former_member194613
Active Contributor
0 Kudos

Make it simple: commit every 1000 records.

Check the record size; if it is really large (> 5 KB), commit after 500 records.
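
This rule of thumb could be sketched like so (a sketch only; ls_record and dbtable are illustrative names, and 5 KB is taken as 5120 bytes):

```abap
DATA: ls_record       TYPE dbtable,   " work area of the table being filled
      lv_width        TYPE i,
      lv_commit_after TYPE i VALUE 1000.

" commit every 1000 records; for really wide records (> 5 KB), every 500
DESCRIBE FIELD ls_record LENGTH lv_width IN BYTE MODE.
IF lv_width > 5120.
  lv_commit_after = 500.
ENDIF.
```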