Application Development Discussions
Join the discussions or start your own on all things application development, including tools and APIs, programming models, and keeping your skills sharp.

Read 20000K from application server file

Former Member
0 Kudos

Hi All,

I am reading data from an application server file. The amount of data in the file is nearly 20000K.

What different steps can I incorporate to reduce the runtime of the program?

A few options I thought of:

1> Use field symbols in the LOOP and READ statements.

2> Use a field symbol in the READ statement.

I have one doubt: will field symbols give better performance than the internal-table-with-header-line syntax in LOOP and READ statements?

What can be done from the programming side to read the data, process it, and then place it back on the application server, so that the runtime is reduced?


12 REPLIES

former_member194613
Active Contributor
0 Kudos

You should not use header lines anymore, but not for performance reasons.

What you are comparing is ASSIGNING a field symbol versus INTO wa; ASSIGNING is faster if the table is wide.

BUT the main effect is the use of BINARY SEARCH, or a sorted or hashed table, for the inner table.

If you forget this one, your performance will become very poor.
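For illustration, a minimal sketch of the two loop variants (itab1 and the row type ty_line are assumed names):

FIELD-SYMBOLS <fs_line> TYPE ty_line.
DATA wa_line TYPE ty_line.

* Variant 1: copies every row into the work area
LOOP AT itab1 INTO wa_line.
  " process wa_line
ENDLOOP.

* Variant 2: only assigns a reference, no copy of the (wide) row
LOOP AT itab1 ASSIGNING <fs_line>.
  " process <fs_line>
ENDLOOP.

The wider the row, the more the copy in variant 1 costs per iteration.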

Siegfried

0 Kudos

I am already doing the following in the program:

1> READ with BINARY SEARCH.

2> In the program I am sorting the table before the binary search. If I use an internal table of type SORTED in my case, will it improve performance?

former_member194613
Active Contributor
0 Kudos

LOOP AT itab1 INTO wa1.
  " the key field matnr is illustrative
  READ TABLE itab2 INTO wa2 WITH KEY matnr = wa1-matnr BINARY SEARCH.
ENDLOOP.

Sorted tables are not faster than BINARY SEARCH, but more convenient.

You must sort only itab2, not itab1.

And sort only once.

If you need a LOOP for itab2, because there is no 1:1 relation, then Sorted tables are much better.

If the relation is 1:1 and you can define a unique key for itab2, then a hash table is better and faster.
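As a hedged sketch of the two table kinds (the row type ty_line and key field matnr are assumed):

* Sorted table: fast LOOP AT ... WHERE over the key, good for 1:n relations
DATA itab2_sorted TYPE SORTED TABLE OF ty_line WITH NON-UNIQUE KEY matnr.

* Hashed table: constant-time access, requires a unique key (1:1 relation)
DATA itab2_hashed TYPE HASHED TABLE OF ty_line WITH UNIQUE KEY matnr.

READ TABLE itab2_hashed INTO wa2 WITH TABLE KEY matnr = wa1-matnr.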

See the pictures in this blog:

Measurements on internal tables: Reads and Loops:

/people/siegfried.boes/blog/2007/09/12/runtimes-of-reads-and-loops-on-internal-tables

Siegfried

Former Member
0 Kudos

Can we speed up:

1> Reading the data from the dataset?

2> Transferring the data to the dataset?

0 Kudos

If you can split it up into smaller chunks, that could help, but otherwise...
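A minimal sketch of that idea (assuming lv_input_file is already open and process_and_transfer_block is a hypothetical routine that handles one block):

CONSTANTS lc_block_size TYPE i VALUE 100000.
DATA: lt_block TYPE STANDARD TABLE OF string,
      lv_line  TYPE string.

DO.
  READ DATASET lv_input_file INTO lv_line.
  IF sy-subrc NE 0.
    EXIT.
  ENDIF.
  APPEND lv_line TO lt_block.
  IF lines( lt_block ) >= lc_block_size.
    " process and write out one block, then release the memory
    PERFORM process_and_transfer_block USING lt_block.
    CLEAR lt_block.
  ENDIF.
ENDDO.

" handle the last, partially filled block
IF lt_block IS NOT INITIAL.
  PERFORM process_and_transfer_block USING lt_block.
ENDIF.

This keeps memory consumption bounded instead of holding all 20000K records at once.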

Rob

Former Member
0 Kudos

1) Use field symbols.

2) Sort the internal table.

3) Then use BINARY SEARCH on the internal table (a combined sketch follows).
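A minimal sketch putting the three steps together (table and field names are assumed):

FIELD-SYMBOLS <fs_row> TYPE ty_line.

SORT itab BY matnr.

READ TABLE itab ASSIGNING <fs_row>
     WITH KEY matnr = lv_matnr BINARY SEARCH.
IF sy-subrc = 0.
  " <fs_row> points at the found row; no copy was made
ENDIF.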

former_member192616
Active Contributor
0 Kudos

Hi,

> I am reading data from application server file.

Are you talking about OPEN DATASET and READ DATASET?

Is this the time-consuming part? Have you done some analysis?

And could you post the results and the code?

Without more information it is quite hard to help...
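For a first check, a minimal sketch using GET RUN TIME (the commented line stands for your actual READ DATASET processing):

DATA: lv_t0   TYPE i,
      lv_t1   TYPE i,
      lv_used TYPE i.

GET RUN TIME FIELD lv_t0.
" ... the READ DATASET loop under suspicion ...
GET RUN TIME FIELD lv_t1.
lv_used = lv_t1 - lv_t0.
WRITE: / 'Microseconds spent in the dataset loop:', lv_used.

GET RUN TIME FIELD returns microseconds, so this quickly shows whether the file I/O or the processing in between dominates.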

Kind regards,

Hermann

former_member194613
Active Contributor
0 Kudos

If the data are in a file, you must read them from the file and process them.

There are other places from which you can read faster, but if the data are not stored there, it makes no sense.

Overall, your task sounds like a one-time action, so just run it and sooner or later it is done.

Siegfried

former_member207438
Participant
0 Kudos

Where are you experiencing a performance problem? Do you have 20 thousand, 20 million or 20 billion records in your 20000K file?

Are you doing the following?


data lv_record type string.

" your_file_input / your_file_output hold the file paths
open dataset your_file_input for input in text mode encoding default.
open dataset your_file_output for output in text mode encoding default.

do.
  read dataset your_file_input into lv_record.

  if sy-subrc ne 0.
    exit.
  endif.

  " Process your record
  " ... some complex logic goes here ...

  " Write your record
  transfer lv_record to your_file_output.
enddo.

close dataset your_file_input.
close dataset your_file_output.

0 Kudos

Hello Edward,

I am doing the same thing, and the number of records, as mentioned, is 20000K.

0 Kudos

Hi Sanju,

Could you run a trace and send the hit list, please?

Kind regards,

Hermann

Former Member
0 Kudos

Closing the thread, although I could not get a proper answer.