
Pegging profile run - technical settings - recommendations / best practice

moti_shakutai
Explorer

Hi,

We would like to use the pegging profile, which has advanced technical settings (transaction PEGRP), to improve performance (I think this was first introduced in ECC6 EHP5).


The SAP online help (F1) is not sufficient, and we would be more than happy to get tips, recommendations, and best practices on the following technical parameters:

1) Update: Wait for update - what is the influence of this parameter, and when should it be used (and when not)?

2) Technical Settings: Package size - what should the value be? Does it depend on the size of any tables? Which tables, and what size (number of records)? Also: For all entries size, Use RESB index.


We would like to hear which values you used, based on the size of the run and on the size of the pegging tables such as RESB.


Thanks

Moti Shaked-Shakutai

Accepted Solutions (0)

Answers (1)


moti_shakutai
Explorer

Hi,

I opened a message to SAP GPD support; the following is their comprehensive reply:

The new pegging run profile was introduced along with the new pegging program in EhP4.

The "Wait for update" setting, if turned on, takes effect at the very end of the pegging execution (new pegging): it forces the system to execute a local update task instead of the ordinary update task. The effect is that when pegging executes the database updates, this is done in synchronous mode; pegging execution waits until the database updates are finished. This makes pegging run longer, but when it finishes, the user can be sure that the database has been updated.

Using this flag makes sense when pegging is executed as a batch process whose job starts when a previous pegging job finishes. In this case the second job can be sure that the database is already in an updated state. Not using this flag makes pegging end sooner, since the database update happens in a separate process and does not hold up pegging execution.

You can find all relevant coding in report RPEGALL2 if you search for "gs_peg_run_prof-locupd".

Setting this flag depends on your business needs.
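To make the synchronous vs. asynchronous difference concrete, here is a small conceptual sketch (plain Python, not ABAP; the function names are hypothetical stand-ins for the pegging run and its update task):

```python
import threading
import time

def db_update(log):
    """Stand-in for the pegging database update task (hypothetical)."""
    time.sleep(0.01)              # simulate update-task work
    log.append("db updated")

def run_pegging(wait_for_update, log):
    """Finish the pegging computation, then hand off the DB update."""
    log.append("pegging finished")
    worker = threading.Thread(target=db_update, args=(log,))
    worker.start()                # ordinary update task: runs separately
    if wait_for_update:
        worker.join()             # local/synchronous update: block until done
    return log
```

With the flag set, the run ends only after "db updated" is in the log, which is exactly the guarantee a follow-up batch job needs.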

Package size sets the row count that the database server sends back in reply to SELECT statements. If the data fetch provides more rows than the package size, the packages are fetched one after another. The idea behind this is to fetch a smaller data volume from the database, process it, and then continue with the next package. This lowers memory consumption compared with fetching and processing all the data at once.

You can find all relevant coding in report RPEGALL2 if you search for "gv_pckgsize".

The package size setting depends on your performance needs. The higher the package size, the more data is processed at once and therefore the more memory is used. On the other hand, a low package size lowers memory consumption but raises the runtime due to the higher number of round trips between the application server and the database server. Please experiment with different settings and find the one that suits you best (best runtime and no memory allocation errors).
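The packaged-fetch behaviour described above can be sketched as follows (a Python illustration of the concept, not the actual RPEGALL2 coding; `fetch_in_packages` is a hypothetical name):

```python
def fetch_in_packages(cursor_rows, package_size):
    """Yield result rows in packages of at most package_size rows,
    mimicking a package-size-limited SELECT fetch: peak memory is
    bounded by one package instead of the full result set."""
    package = []
    for row in cursor_rows:
        package.append(row)
        if len(package) == package_size:
            yield package
            package = []
    if package:                    # final, possibly short, package
        yield package
```

Processing each yielded package before requesting the next one trades extra round trips for a lower memory footprint, which is exactly the trade-off the profile parameter controls.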

For all entries size is again a technical setting. SELECT statements with the FOR ALL ENTRIES clause are used in several places. Depending on the data volume in the FOR ALL ENTRIES table, the runtime of the SELECT statement can differ; beyond a certain amount of data, the runtime can grow exponentially. This can be prevented by setting the for all entries size: if the FOR ALL ENTRIES table has more rows than the run profile setting defines, the SELECT statement disregards the FOR ALL ENTRIES clause, fetches all records, and performs the necessary data filtering on the application server instead of the database server.

You can find all relevant coding in report RPEGALL2 if you search for "gs_peg_run_prof-forallcnt".

It is not (and cannot be) recommended by SAP to define an additional index on RESB, since this again needs a customer-specific applicability analysis. In a sandbox system of yours you can experiment with executing pegging with the index defined as recommended in the F1 help, and compare that with an execution without this database index. Also, if you define the index, the whole system's performance needs to be analysed as well, since RESB is a central table for many processes.

You can find all relevant coding in report RPEGALL2 if you search for "gs_peg_run_prof-resbindx".

Memory deallocation is a way to lower memory consumption while pegging is executed. To give you an idea: an internal table's memory allocation depends on the highest number of rows that have ever been in the table in this session, not on the rows it currently contains. This means the allocated memory never goes down, which basically ties up system memory without the allocated resource really being used. This is a typical working mechanism of NetWeaver; the garbage collector never frees memory allocated to internal tables. To overcome this, a technical step can be executed that lowers the allocated memory to the current size of the internal tables, but this comes at the expense of runtime.

You can find all relevant coding in report RPEGALL2 if you search for "gs_peg_run_prof-dealloc".

All these performance-related settings are affected by many factors: the database server used, the amount of memory the application server has, the network connection, and so on. Therefore only a default setting is provided in the run profile, and it is up to the customer to define the best settings. All customers use different run profile settings that best fit their needs, so a standard recommendation cannot be made. (Whatever produces better performance results at one customer might worsen performance at another.) Please experiment with different settings together with performance experts and technical staff.

Thanks

Moti Shaked-Shakutai