I am in the middle of replication and the JEST table is using a single process. How do I change it to parallel processing?

Former Member

Hi

I have an issue here.

Currently, replication of the JEST and CKMI1 tables is in progress. Each table has 400M records, so they are huge.

CKMI1 is running with multiple processes; JEST is using a single DIA process in ECC.

In LTRC I can see that the CKMI1 table is using "parallel processing" mode, while JEST is in "single processing" mode.

How do I change the JEST table to use "parallel processing"?

Krishnarjun

Accepted Solutions (0)

Answers (2)


FCI
Active Contributor

Hi Krishnarjun,


The performance options in transaction IUUC_REPL_CONTENT allow you to parallelize the data load of one table.


You can modify this option (number of processes) even while the data load is running (it should take effect after a while).


Regards,

Frederic

Former Member

Hi Frederic

Transaction IUUC_REPL_CONTENT is not available in our SLT system, as we are on SP9.



Krishnarjun

tobias_koebler
Advisor

Use transaction LTRS! It was previously named IUUC_REPL_CONTENT.

Former Member

Hi Tobias

I sent you an email a couple of days ago.

Anyway, I have created an entry in LTRS with reading type 4 for the JEST table, and I did not make any entry in IUUC_PRECALC_OBJ.

Now I cannot check the table's statistics in transaction LTRC.

I can see in HANA that the table is being updated very fast.

Going forward, can you please suggest what should be done for QUALITY system testing?

I want to upload 552 tables.

30 tables are more than 50 GB, with more than 300 million records each.

ECC is a 9 TB system (40 DIA, 20 BTC processes).

SLT is configured with 40 BTC processes.

HANA is a 1 TB system.

In the current system, I used a CSV file to start replication for all 552 tables in one shot.

This has been running for 2 weeks.

Some tables are running with a single process, some with parallel processes.

How can I make sure all the tables run in parallel mode?

Do you want me to create entries for all 30 big tables in transaction LTRS with reading type 4 or 5?

Can the rest of the small tables go in one shot via CSV file?

Krishnarjun

tobias_koebler
Advisor

Hi,

I do not understand your request and your question. You are operating with 40 BGD jobs and started 552 tables in one shot? How should parallelization work when you have fewer jobs than tables?

You can only have one global setting for the reading type (RT) for the whole configuration: in LTR, set "resource optimized" for RT3 or "performance optimized" for RT4/5. For the subset of tables you do not want in this RT, you have to override the setting in LTRS, or start the first group with performance options, switch to resource optimized afterwards, and then start the second group.
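This two-group approach (one group started under one global reading-type setting, the other group started after switching) can be sketched as a simple partition by table size. The table names, sizes, and the 50 GB threshold below are illustrative assumptions, not real configuration data:

```python
def split_by_size(table_sizes_gb, threshold_gb=50):
    """Partition tables into the group started first (small tables)
    and the group started later under the other setting (big tables).

    table_sizes_gb: dict mapping table name -> size in GB
    (hypothetical figures, for illustration only).
    """
    small = [t for t, size in table_sizes_gb.items() if size < threshold_gb]
    big = [t for t, size in table_sizes_gb.items() if size >= threshold_gb]
    return small, big

# Hypothetical sizes; in practice these come from database statistics.
sizes = {"JEST": 60, "CKMI1": 55, "T001": 0.01}
small_group, big_group = split_by_size(sizes)
print(small_group)  # ['T001']
print(big_group)    # ['JEST', 'CKMI1']
```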

To start the load, you can use a CSV upload or simply copy and paste a list into the data provisioning UI of LTRC.

Best, Tobias

PS: Every week I get more than 70 mails with questions. Most of them could be answered if the sender would just read the system documentation. I do my best, but I cannot answer all mails. Mails from Gmail accounts will not be processed - sorry.

Former Member

Hi Tobias

Thanks for the information. My requirement is to replicate 552 tables in total; out of the 552, 30 tables are huge.

So I will set "resource optimized" and upload the 522 small tables using a CSV file. After the initial load of this set completes, I will set "performance optimized" and upload the 30 big tables via CSV file.

Let me confirm my understanding of the big tables: once I set "performance optimized" for the 30 tables, will they use parallel processing automatically, or do I have to set it manually in LTRS for all 30 tables?

Basically, I want to utilize all the processes and finish the load fast.

What I did previously was load all 552 tables using a CSV file, and it took many days. Some tables used a single DIA process, some used multiple processes, and some ran on the DB side and did not show up in SAP transaction SM50.

I am facing one more problem now. Some time back I suspended some tables, and they showed as "load/replication blocked" in LTRC.

Then I resumed the tables; in HANA they showed "Resume" and "scheduled".

In LTRC, they are still showing "load/replication blocked".

I don't know how to bring the tables back into replication mode. I tried Resume from both LTRC and HANA; it did not work.

Do I need to remove the "L" in the screen below for each and every table?


Krishnarjun

Former Member

Hi,

The problem in your case is that you do not have enough BGD jobs to handle 500+ tables with a parallel initial load (you would need over 1000 BGD jobs to load all of them with 2 jobs in parallel each). When I need to load a bigger number of tables at the same time, I try to raise the data transfer jobs to the maximum and keep the initial load jobs at max-1, so that at least one job remains to handle the real-time replication, recreate the objects after the initial load is ready, and move them into replication mode.
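The shortfall described above is simple arithmetic; a quick sketch (2 jobs per table is the figure assumed in the text):

```python
def bgd_jobs_needed(num_tables, jobs_per_table=2):
    # Background (BGD) jobs required to load every table
    # in parallel at the same time.
    return num_tables * jobs_per_table

needed = bgd_jobs_needed(552)  # 552 tables at 2 jobs each -> 1104
available = 40                 # BGD jobs configured in this SLT system
print(needed, needed > available)  # 1104 True
```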

You have the option to control the sequence of the initial load for the tables; you can do this setup in LTRS. I always try to load the smaller tables at the beginning, which leaves your 30 big tables to the end. At that point all 40 jobs should handle the 30 big tables, and parallel load will come automatically for you. However, if all 30 are running at the same time, at most 10 tables will be loaded with 2 jobs and 20 with 1. This is handled by SLT.
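The 10-tables-with-2-jobs / 20-tables-with-1-job split can be modelled with a small greedy allocation. This is an illustrative model of the behaviour described above, not SLT's actual scheduler:

```python
def allocate_jobs(num_tables, total_jobs, max_per_table=2):
    # Give every table one job first, then hand out the remaining
    # jobs one extra each, capped at max_per_table.
    # Assumes total_jobs >= num_tables.
    jobs = [1] * num_tables
    spare = total_jobs - num_tables
    for i in range(min(spare, num_tables)):
        jobs[i] = max_per_table
    return jobs

alloc = allocate_jobs(30, 40)          # 30 big tables, 40 BGD jobs
print(alloc.count(2), alloc.count(1))  # 10 20
```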

I also try to split the load rather than trigger everything at the same time. It will not make the load faster, but it is easier to monitor and to react in case an issue happens.

The Block Processing Step should not be changed by you. Normally the system blocks replication until all objects are finished, for example the calculation phase. If that is not done, the block stays. I have had some cases in the past where the block was not removed properly, but that is very rare.

And just be patient: 552 tables are a lot to load, and 40 jobs will need time to handle them all.

BR
Tamas

Former Member

Hi Tamas

In my next round of testing, I will upload all my small tables to HANA using a CSV file.

Then I will upload the 30 big tables one by one.

For example, after the small tables are loaded, I will upload the CKMI1 table, which is a big table. How do I assign 30 processes to that table?

Once CKMI1 is finished, I will upload the next big table.

But I want to utilize all the work processes that have been assigned.

I want all the big tables to run with parallel processing and a higher number of work processes. In my previous run, I saw some big tables running with a single process.

Please let me know how to assign parallel processing with 20 work processes to each table. I will run them one after another.


Krishnarjun

Saritha_K
Contributor

Hi,

Please specify your DMIS version.

You can enable parallel processing with the following steps:

LTRS => select your MT ID -> tab "Performance Options" -> add your table.

1. Specify the NO. OF PARALLEL JOBS (based on the background processes available in your system).

2. Specify the SEQUENCE NUMBER (tables with low numbers are loaded first and get higher priority).

3. Specify the reading type based on the type of your table.

Reading types and descriptions:

1 - Access Plan Calculation

     An access plan is used for reading the table; the data is split into portions which are then loaded into the target system.

2 - Pool tables

     Similar to the access plan reading type, but the entire table is read as one portion.

3 - DB_SETGET (cluster tables)

     Default option. Uses the function module DB_SETGET to fetch a predefined number of entries from the source table. These entries are ordered by their primary keys.

4 - INDX CLUSTER (IMPORT FROM DB)

     Opens a database cursor in the source system and copies the data from the source table to an INDX table on the source system. The entries of the INDX table are then replicated, ordered by their indices.

5 - INDX CLUSTER with FULL TABLE SCAN

     Similar to reading type 4, but forces a full table scan. Note that this reading type is not suitable for cluster tables.

6 - INDX CLUSTER filled from external report

     The INDX table is filled by an external report. Note that this reading type is not suitable for replicating data to SAP HANA systems.

7 - INDX CLUSTER child table FTS

     This reading type is not relevant for replicating data using SAP Landscape Transformation Replication Server.

Hope this helps.

Regards.

Saritha K

Former Member

Hi Saritha

I have added the table in LTRS and set reading type 5, then deactivated and reactivated the SLT run. It is still not taking effect.

Can I delete the JEST table and reload it with parallel processing?

Former Member

Hi

I have changed it to reading type 4. Now it is showing parallel mode and some records got inserted. Later it failed with the error below; it seems to be a duplicate key error.

How can I delete the table and start the load again with parallel processing?

Message text: Z_JEST_009: Run 0000000038 aborted due to DUPLICATE KEY on 20150921 at 105901