
ABAP - CUA: Initial load: too many records in the CUA?

We are running:

SP03 for IDENT. CENTER DESIGNTIME 7.1 Patch 0

SP03 for IDENTITY CENTER RUNTIME 7.1 Patch 1

SP03 for NW IDM IC UIS 7.00 Patch 1

We have connected our customer's CUA system to IdM: we created an Identity Store called 'SAP_Master', created the CUA repository, defined the required attributes as described in the guide 'IDM 7.1 - IdM For SAP Systems - Configuration 7-3.pdf', created the jobs based upon the templates, etc. The dispatcher used has 'run provisioning jobs' disabled.

On our sandbox server, when we connect to our sandbox CUA system (CUA_YS5_200), everything is OK: the 'AS ABAP - Initial load' job, with only 'ReadABAPRoles' enabled, runs fine.

On our QA system, when we connect to our 'production' CUA system (CUA_YP0_400), the same 'AS ABAP - Initial load' job with only 'ReadABAPRoles' enabled finishes with the message 'could not start job, rescheduling'. Since there is a huge number of records (we looked it up in the system: 311,679 records), we decided to switch on the parameter 'bootstrap job'. Now the job takes forever: after half a day it is still running, and in the database, table 'sapCUA_YP0_400role' is still completely empty (no records). As a cross-check, we connected our QA IdM system to our development CUA system (CUA_YS5_200) instead. After a while, the exact same job finished, and table 'sapCUA_YS5_200role' contains 18,580 records.

After some additional testing, we suspect the cause of the issue is that the number of records in our CUA is too large.

In the Java code of the fromSAP pass there are two calls to the SAP system for reading the roles into IdM. The first one reads table USRSYSACT (311,000 records), the second one reads table USRSYSACTT (1,000,000 records). All these records are stored in a Java HashMap; we suspect that 1,000,000 records exceeds what the HashMap can cope with on the available heap, although no Java error is thrown.
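
That theory can be sanity-checked outside IdM by loading a million dummy entries into a HashMap in a standalone program and running it with the same -Xmx as the dispatcher. A minimal sketch (the key/value shapes are assumptions, not the actual pass code):

import java.util.HashMap;
import java.util.Map;

// Standalone check: a HashMap itself copes with millions of entries;
// the real limit is heap. Run with -Xmx1024m to mimic the dispatcher.
public class HashMapLoadTest {
    public static void main(String[] args) {
        Map<String, String> roles = new HashMap<>(2_000_000);
        Runtime rt = Runtime.getRuntime();
        for (int i = 0; i < 1_000_000; i++) {
            // Dummy entries, roughly shaped like USRSYSACTT rows (assumption).
            roles.put("ROLE_" + i, "Description text for role number " + i);
            if (i % 100_000 == 0) {
                long usedMb = (rt.totalMemory() - rt.freeMemory()) / (1024 * 1024);
                System.out.println(i + " entries, ~" + usedMb + " MB used");
            }
        }
        System.out.println("Done: " + roles.size() + " entries");
    }
}

If this grinds to a halt or throws OutOfMemoryError at -Xmx1024m, the heap, rather than the HashMap itself, is the limit.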

When we debug the function module RFC_READ_TABLE and change the row count to 100,000, everything works fine. When we set the row count to 200,000, the Java code does not generate an error, but the job in IdM never ends...

When we run the function module RFC_READ_TABLE directly in the backend system, the 1,000,000 records are processed in less than one minute. So apparently the issue is related to the processing in the Java code.
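
For reference, the kind of read the pass performs can be reproduced standalone with SAP JCo 3, including the row cap we changed in the debugger. This is only a sketch; the destination name and the row limit are assumptions:

import com.sap.conn.jco.JCoDestination;
import com.sap.conn.jco.JCoDestinationManager;
import com.sap.conn.jco.JCoException;
import com.sap.conn.jco.JCoFunction;
import com.sap.conn.jco.JCoTable;

// Standalone reproduction of the RFC_READ_TABLE call made by the pass.
public class ReadTableTest {
    public static void main(String[] args) throws JCoException {
        // "CUA_YP0_400" as a JCo destination name is an assumption.
        JCoDestination dest = JCoDestinationManager.getDestination("CUA_YP0_400");
        JCoFunction fm = dest.getRepository().getFunction("RFC_READ_TABLE");

        fm.getImportParameterList().setValue("QUERY_TABLE", "USRSYSACTT");
        // Same knob we changed in the ABAP debugger: cap the rows returned.
        fm.getImportParameterList().setValue("ROWCOUNT", 100000);

        fm.execute(dest);

        JCoTable data = fm.getTableParameterList().getTable("DATA");
        System.out.println("Rows returned: " + data.getNumRows());
        for (int i = 0; i < data.getNumRows(); i++) {
            data.setRow(i);
            String row = data.getString("WA"); // one fixed-width line per row
            // ...split the line into fields and put them into the map here
        }
    }
}

Timing this read on its own should show whether the RFC transfer or the subsequent map handling is what never finishes.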

The Java dispatcher heap size is set to 1024 MB.
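
Whether the dispatcher JVM really gets that heap can be verified with a trivial check run under the same JVM flags:

// Prints the maximum heap the JVM will use; with -Xmx1024m this should
// report roughly 1024 MB.
public class HeapCheck {
    public static void main(String[] args) {
        long maxMb = Runtime.getRuntime().maxMemory() / (1024 * 1024);
        System.out.println("Max heap: ~" + maxMb + " MB");
    }
}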

Has anybody already come across this issue?

Thanks & best regards,

Kevin
