How to improve Initial Download of Customers from ECC to CRM?

Former Member

Hi All,

We have a huge customer base of over 1 million customers in ECC and we want to replicate all of them to the CRM system through an initial download.

So I wanted to know what the ways are to improve the performance of the middleware initial download of customers from ECC to CRM.

Is parallel processing possible with the initial download, or are there other ways to improve the performance if that is not possible?

Looking for help.

thank you

Ramakanth

Accepted Solutions (1)

Please go through this:

1.     Challenges

Among the many hurdles a conversion cutover can pose is the conversion of data between the SAP systems in the landscape. When SAP CRM is in the landscape, SAP CRM middleware is widely used for the exchange of data between SAP CRM and the other connected SAP systems such as SAP ERP Central Component (SAP ECC), SAP NetWeaver BW, SAP Supply Chain Management (SAP SCM), and mobile devices (Figure 1).

The focus of this article is on optimizing SAP CRM middleware data exchange between SAP ECC and SAP CRM systems, which if handled properly can significantly reduce the time that SAP CRM middleware replication consumes. Henceforth, in this article, SAP CRM middleware is referred to as middleware.

2.     The Source System

Extraction of data in the source system can be slow because data is selected sequentially based on the adapter block size. You can improve this by splitting the data into multiple chunks and running the loads in parallel. Extraction then happens in parallel in the source system, up to the limit set by the middleware parameter that specifies how many request loads may run at the same time. You maintain the settings that allow multiple requests to run in parallel in table SMOFPARSFA, as shown in Figure 2.

Figure 2: Parameter in table SMOFPARSFA that sets the number of parallel request loads

Let's look at an example of running parallel requests: an SAP CRM system has 10 million business partners in the number range 1 to 10,000,000 that should be replicated from SAP CRM to SAP ECC. Create 20 requests, each containing 500,000 business partners (1 to 500,000, 500,001 to 1,000,000, and so on). These requests can be run in parallel using transaction R3AR4; the configuration in table SMOFPARSFA allows multiple requests to run at the same time.
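
As a rough illustration of how such request ranges can be cut, here is a small Python sketch (illustrative only; the function and names are ours, not SAP middleware code, and the boundaries match the example above):

# Illustrative sketch only: split a business partner number range into
# equally sized request loads, as in the 20 x 500,000 example above.
# Each (low, high) pair would become the key range of one request load
# started with transaction R3AR4.
def build_request_ranges(first_bp, last_bp, partners_per_request):
    ranges = []
    low = first_bp
    while low <= last_bp:
        high = min(low + partners_per_request - 1, last_bp)
        ranges.append((low, high))
        low = high + 1
    return ranges

for low, high in build_request_ranges(1, 10_000_000, 500_000):
    print(f"{low:>10,} - {high:>10,}")
# 1 - 500,000, 500,001 - 1,000,000, ... , 9,500,001 - 10,000,000 (20 requests)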

3.     The Target System

Posting of data in the target system can be slow because queues are processed sequentially. This can be handled by increasing the number of outbound queues created in the source system; the target system can then process multiple queues in parallel, which may improve the performance of the load. If the source system is SAP ECC, maintain the parameter in table CRMPAROLTP specific to the adapter object, as shown in Figure 3. The Param. Name 2 field holds the adapter object name and must be maintained for each adapter object whose queues should be processed in parallel. This example refers to business partner replication, so BUPA_MAIN is entered in the Param. Name 2 field as the adapter object and the value 50 in the Param. Value field.

Figure 3: Size the number of parallel queues in SAP ECC (table CRMPAROLTP)

If SAP CRM is the source system, maintain the parameter in table SMOFPARSFA specific to the adapter object, as shown in Figure 4. BUPA_MAIN is entered in the Param. Name 2 field as the adapter object and the value 50 in the Param. Value field.

Figure 4: Size the number of parallel queues in table SMOFPARSFA
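
To summarize the two settings just described, here is a small, purely illustrative Python sketch (the field labels follow the figures described above; 50 is the example value from the text, not a recommendation):

# Illustrative sketch only (not SAP code): which table carries the
# parallel-outbound-queue setting depends on the source system.
def parallel_queue_setting(source_system, adapter_object="BUPA_MAIN", queues=50):
    table = "CRMPAROLTP" if source_system == "ECC" else "SMOFPARSFA"
    return {
        "table": table,                   # maintained in the source system
        "Param. Name 2": adapter_object,  # adapter object name
        "Param. Value": queues,           # number of parallel outbound queues
    }

print(parallel_queue_setting("ECC"))  # source is SAP ECC -> CRMPAROLTP
print(parallel_queue_setting("CRM"))  # source is SAP CRM -> SMOFPARSFA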

4.     Block Size

The above two options can be combined to run multiple request loads in parallel, which creates multiple outbound queues. Both the source and target systems’ processing performance tends to depend on a common parameter at the adapter level, the block size. The number of objects in each queue is determined by this block size. Let’s analyze this with an example.

Take the earlier example of replicating 10 million business partners from SAP CRM to SAP ECC through 20 parallel requests. You create 20 requests, each containing 500,000 business partners. Before running the request load, set the block size at the BUPA_MAIN adapter level to 1,000. These 20 requests, when run in parallel, fill multiple queues, each containing 1,000 business partners. It is not advisable to make the block size greater than 1,000, as the commit size of the logical unit of work (LUW) increases, which in turn can have knock-on effects on performance. Also, when mass changes are made to business partners in the system, the block size of 1,000 means that 1,000 business partners are bundled into one LUW and sent to the target system.
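
A quick back-of-the-envelope check of the example, as a purely illustrative Python sketch (the numbers are the ones used above):

# Illustrative sketch only: how the block size translates into LUWs (queue
# entries) for the 20 x 500,000 business partner example above.
requests             = 20
partners_per_request = 500_000
block_size           = 1_000          # BUPA_MAIN adapter block size

luws_per_request = partners_per_request // block_size   # 500 LUWs per request
total_luws       = luws_per_request * requests           # 10,000 LUWs in total

print(f"LUWs per request: {luws_per_request}")
print(f"Total LUWs      : {total_luws}")
# A larger block size gives fewer but bigger LUWs, i.e. a larger commit per
# LUW, which is why values above roughly 1,000 are not advisable here.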

The parameter values mentioned so far are just examples. Carry out multiple runs with varying values to determine the appropriate number of parallel requests, the required number of queues, and the adapter block size. The number of parallel queues generated should also be kept under control: too many queues slow down system performance and can cause resource bottlenecks by occupying the available work processes.

5.     Queue Naming Convention

The queue scheduler processes queues on a first in, first out basis. If an initial load or mass delta changes create many queues, processing them takes longer with the available resources (work processes). The next step is therefore to control the creation of individual queues through a queue naming convention: you define how many positions of the object ID or queue ID are relevant for the queue name. In SAP CRM, the queue naming parameter is maintained in table SMOFQFIND; in SAP ECC it is maintained in table CRMQNAMES. The field LENGTH controls the number of relevant positions for the queue name and the field FLDOFFSET sets the starting position. Let's walk through an example to understand this in detail.

Take the same scenario as above: an initial load is run in SAP CRM to replicate business partners to SAP ERP, and thousands of queues are created. With the maximum number of parallel queues set to 50, only 50 queues are processed by the queue scheduler at any given time; the rest wait for their predecessors to be processed.

The queues are generated as:

CRI_CSA_BUPA_INITR3AU00

CRI_CSA_BUPA_INITR3AU01

CRI_CSA_BUPA_INITR3AU02

… until

CRI_CSA_BUPA_INITR3AUNN

Now change the queue naming convention for business partners. As shown in Figure 5, the Field Offset is changed to 9 and the Internal Length to 1, and the initial load is processed again.

Figure 5: Parameter for naming queues

The queues are generated as:

CRI_CSA_BUPA_INITR3AU00

CRI_CSA_BUPA_INITR3AU01

CRI_CSA_BUPA_INITR3AU02

… until

CRI_CSA_BUPA_INITR3AU09

Now only 10 queues are generated instead of thousands. Since the Field Offset is set to 9 and the Internal Length to 1, the system first generates 10 unique queues. After that, business partners that would belong to a new queue are added to the existing queue whose last digit matches.

When CRI_CSA_BUPA_INITR3AU10 would have to be created as a new individual queue, the system does not create it. Instead, the business partners of this queue are written to queue CRI_CSA_BUPA_INITR3AU00, since the last digits of the two queue names match.
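
The folding of queue suffixes can be mimicked as follows, in a purely illustrative Python sketch of the naming behaviour described above (it simplifies the Field Offset 9 / Internal Length 1 setting to "only the last digit of the suffix counts" and is not the actual CRMQNAMES/SMOFQFIND logic):

# Illustrative sketch only: with Field Offset 9 and Internal Length 1, only
# the last digit of the running suffix stays relevant, so at most 10
# distinct queue names exist and higher suffixes fold back onto them.
QUEUE_PREFIX = "CRI_CSA_BUPA_INITR3AU"

def effective_queue_name(suffix):
    return f"{QUEUE_PREFIX}{suffix % 10:02d}"

print(effective_queue_name(3))    # CRI_CSA_BUPA_INITR3AU03
print(effective_queue_name(10))   # CRI_CSA_BUPA_INITR3AU00 (folds onto ...U00)
print(effective_queue_name(27))   # CRI_CSA_BUPA_INITR3AU07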

This ensures that thousands of queues are not created, thereby improving the queue processing time. With the middleware parameters fine-tuned for a mass load, the system parameters should also be fine-tuned to handle the middleware load; resource allocation and logon group settings play a critical role here.

6.     Hardware Resources Allocation

If resource allocation is not managed properly during an initial load, processing of queues can occupy the system resources and eventually bring down the system, not allowing users to log in. To help avoid this, you can define a remote function call (RFC) server group in both the source and target systems, allocate resources to it, and assign this RFC server group to the queue scheduler.

This ensures queues are processed only through the resources allocated to the RFC server group and are not spread across the system. Maintain the allocation of resources using transaction RZ12 (Figure 6). Resources can also be allocated from more than one application server. Usually if more application servers are present, it is a good practice to use more of those resources and use fewer resources from the central instance.

Figure 6

Explanations of each of these parameters shown in Figure 6 can be found in SAP Note 74141. These parameters should be fine-tuned based on load type. For instance, during an initial load when there are fewer users accessing the system, middleware can use most of the work processes. During an ongoing delta load, the resources should be well balanced between the middleware and online users.

Before changing the server groups in the queue scheduler, confirm that the scheduler status is inactive (Figure 7). If there are no pending queues to be processed, the scheduler will be in inactive status. If there are pending queues to be processed, deregister the queues by clicking the toolbar Deregistration button to make the scheduler inactive.

Figure 7: RFC server group resource allocation

With the configuration settings mentioned so far, you can manage the middleware for various types of load (initial/delta/request). The parameter values can vary based on several factors and should be verified by performing multiple test runs of the data exchange; they should also be fine-tuned in line with database performance.

Answers (4)

former_member320292
Active Participant

SMOFPARSFA - Maximum parallel processes - it seems to help in setting up multiple parallel runs for the download.

Hi Naveen,

Your update was really helpful, but I am unable to view the pictures/figures that you added.

It would be great if you could attach those figures.

Thanks

Ravi Kummitha

former_member186543
Active Contributor

Hi Ramakanth,

You should increase the block size to around 200 for this scenario. You can do so in transaction R3AC1 for object CUSTOMER_MAIN.

This way a single BDoc will carry data for 200 customers, which helps performance by creating fewer BDocs.
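
For a rough feel of the effect, here is a purely illustrative Python sketch based on the roughly 1 million customers mentioned in the question (200 is the block size suggested above):

# Illustrative sketch only: fewer but larger BDocs as the CUSTOMER_MAIN
# block size (transaction R3AC1) grows.
import math

customers = 1_000_000
for block_size in (1, 100, 200):
    bdocs = math.ceil(customers / block_size)
    print(f"block size {block_size:>3}: ~{bdocs:>9,} BDocs")
# block size   1: ~1,000,000 BDocs
# block size 100: ~   10,000 BDocs
# block size 200: ~    5,000 BDocs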

/Hasan

former_member205429
Contributor

Hi Ramakanth,

Please do a request load for a selective range of records. Say the BP table has 10K records:

Create multiple request loads on table BUT000, each covering around 1K BPs.

Likewise, create 10 such requests.

It works!

Former Member

I am looking to see if there is any way we can trigger parallel jobs without having to trigger multiple request loads.

thanks,

Ramakanth