
Replication of Measuring points from Backend to Middleware

Former Member

Hi everyone,

We deleted the measuring points on the middleware system. To replicate them from the backend to the middleware system, we followed Note 601548:

1. Checked Type of Sync BO

T01 - Timed 2 way and T51 – Backend Driven 2 way

2. MEREP_PD --> set the "Enabled" flag to off

3. SE37 --> MEREP_RDB_T01_RESET --> MAM25_041 (on the middleware)

4. SE38 --> RALM_ME_MEASP_FULL_DOWNLOAD_SD

We checked the box "Complete Update Deletion".

Date since: 01.01.1998 (a long time ago)

Section Variant: Our Variant for Measuring Points

5. After that, I verified the information on the middleware:

SE16

MEREP_207

Scenario: MAM25_041

STRUCTID: TOP

Clicked "Number of Entries"

And I got only a few records.

6. I compared the number of measuring points on both systems, and the difference is huge.
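For illustration only (plain Python, not SAP code; the measuring-point IDs are invented), the comparison in step 6 amounts to a set difference between the backend IDs and the IDs that actually arrived in MEREP_207:

```python
# Illustrative sketch: find measuring points present in the backend
# but missing from the middleware. All IDs are hypothetical.

def missing_in_middleware(backend_ids, middleware_ids):
    """Return the IDs present in the backend but absent from the middleware."""
    return sorted(set(backend_ids) - set(middleware_ids))

backend = ["MP-0001", "MP-0002", "MP-0003", "MP-0004"]
middleware = ["MP-0001", "MP-0003"]

print(missing_in_middleware(backend, middleware))
```

The result shows exactly which measuring points never replicated, which is more useful than comparing raw counts alone.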

We are getting the following error on the Middleware system:

RFC_MAM 020 C STORAGE_PARAMETERS_WRONG_SET


After we got the error, we restarted the system to refresh memory, but we still have the same problem.

Can anyone tell me what we should do?

Thank you very much

Xiomara

Accepted Solutions (1)


Former Member

Hi!

We ran the job in background mode, and it got a little further. But now we get the following error on the backend system:

Runtime Error TSV_TNEW_PAGE_ALLOC_FAILED

Date and Time 07.02.2008 09:16:57

We will apply Note 104080 and let you know the result.

Any other ideas are welcome.

Thank you very much for your help

Have a nice day

Xiomara

Former Member

Hi Xiomara,

this is also an error related to the amount of memory used by the job. Please check whether the notes for mass data replication are in the system and activated. They make the job send the data in chunks, so the memory error should go away.

Regards,

Oliver

Former Member

Hi Oliver,

Regarding your reply below, I don't think the mass data notes make any difference, as server-driven replication is used.

> this is also an error related to the amount of memory used by the job. Please check whether the notes for mass data replication are in the system and activated. They make the job send the data in chunks, so the memory error should go away.

As Preetham was saying, if there are many measurement points in the backend, you can send them in batches. However, you need to create a set of smaller variants in the report "RALM_ME_MEASURMENT_POINT_LIST" and give them as input to the program "RALM_ME_MEASP_FULL_DOWNLOAD_SD".
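To illustrate the batching idea in plain Python (not SAP code; the measuring-point range and batch size are invented), each sub-range would correspond to one of the smaller selection variants:

```python
# Sketch: split one large selection (here a numeric measuring-point
# range) into smaller sub-ranges, each of which would become its own
# report variant. The bounds and batch size are hypothetical.

def split_range(low, high, batch_size):
    """Yield (low, high) sub-ranges covering [low, high] inclusive."""
    start = low
    while start <= high:
        end = min(start + batch_size - 1, high)
        yield (start, end)
        start = end + 1

for lo, hi in split_range(1, 25000, 10000):
    print(f"variant covering measuring points {lo}..{hi}")
```

Running each smaller variant in turn spreads the download over several jobs instead of one memory-hungry run.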

The important thing is that the data returned when you execute these variants should be a subset of the data returned by the variant maintained in SPRO (transaction ALM_ME_GENERAL).
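A tiny Python sketch of that subset condition (illustrative only; the set contents are invented):

```python
# Every smaller variant's result set must be contained in what the
# SPRO (ALM_ME_GENERAL) variant returns. All IDs are hypothetical.

spro_variant = {"MP-01", "MP-02", "MP-03", "MP-04"}
small_variants = [{"MP-01", "MP-02"}, {"MP-03", "MP-04"}]

consistent = all(v <= spro_variant for v in small_variants)
print(consistent)  # True only if no variant selects data outside the SPRO variant
```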

Hope this helps.

Best Regards,

Subhakanth

Answers (2)


Former Member

Hi Oliver,

After we ran the job in background and re-enabled the flag for MAM25_041, we started to get data on the middleware system, even though we got the error on the backend, as I said in my earlier message.

We already applied Note 104080 as well.

Thank you very much for your help

Former Member

Hi,

The problem has more to do with the amount of data. The maximum number of items a single SyncBO instance can hold is 10,000, and you also get such errors if the size of the internal table exceeds the memory limit. The only suggestion I would give is to send the data in smaller packages. In that case, the replication jobs will run for a longer duration.
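As an illustration of the "smaller packages" idea (plain Python, not SAP code; the package size mirrors the 10,000-item limit mentioned above, and the item list is invented):

```python
# Sketch: break a large item list into packages that each stay under
# the per-instance limit, so no single package exhausts memory.

def chunked(items, package_size=10000):
    """Yield successive packages of at most package_size items."""
    for i in range(0, len(items), package_size):
        yield items[i:i + package_size]

items = list(range(23000))          # hypothetical item count
sizes = [len(p) for p in chunked(items)]
print(sizes)                        # package sizes, each <= 10000
```

The trade-off is exactly what Preetham notes: more, smaller packages mean the replication jobs run longer overall.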

Thanks..

Preetham S

Former Member

Hi Xiomara,

to solve that, I have a few questions:

1.) Have you set the ENABLED flag for the MAM25_041 BO back to enabled? (You need to do that prior to the replication.)

2.) Run the replicator job on the backend. The FULL DOWNLOAD... is not necessary; usually just start the job. This will transfer data to the table MEREP_DELTABO in the middleware. Please start the job in background mode - do not run it in the UI! You should see a huge number of items in the DELTABO table after a short while. These are then processed in chunks and transferred to MEREP_207. In DELTABO you see only the KEY of the items, so if you search for MAM25_041 in DELTABO, that gives you the number of items already transferred. A job will then transfer these items from DELTABO to 207 - this job can be monitored in SM36 - and it usually starts after about 5 minutes. The full download with delete gives you no benefit, because your MEREP_207 is already cleared anyway. So save yourself the time and the data overhead.
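A toy Python model of the two-stage flow described above (table contents and chunk size are invented; this is only an illustration, not SAP code):

```python
# Stage 1: the replicator writes only item KEYS into MEREP_DELTABO.
# Stage 2: a follow-up job drains them in chunks into MEREP_207.

deltabo = [("MAM25_041", f"KEY{i:05d}") for i in range(12)] \
        + [("MAM30_010", "KEY99999")]      # a key from another SyncBO

# Counting pending items for one SyncBO, as with a MAM25_041 filter
# in SE16 on MEREP_DELTABO:
pending = [key for bo, key in deltabo if bo == "MAM25_041"]
print(len(pending))

# The transfer job moves the queue chunk by chunk into MEREP_207:
merep_207 = []
CHUNK = 5                                   # hypothetical chunk size
while pending:
    chunk, pending = pending[:CHUNK], pending[CHUNK:]
    merep_207.extend(chunk)

print(len(merep_207))                       # all keys end up in 207
```

Watching the MAM25_041 count in DELTABO shrink while MEREP_207 grows is the monitoring idea Oliver describes.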

If you run the replicator job in the backend, it uses the selection criteria you have specified in SPRO to download the items. So if items are missing, please check the selection criteria as well.

Running the job in background should avoid the error. If not, you can additionally check Note 1040480 - but I guess the problem you have is not covered by that one.

Well, let's see if you still have the error after you follow that route. If so, please come back to us.

Hope this solves the issue!

Regards,

Oliver