PUSH or PULL: which is more efficient?

Former Member

Hello,

I have a requirement to mobilize data from a table that has 2 million records. I have thought it through, and here are the options I see:

1. Create my Backend Adapter as DOE-triggered and pull these records. Problem: the time taken to process this many records will be very high (days?)

2. Create my Backend Adapter as backend-triggered and PUSH these records from the backend. Problem: it will still take a lot of time (4-6 hrs to compare and update). Also, I am not sure whether this backend PUSH will send change messages to all the devices as well

3. Use change pointers in the backend to identify what has been created/changed and PUSH only the selected records (see the rough sketch after this list). Problem: if any other application reads this object, the change pointer will be lost

4. Create an IDoc when an object is created/changed and send it to the mobile system, where we have to write an inbound handler to process the IDoc

5. Create an IDoc when an object is created/changed and send it to the mobile system as XML, where we will parse the XML and PUSH the data into the data object

6. Start a workflow when the object is changed, and have the workflow call the PUSH function to PUSH that instance in a background step. Problem: if the connection to the mobile server is not working, I will lose this update
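
For option 3, this is roughly what I have in mind - a sketch only, where ZMOB_PO is a placeholder message type reserved for the mobile scenario, and I am quoting the standard change pointer read function from memory:

* Read the unprocessed change pointers for the placeholder message
* type ZMOB_PO and push only the referenced instances.
DATA: lt_cp TYPE STANDARD TABLE OF bdcp,
      ls_cp TYPE bdcp.

CALL FUNCTION 'CHANGE_POINTERS_READ'
  EXPORTING
    message_type                = 'ZMOB_PO'
    read_not_processed_pointers = 'X'
  TABLES
    change_pointers             = lt_cp.

LOOP AT lt_cp INTO ls_cp.
  " ls_cp-cdobjid carries the key of the changed object: PUSH only
  " this instance instead of scanning all 2 million records.
ENDLOOP.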

I want to know which option, in your opinion, puts the least load on the backend and the mobile server while providing the best performance.

Regards,

Shubham

Accepted Solutions (1)

Former Member

How about this:

Write an API class that is tasked with doing ALL DB interaction (create/update/delete) and expose it. Don't expose the DB directly, thereby ensuring that all 'applications' change data only through an API call.

In the API class, you will know exactly what is changing, so you could queue PUSHes to DOE (in a tRFC queue) from there.
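
Roughly along these lines - just a sketch, where Z_DOE_PUSH_DELTA and the destination MOBILE_DOE_RFC stand for whatever push function module and RFC destination you actually set up:

* All application code calls this class; nobody touches the DB tables
* directly. Z_DOE_PUSH_DELTA / MOBILE_DOE_RFC are placeholders.
CLASS lcl_order_api DEFINITION.
  PUBLIC SECTION.
    METHODS update_order IMPORTING iv_order_key TYPE ebeln.
ENDCLASS.

CLASS lcl_order_api IMPLEMENTATION.
  METHOD update_order.
    " ... perform the actual database update for this instance ...

    " Queue the delta as a tRFC call: it leaves only on COMMIT WORK
    " and is retried automatically if the mobile server is down.
    CALL FUNCTION 'Z_DOE_PUSH_DELTA'
      IN BACKGROUND TASK
      DESTINATION 'MOBILE_DOE_RFC'
      EXPORTING
        iv_object_key = iv_order_key.

    COMMIT WORK.
  ENDMETHOD.
ENDCLASS.

Since the push is registered in the same LUW as the DB change, a delta is never lost; the tRFC layer takes care of the retries.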

That's about it. You get guaranteed delivery of deltas, and minimum processing.

As to devices: there really is no difference in behavior, i.e. the devices are never affected by how you calculate deltas, push or pull.

Edited by: Arjun Shankar on Jan 8, 2010 5:23 AM

Former Member

Hi Arjun,

I am not sure I understood the proposed solution. I am not clear on how to route every update through my own API, since we are dealing with SAP standard data - purchase orders, invoices, material movements - which is updated by SAP standard transactions.

Any views on using IDocs/workflows?

Regards,

Shubham

Former Member

I did not know that. So this means you have no control over who updates the instances.

I don't know what an IDoc is. Maybe I should read up on it.

Anyway, are there exits available which are called upon changes to the data you are interested in? If yes, then these will be guaranteed to be called each time data is updated.

If there are such exits, then you simply need to PUSH your delta from the exit implementation (which you will have to write).

What I am trying to say is: first find the place where you can listen in on any change made to the data. THEN push deltas from there to the DOE in a way that is guaranteed to arrive. One way is queues: post the delta to a queue (from your exit). When the queue is processed, your deltas are sent in order and are guaranteed to reach the DOE.
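
As a sketch (queue name, destination and Z_DOE_PUSH_DELTA are placeholders again; TRFC_SET_QUEUE_NAME is, if I remember the name right, the standard call for putting the background task into a named outbound queue):

* Called from inside the exit/BAdI that fires when the object changes.
FORM push_delta USING pv_object_key TYPE ebeln.

  " Name the outbound queue so deltas are serialised and sent in order (qRFC).
  CALL FUNCTION 'TRFC_SET_QUEUE_NAME'
    EXPORTING
      qname = 'MOBILE_DELTA_PO'.

  " Registered now, sent when the surrounding transaction commits.
  " If the mobile server is unreachable, the entry waits in the queue
  " and is retried - nothing is lost.
  CALL FUNCTION 'Z_DOE_PUSH_DELTA'
    IN BACKGROUND TASK
    DESTINATION 'MOBILE_DOE_RFC'
    EXPORTING
      iv_object_key = pv_object_key.

ENDFORM.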

Former Member

I would have loved it if life were that simple.

The problem is that these standard objects can get updated from a lot of places. The bigger problem is that customers normally have their own implementations of the BAdIs and exits in these standard objects. So if I say that we will also put some code in these exits, it might not be an acceptable solution for them, as it will involve a lot of testing effort and risks breaking something that is working perfectly.

Regards,

Shubham

Former Member

This is an interesting problem. What you are saying is that you don't really have a way to know for sure that you have caught every single delta in the backend.

In any case, avoid pulls at all costs.

I just googled 'IDoc'. Looks like the kind of thing you should be using, at first glance. I'll get back to you when I've learned more. Or hopefully someone else on this forum will be able to give a good solution!

Answers (1)

Former Member

DOE-triggered is a no-no when it comes to data volumes of this size.

Every single time you do a delta, ALL your records will get fetched and compared.

=> If you configure pulls on a daily basis, you'll be reading 2 million records every day, to fetch a much smaller number of deltas.

Edited by: Arjun Shankar on Jan 8, 2010 5:19 AM