on 10-10-2008 12:14 PM
Hi
I am currently implementing RDA for 0CO_OM_CCA_9. There is a safety delta for this data source (SAP recommends 2 hours).
I would appreciate your input on the questions below:
1. The daemon reads the data from the delta queue, so what is the advantage of implementing RDA when there is a safety delta?
2. In this scenario, what will happen to the data extraction?
Thank you
PKC
Hi PKC,
We would like to use RDA on several CCA data sources.
In particular we want to set up a RDA 'proof of concept' for data source 0CO_OM_CCA_9.
When I check the datasource properties, it says: 'Real-Time Data Acquisition not supported'.
Did you manage to implement RDA for this source?
Can you please share your findings with me?
Thanks a lot!
Kind regards
Hi Mr V.,
Thanks for the information!
This program makes the data source RDA-enabled from a technical point of view.
But I am very interested in what to do with loads that take more time than the frequency of the RDA loads. Also, which approach is used most often: setting up a separate 'RDA flow' next to the original CCA data flow, or making the original CCA data source itself RDA-enabled?
Thanks in advance for the help.
Kind regards,
Bart
Thanks for your reply Simon.
Does this mean you don't use RDA?
Currently we are doing nightly jobs, but the aim is to have additional RDA.
Is there anybody who really uses RDA on CCA data sources?
My main concerns are:
- Best practice for making it RDA-enabled (the data sources themselves, or a 'copy' of these data sources?)
- The scheduling of the loads (every few minutes -> do the loads interfere with each other?)
I have the knowledge from a theoretical point of view, but I would like to exchange thoughts on the best practices.
Thanks!
Kind regards,
Bart
I currently have an hourly CCA9 chain with the overlap setting set to X - this means we have to use a DSO to resolve the duplicates, because the safety is set to timestamp low.
As mentioned in my posts above, it is going to prove near impossible to do RDA for CCA9, due to the timestamp concept and the fact that the timestamp originates from the posting on the app server, not the database server.
My current problem is that with an hourly load I have no time to load aggregates - although I am moving to BIA in the near future.
So the plan is: daily loads into BIA, then a multicube across a remote cube to get up-to-date data.
The RDA will have problems due to the timing differences between the app server and the db server (because it's a timestamp extractor).
The safety limit is there for a reason - large CO jobs like assessment and distribution can take some time to process through from the app server to the database server.
The safety is there to ensure that these jobs get fully through to the database.
Note that the timestamp is always the time of the app server, NOT the time the record hits the database.
You can use the 'most recent' flag in BWOM_SETTINGS to go timestamp low in the extractor - but using this means you have to use a DSO to resolve duplicates, which then defeats the whole object of having RDA.
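To make the duplicate problem concrete, here is a minimal sketch (in Python, with illustrative field names - not SAP internals) of why an overlapping timestamp-low extraction forces you into a DSO-style overwrite: the same document arrives in two successive loads, and a keyed active table collapses it back to one image.

```python
# Sketch: overlapping extraction windows deliver the same document twice;
# a DSO in overwrite mode (keyed by document number) resolves the duplicate.
# All names and values here are illustrative assumptions.

def dso_activate(active_table, new_records):
    """Overwrite by key, like a standard DSO in overwrite mode."""
    for rec in new_records:
        active_table[rec["doc_no"]] = rec   # last image wins
    return active_table

active = {}
# The 13:00 load and the overlapping 13:10 load both deliver document 100.
dso_activate(active, [{"doc_no": 100, "amount": 50.0}])
dso_activate(active, [{"doc_no": 100, "amount": 50.0},
                      {"doc_no": 101, "amount": 75.0}])
print(len(active))  # 2 - the duplicate of document 100 was resolved
```

This is exactly the extra persistence layer that an RDA scenario is supposed to avoid, which is why the overlap setting works against RDA here.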
I have spent about 2 years (on and off) on this problem - and am prototyping a different solution.
Currently I am using virtual cubes to read the PCA summary tables for real-time cost data.
I also use CCA9 and extract every hour with 'most recent' switched on.
I am prototyping a solution that works with 'most recent' switched OFF, using a multicube of last night's data + the data not yet sent, via a virtual cube (i.e. running a SQL against COVP from last night's timestamp up to NOW, ignoring the safety).
Luckily the CCA9 extractors don't have the concept of change records, so no DSO is required.
This will work - I just haven't got around to putting it into Dev.
The benefit of this solution over RDA is that you can still load the accelerator or your aggregates each night and tune your cubes.
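The hybrid idea above can be sketched as a simple union: a nightly-loaded store plus a live read of everything posted since the last load timestamp. This is only a toy model of the concept (the timestamps, table shapes, and function names are my assumptions, not the real COVP structure):

```python
# Hedged sketch of the multicube idea: last night's loaded data unioned
# with a virtual-provider read of postings newer than the last load
# (ignoring the safety interval). Purely illustrative.

LAST_LOAD_TS = 20241001  # assumed timestamp of last night's delta load

nightly_cube = [  # already loaded; aggregates/BIA are built on this
    {"ts": 20240930, "cost_center": "CC1", "amount": 100.0},
]

covp = [  # source line items read live by the virtual cube
    {"ts": 20240930, "cost_center": "CC1", "amount": 100.0},
    {"ts": 20241002, "cost_center": "CC1", "amount": 25.0},  # not yet loaded
]

def virtual_read(table, since):
    """Live read of postings newer than the last load - the 'SQL against
    COVP from last night's timestamp up to NOW' part of the idea."""
    return [r for r in table if r["ts"] > since]

def multicube(nightly, virtual):
    """Union the two parts, as a MultiProvider would."""
    return nightly + virtual

result = multicube(nightly_cube, virtual_read(covp, LAST_LOAD_TS))
print(sum(r["amount"] for r in result))  # 125.0 - up-to-date total
```

The design point is that the heavy, tunable store only changes once a night, while the small live slice covers the gap up to now.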
Best to show the safety problem with an example.
Say you did an assessment run of 1,000,000 items.
The app server "posted" the transaction at 12:59, and this is the CPU date in the function module that gets passed to the database server to update COBK.
Now it takes some time for all the LUWs of that posting to actually insert into the database.
Imagine you had a safety of 10 minutes and you ran a job at 13:00.
It wouldn't pick up the posting, because it only picks up postings up to 12:50.
Then the 13:10 run picks up postings until 13:00, so it should pick up our posting.
The low timestamp on this run will be 12:50 and the high will be 13:00.
Problem is, the database is getting hammered with 1,000,000 items and the database interface doesn't actually finish processing all the inserts until 13:15.
Then your 13:20 job kicks in - it picks up data between 13:00 and 13:10.
Guess what - your 12:59 record, even though it is now on the database, gets missed.
That's why the safety limits are there.
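That walkthrough can be simulated in a few lines. This is a toy model under my own assumptions (times in minutes since midnight, made-up record shape), not SAP's actual extraction logic:

```python
# Toy simulation of the safety-interval example above.
# Times are minutes since midnight; names are illustrative assumptions.

def extract(postings, low, high, now):
    """An extraction running at `now` picks up records whose app-server
    timestamp lies in (low, high] AND that are already on the database."""
    return [p for p in postings
            if low < p["cpu_ts"] <= high and p["db_commit_ts"] <= now]

# Assessment run: app server stamps the posting at 12:59 (779),
# but the million inserts only finish committing at 13:15 (795).
posting = {"doc": 4711, "cpu_ts": 779, "db_commit_ts": 795}

# 13:10 job, 10-minute safety: window (12:50, 13:00]. The timestamp is in
# the window, but the record is not yet on the database -> missed.
print(extract([posting], low=770, high=780, now=790))   # []

# 13:20 job: window (13:00, 13:10]. The record is committed by now, but
# its 12:59 timestamp is below the low timestamp -> missed forever.
print(extract([posting], low=780, high=790, now=800))   # []

# With a safety large enough that the (12:50, 13:00] window is only
# extracted at 13:20, the record is both in the window and committed.
print(extract([posting], low=770, high=780, now=800))   # [posting]
```

The simulation shows why a too-small safety silently loses records rather than failing loudly: the window has already moved on by the time the data is committed.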