
0FI_GL_4 delta scenario

Former Member

Hello all,

Could someone please help me with the following scenario:

We have been pulling line-item data into BI from FI-GL using 0FI_GL_4. So far we have been loading into a DSO and then into a cube for fast reporting.

Since maintaining the same records in both the cube and the DSO is not good practice (it ties up about 2 TB of extra storage for two years of data at any time), we would like to remove the DSO entirely and load the line items straight into the cube.

My questions are:

Does 0FI_GL_4 support delta loads directly into a cube? In other words, does this DataSource's delta deliver the new value additively, or just the changed image?

If it does not, could you tell me the best practices for improving reporting performance when reporting directly on the DSO (with drill-down from a summary cube, of course)? For example splitting/partitioning, indexing, etc. Please also let me know the expected performance gain compared to reporting on a cube.

The other way around: could someone tell me what the issues are in designing and developing a generic DataSource with a delta that supports delta updates directly into an InfoCube?

Hope you can help with these,

Any help would be greatly appreciated with points.

Thanks in advance

Accepted Solutions (1)


former_member207028
Contributor

Hi,

DataSource 0FI_GL_4 is delta-enabled; we can have delta functionality without a DSO and load the data directly into a cube.

But for detailed reporting you need the data in a DSO.

Data can be deleted from the change log table once all the loaded data has been confirmed to be correct.
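To illustrate why the DSO and its change log sit in the middle: the extractor delivers the new state of a line item (an after-image), but an additive target can only sum values. A DSO in overwrite mode turns the after-image into an additive delta by writing a before-image (the negated old value) plus the after-image to its change log. A minimal Python sketch of that idea (toy names and numbers, not SAP code):

```python
# Toy model: a DSO in overwrite mode turning after-images into additive deltas.

def activate(dso, after_images):
    """Overwrite the DSO active table and emit change-log records.

    Each changed key yields a before-image (old value negated) plus the
    after-image, so a downstream additive target nets out correctly.
    """
    change_log = []
    for key, amount in after_images:
        if key in dso:
            change_log.append((key, -dso[key]))  # before-image
        change_log.append((key, amount))         # after-image
        dso[key] = amount                        # overwrite active table
    return change_log

dso = {}
cube = {}

# Initial load: document 4711 posted with amount 100.
for key, amt in activate(dso, [("4711", 100)]):
    cube[key] = cube.get(key, 0) + amt

# Delta: the document is changed to 80; the extractor sends after-image 80.
for key, amt in activate(dso, [("4711", 80)]):
    cube[key] = cube.get(key, 0) + amt

print(dso["4711"])   # 80 - the DSO holds the current state
print(cube["4711"])  # 80 - the cube nets to 80 via -100 + 80
```

Once the change-log records have been posted downstream and verified, they carry no further information, which is why they can be deleted as described above.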

Regarding generic DataSources: it is better to use an SAP-delivered DataSource than a generic one.

We can even create a generic DataSource with the required fields and enable delta by selecting a delta process.

Hope this helps

Regards

Daya Sagar

Former Member

Hello Sagar,

Thanks for your response,

Are you sure 0FI_GL_4 is capable of bringing deltas into a cube without a DSO? When I checked help.sap.com, it said this is not possible.

So could you please elaborate on how that is possible?

You said that for detailed reporting we need to use a DSO. What is that supposed to mean? As I mentioned in my question, we would like to load document-level details into the cube, and the cube is meant to be used for line-item level reporting in the first place. Could you comment on this again?

Thanks once again

Former Member

I have a few questions before I can answer yours:

- are you using BW 3.x or Netweaver BI?

- what is the distinction in data level/aggregation level between your DSO and your cube?

- is the main problem the amount of storage needed?

Next, a few points from the theory:

- a delta mechanism only takes care of delivering new/changed records into a structure (0FI_GL_4 in this case) and has nothing to do with the data target (or am I missing something here?).

- you should take a look at your information needs when modelling a cube.

- normally an ODS contains the details and a cube has higher aggregation level.
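The point about the data target matters: if the delta delivers after-images (the full new value of a changed record) and you post them additively into a cube, the changed records get double-counted. A small Python sketch of that failure mode, and of why an overwrite-capable target avoids it (illustrative only, not SAP code):

```python
# After-images posted additively: a changed record is counted twice.

def load_additive(cube, records):
    """Post records into an additive target (InfoCube-style update)."""
    for key, amount in records:
        cube[key] = cube.get(key, 0) + amount

cube = {}
load_additive(cube, [("doc1", 100)])  # initial posting
load_additive(cube, [("doc1", 80)])   # after-image of a change to 80

print(cube["doc1"])  # 180, not the correct 80

# An overwrite-capable target (a DSO) instead replaces the value:
dso = {}
for key, amount in [("doc1", 100), ("doc1", 80)]:
    dso[key] = amount

print(dso["doc1"])  # 80 - the correct current state
```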

Former Member

Read this - and all will become clear

/message/6141585#6141585 [original link is broken]

(Two questions about the same thing in the same day!)

Former Member

Hi,

Thanks for your responses,

Well Marco, we have a BI 7.0 system.

I am aware of the DSO functionality, but the reason for considering removing it is to improve reporting speed.

As you all know, the limitation with the FI DataSources is that we can't do a delta update into an InfoCube. So we are looking for the best trade-off.

Any insights are welcome.

Thanks

Former Member

A few remarks and a few new questions:

- there are limitations to delta and InfoCubes, as Simon explains; that is correct.

Again I ask: what is the difference between the DSO and the cube? Is it aggregation, or is the cube a copy of the DSO with all details, meant only for reporting?

Otherwise I would really consider using the DSO as the InfoProvider instead of the cube (maybe together with the BI Accelerator).

Another option is rebuilding the cube each time (a full load with selection instead of a delta), but given the amount of data you have to process, I think that is hardly acceptable.

Marco

Former Member

Thanks for your inputs,

The DSO holds line-item level details, the equivalent of about 1 TB of records per year. This is why we thought of replacing the DSO with a line-item level cube, which would improve reporting performance.

Since this scenario does not support delta, we have decided to keep the DSO.

Thank you so much

Former Member

Hi

In addition to the other remarks, I would like to say that it is not recommended to take the data directly into a cube without an ODS for this DataSource. I am not sure about your safety interval at this point (hoping it's one day right now), but I'm sure users are going to require immediate deltas in the future, which makes ODS staging imperative. As far as reporting goes, you can push this to a cube and start reporting on that, which would increase your load time but still reduce your reporting time. The trick is getting the right balance for what you need.

Gokul

Former Member

I am actually thinking of creating my own GL4 DataSource which ignores the AEDAT read, as the information I require is at a low level of granularity and doesn't include the fields that trigger an AEDAT insert.

Then I can use the hybrid provider concept, i.e. a daemon doing minute-based extractions into a "daily cube"; then, during the night, clear the daily cube and load the normal GL DataSource into the DSO and then into a BIA cube, with a MultiCube on top of the daily cube and the BIA cube.

I still have to decide how, in a 24/7 operation, I can clear the daily cube while the BIA cube is loaded, to stop double counting.

Former Member

Hello Simon,

Your solution is interesting; I need to give it some thought.

Ours is a big company, and we really can't work with day/night loading and deleting data from the cube; that might not work for us, I think.

Anyway Good Luck to you,

Former Member

I am in a worldwide 24/7 operation as well. The way I am thinking is loading the data but flipping the read pointers to "not available", and then, after the delete job, swapping the read pointer back to "available" (it may take about a second to do).
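The read-pointer idea above can be sketched as a validity flag consulted at query time: the combined view always reads the consolidated store, and includes the daily store only while its flag says "available". A toy Python version (the class and field names are made up for illustration, not SAP internals; the flag flip is the cheap, near-instant step):

```python
# Toy model of a MultiProvider-style union gated by a read-pointer flag.

class Store:
    """A data target with a readability flag (the "read pointer")."""
    def __init__(self, rows=None):
        self.rows = list(rows or [])
        self.readable = True

daily = Store([("doc9", 50)])    # intraday deltas
bia = Store([("doc1", 100)])     # nightly consolidated data

def query(stores):
    """Union of all stores whose read pointer is set to available."""
    total = {}
    for s in stores:
        if not s.readable:
            continue
        for key, amount in s.rows:
            total[key] = total.get(key, 0) + amount
    return total

# Nightly switch: before the consolidated load, hide the daily store so the
# same documents are never read from both sides at once.
daily.readable = False              # flip pointer: effectively instant
bia.rows.append(("doc9", 50))       # doc9 now lives in the BIA cube
daily.rows.clear()                  # clear the daily cube
daily.readable = True               # flip pointer back

print(query([daily, bia]))  # {'doc1': 100, 'doc9': 50} - no double count
```

Queries running during the switch see doc9 only on one side at a time, which is exactly the double-counting guarantee the pointer swap is meant to give.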

I am trying to get hybrid InfoProvider scenarios from SAP, as I do believe they have something in this area which will do it.
