Script Logic: Database records and sent records

Former Member

Hi.

Is there a way to access and compare both the value of a sent record and the corresponding old value in the database using script logic?

For that, I would also first need to know some details about how the script logic works:

Let's say I send cells using an Excel template and I have script logic defined on the corresponding cube.

Different theories...

1.) The modified cells are first written to the database, the scope is retrieved, and the script logic is run; a *COMMIT then sends the other "produced" data (generated by *REC records or modified values like ACCOUNT.MyAccount=...) from the script to the database.

2.) The script is called "directly", and the modified cells can be thought of as implicit *REC() records that will be sent together with the other script-generated data when a *COMMIT is found.
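For reference, a minimal sketch of the *REC / *COMMIT mechanism the two theories refer to (the member names APPLE and APPLE_CALC are hypothetical):

```
// Default-logic sketch -- APPLE and APPLE_CALC are hypothetical members
*WHEN ACCOUNT
*IS APPLE
// For each record in scope, generate a new record on a different account
*REC(FACTOR=2, ACCOUNT="APPLE_CALC")
*ENDWHEN
*COMMIT   // posts the generated records to the database
```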

Are there expressions with which I can access the old and the newly assigned value of a cell, and if so, what are they?

Thanks for your help!

Accepted Solutions (0)

Answers (4)

Former Member

I am using the MS version...

Yes, my question is more about accessing (in script logic) the old values of cells (in the database) and the new values (just sent from the Excel template) after a user clicks "send data".

Example: let's say there is a value of 10 for the "Apple" account in the database (other dimension members arbitrary).

A user now uses an Excel Input template to modify this value and puts 77, and then clicks on "send data".

Can I now use script logic to access (and compare) these values?

E.g. I would like to use something like (meaning I don't know if this is possible at all...)

Account.[Apple]

to access the "old" value in the database (which is 10)?

Similarly, I would like to use something like

Account.[#Apple]

to access the value the user just "sent to the script" (which is, after adding up all deltas, 77)?

The thing is: the scope of the account dimension is (more or less) "all base members", so I can't find out which value the user actually modified. That's why I'm trying to find out via this kind of workaround.

Edited by: HPC-Berater on May 4, 2009 1:28 PM


Former Member

To the best of my knowledge, there's no way to differentiate between the two in your script logic.

When you submit data, and there's default logic set up in the application, the sequence of events is

- there's some data already existing in the DB

- you open an input schedule, retrieving those existing values

- you key in numbers to an input schedule (either enter new values or update existing values)

- you send the submit instruction

- if you're using EVDRE, on the client side it's smart enough to determine which values have changed (based on what's in the EVDRE client-side cache), and only those values are submitted to the server

- on the server, it processes those submitted records, comparing the submitted value against what's currently in the database. If the values are different, it posts the delta to the write-back table. (this happens via the send governor, which adds another layer of complexity)

- after those records are posted, the default logic starts processing

- by the time the default logic scopes out its source data region, it has no way for the logic engine to differentiate between the data that existed at the beginning of this sequence, and the updated data. It's all treated as one.
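To make the delta-posting step concrete, using the numbers from the question:

```
// Delta-posting illustration (values from the example above)
// Existing DB value for the Apple account:  10
// Value submitted from the input schedule:  77
// Delta posted to the write-back table:     77 - 10 = 67
// Aggregated value reported afterwards:     10 + 67 = 77
```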

Another issue to consider (if you're wondering why it doesn't work the way you want it to work) is the possibility that other users are also changing the same data, at the same time. The architecture must be able to handle the situation where, between the first step above and the time the logic runs, the underlying data has been changed by some other user.

It "handles" this by being simple-minded about things. When it's time to start a logic task, it doesn't consider how it got into this state, it just takes whatever's currently in the DB and processes based on what's there -- everything that's there.

Former Member

Thanks for the detailed workflow!

If I understand correctly, this means the modified data from the Excel sheet is first written to the database, and once all the new values are there, the script logic runs. The script logic uses the scopes (all values it reads are the "updated" ones in the database), and no information about "old" values is accessible to it.

On the other hand, this also means there is no way, using standard script logic, to find out which account member(s) a user actually modified in an Excel template and sent (with EVDRE), because the scope for Account is by definition "all base members"? Maybe there are other functions (not EVDRE) for sending data that would let me find out which cell values a user modified?

Former Member

The default logic is executed whenever data is submitted, and the default scoping of that logic is the same regardless of whether EVDRE or EVSND is used to send the data.

Within the logic itself, you may choose to alter the dimensional scoping, using *XDIM_MEMBERSET and other techniques, but if you do nothing at all, the scoping is determined by the intersection of the data region which you submit -- across ALL dimensions in your application, except the Account-type dimension (whose scope is expanded to all base members) and the currency-type dimension (whose scope is focused on the LC member only).
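As a sketch, overriding the automatic scoping explicitly might look like this (dimension and member names hypothetical):

```
// Override the default scoping for selected dimensions
*XDIM_MEMBERSET ACCOUNT = APPLE,TIRES   // instead of all base members
*XDIM_MEMBERSET DATASRC = INPUT
```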

As far as I know, there's no way to limit the logic scope to particular accounts. I've worked around this limitation when necessary, by using a different dimension (datasrc is my favorite) to segregate the various accounts that users may submit, into specific datasrc's. Then you have some flexibility to manipulate the logic flow, based on datasrc. But in default logic it's very challenging to do this -- much much easier if the logic is run in batch mode, where you can control scoping much more carefully by using *SELECT's and validate logic at run-time.
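A rough sketch of the batch-mode pattern described above, with *SELECT resolving members at run-time (the property name GROUP and its value are hypothetical):

```
// Batch-mode logic: resolve the relevant datasrc members at run-time
*SELECT(%MYDS%, "ID", "DATASRC", "[GROUP] = 'INPUT'")
*XDIM_MEMBERSET DATASRC = %MYDS%
```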

On the app server, check the webfolders\AppSet\Application\PrivatePublications\YourUserID\TempFiles\ folder -- I think the filename is DebugLogic.log, but I'm not sure. This is the log file created by the default logic after you submit data, and is a good source for learning exactly what's going on. (All the other files with 4-digit numbers in the filenames are created by batch-mode packages, not default logic.) The default logic log gets over-written each time this user submits data and the logic runs.

Former Member

Thanks again. My only workaround so far is the one using another dimension (like your datasrc) to tell the script logic which account a user is modifying: a mapping like INPUT_Apples "tells" the script logic that the Apples account was modified and triggers one kind of processing, while INPUT_Tires tells it that the Tires account was modified and triggers another.
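Sketched in script logic, that mapping workaround might look like the following (all member names hypothetical):

```
// Route processing by the datasrc member the schedule writes to
*WHEN DATASRC
*IS INPUT_APPLES
*REC(ACCOUNT="APPLES_RESULT")   // processing for the Apples account
*IS INPUT_TIRES
*REC(ACCOUNT="TIRES_RESULT")    // processing for the Tires account
*ENDWHEN
*COMMIT
```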

Where can I post suggestions for changes for the next BPC version (or service pack)? From my point of view, treating the Account dimension's scope differently from the others creates the need for all these artificial workarounds. It would be nice if the Account scope behaved like the other dimensions' scopes (just use the modified members as scope, not all base members), if that change is somehow possible...

Former Member

Would there be a way using Visual Basic in the input schedule? As you mention, the "data send" only sends the values identified as "changed". Is there any way of using those values in the schedule right before the "data send"? Maybe we could then write the accounts of those records to a file and run a logic on it (just a thought).

About the automatic inclusion, I entirely agree with HPC. I guess that behavior may be handy if the logic often needs to include all or many members from the account dimension; in that case it's fine to have all members automatically added to the scope. But then it would be great to have some parameter to deactivate that behavior. I have tried SCOPE_BY, but for some reason it does not work with the account dimension: even when I explicitly say "I don't want all account members included in the scope", it still includes all the members when sending data from an input schedule.
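For reference, the kind of SCOPE_BY line being discussed, which lists the dimensions used to scope default logic (the dimension list here is hypothetical; as noted above, Account is still expanded to all base members regardless):

```
// Attempt to limit default-logic scoping to the listed dimensions only
*SCOPE_BY = CATEGORY,ENTITY,TIME   // ACCOUNT deliberately omitted, but still expanded
```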

Rafa

(PS: Tim, thank you for your explanation, perfectly clear!)

Edited by: Rafael Moreno on May 5, 2009 1:31 PM

Former Member

I certainly sympathize with you both... I guess over the years, I've learned to live with (or work around) the script logic imperfections. At a high level, I'll say that the initial glitz & allure of default logic has worn off, and I now try to push as much as possible into batch logic, where it's much easier to control flow & write things more efficiently.

For an enhancement request, the correct path is to file a message with SAP Support.

As Rafael says, if this change in account dim scoping were made in the logic engine, it would need to be a new option, not a complete change. There are thousands and thousands of script logic files in use today that would most likely break in awkward and unpredictable ways (and require rewriting) if the account dim scoping suddenly changed in a new service pack release.

But don't hold your breath. I haven't done a lot of research on this, nor am I a developer, but from what I recall from a couple of years ago, there are some architectural considerations within MS SQL Analysis Services that cause the account dimension to be handled differently from other dimensions, and for the most part those are appropriate considerations.

I think you're more likely to see SAP's development effort focused on the BPC Netweaver platform, particularly for these architectural-type enhancements.

Former Member

I don't think I completely understand your question, but hopefully this helps...

On BPC for MS, the underlying OLAP data storage architecture on SQL Server doesn't provide any capability to track "before and after" data, in any simple way.

If you think of a logic run as a transaction, you're essentially asking to see the pre-transaction values separated from the transaction results. While that's a perfectly sensible thing to ask for, the OLAP architecture doesn't provide it, since OLAP isn't really designed for transactional types of processing.

Once the input schedule is submitted, and/or the logic is completed, the results are posted to the write-back partition of the fact table. If you happen to have an empty write-back table (for example, you run a lite optimize) before either the submit or calc event, then you will see these results segregated from the original values -- but only if you're looking behind the SQL-scenes, and you have a good grasp of the way BPC handles updates to existing values and the treatment of signeddata.

End users don't see these distinctions in a report, since OLAP (and EVDRE when it bypasses OLAP) aggregates all the numbers automatically, across all the fact tables / partitions, and automagically figures out the data sign to display on screen.

The next time an optimize is run (particularly with a compress option), the values are all merged into a single record, so the pre/post distinction is lost. If you're the system admin, perhaps you can control this. But in a production environment where optimizations must be run for typical performance reasons, it's very unreliable to use this as an approach to achieve some functional aim. Plus, it probably won't work (even on a functional level) the 2nd or 3rd time you run the calculation, since the results of the first calc run will get merged in with the original numbers when you optimize before the 2nd calc run.

One possibility is to use a separate dimension (such as datasrc) to store calc results separately from the source data. But depending on the nature of calculation, this still may not get you what you're really after. You need to consider what happens when the logic is run for the 2nd, 3rd, Nth times, both in cases where the source data does change, and when it doesn't change. E.g., do you include the calc results in the scope of your source data?

Former Member

Hi Tim,

if I have not misunderstood HPC, the issue would not be so much about a transactional history as about data send time. It would not be about being able to recover the values of records at any point in time (keeping the history of all records, as in a transactional system), but just when a data send with BPC Excel occurs.

That is, if a record corresponding to a certain account "A" has a value of 13 when I open an input schedule, and I enter 20 and click "Send Data", is there any way in the default logic to read and use the value 13? (No matter whether what is stored in the fact table is a 7, or whether we have the 13 in one or several records.)

Regards,

Rafa

PS: sorry if I have misunderstood HPC; in that case I would open a new thread for it.

Former Member

Hi,

if you manage to find out the answer to both of your questions (how it works, and how to get the old values), please share, as I am also VERY interested in having that feature.

Regards,

Rafael

Former Member

Hi,

I have one thought: if you are using BPC 7.0 NW, then using a BADI you could keep the old values in one internal table and the new values (which will be written to the cube) in another internal table, so that you can compare the two.

Thanks.