Data Volume - Calculation performance
We are experiencing degrading calculation performance as data volume increases.
We are implementing BPC 7.5 SP05 NW (on BW 7.0 EHP1).
An allocation script that ran in 2 minutes when the database contained only 800,000 records took over an hour once a full year of data had been loaded.
All script logic has been written to calculate on a reduced, explicitly defined scope, but this does not seem to improve execution time. When I check the formula log, the scope is respected.
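For reference, the scoping is done in the usual way with *XDIM_MEMBERSET statements before the allocation; a simplified sketch of the pattern we use is below (dimension and member names here are placeholders, not our actual model):

```
// Restrict the calculation scope before the allocation runs
*XDIM_MEMBERSET TIME = %TIME_SET%
*XDIM_MEMBERSET ENTITY = %ENTITY_SET%
*XDIM_MEMBERSET CATEGORY = BUDGET

// Allocation itself (placeholder WHAT/WHERE/USING members)
*RUNALLOCATION
*FACTOR = USING/TOTAL
*DIM ACCOUNT WHAT=ALLOC_IN; WHERE=ALLOC_OUT; USING=DRIVER; TOTAL=<<<
*ENDALLOCATION
```

The formula log confirms that only the members passed in via the data manager prompts are being read, so the scope restriction itself appears to work as intended.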
The application is not that large either: 12 dimensions, the largest containing 300 members and 3 hierarchical levels.
We have run database optimization, but to no avail.
What can be done to optimize performance? Are there any technical settings in BPC or BW that can be fine-tuned?