
Best Practices for Web Intelligence Report Design


The objective of this document is to provide a list of recommendations and best practices for developers of Web Intelligence documents. Over the years, Web Intelligence has evolved into a mature product with many capabilities. This evolution has also made it more challenging to develop documents that perform well.

Well-performing documents are key to user adoption, as end users expect fast response times. If they do not get their results within a reasonable time, they give up, cancel the task they were doing, and search for alternatives.

The design of Web Intelligence documents should match the behavior of the Web Intelligence engines to ensure well-performing documents, leading to happy end users. A variety of factors influence Web Intelligence performance:

  1. System Configuration
  2. Document Design
  3. Semantic Layer Configuration and Query Design
  4. Workflows

This document aims to provide a list of recommendations and best practices in these areas to help you build well-performing Web Intelligence documents and gain happy end users. Note: the recommendations and best practices provided are not version specific; however, some features mentioned in this document may not be available in your version.

The documentation of best practices and tips to improve the performance of Web Intelligence (this page) is an ongoing project. If you would like to receive notifications on updates, you can select the 'Follow' option at the top right of this document.

If there are best practices or tips you would like to see added, please let us know via the comments section at the bottom and we will evaluate them for inclusion.

The list of recommendations is a consolidated list coming from various parties within SAP, including Development, Support, Services, and Product Experts. Many thanks to all who have contributed and will contribute to this list.

From this list we have also created a slide deck with the same bullet points. If you would like to present this topic to your audience, you can find the presentation here: Slidedeck - Best Practices for Web Intelligence Report Design

Note: this SCN document is the leading version and will be updated first when additional information becomes available. The slide deck will be updated periodically afterwards.

NOTE: This document is based on the same content as Tips for Optimizing the Performance of Web Intelligence Documents, and although some content overlaps, both documents contain tips and best practices that can help you optimize your Web Intelligence documents. Whereas this document is a high-level overview of general best practices, the above document provides more detailed tips on how to optimize your Web Intelligence documents for performance.

Additional References in SAP Community Network

Besides this best practices document, several additional documents are available on SAP Community Network that may be useful for improving Web Intelligence performance.

Tips for BETTER Web Intelligence Performance

A fast-running applet

Everyone hates waiting around for an application to load. Some Java updates released in recent months can wreak havoc on the load time of the Web Intelligence applet (Rich Internet Application). Three issues that often arise in this area are:

  1. Online Certificate Revocation checks causing delays
  2. New JRE security Changes causing issues and delays
  3. Applet composed of over 60 JAR files, causing many security checks

Online Certificate Revocation Check

New JRE versions (1.7.0_25+) automatically check for revoked certificates online. This revocation check runs for each JAR file within an applet. In the initial releases of BI4.0, the applet consisted of 60+ JAR files.

Additionally, internet connection speed plays a major role. On slow connections, checking the certificate revocation status of 60+ JAR files can add several minutes to the load time.

Tips to improve the performance of the online certificate revocation checks:

By default, the Java Runtime Environment is configured to validate certificate revocation via both Certificate Revocation Lists (CRLs) and the Online Certificate Status Protocol (OCSP). To reduce the load time on slow internet connections, it is recommended to change the default Java Control Panel settings to use only CRLs, or to choose the "Do Not Check" option if your corporate security guidelines allow it.
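For managed desktops, the same choice can be rolled out centrally via the Java deployment.properties file instead of per-user Java Control Panel changes. A minimal sketch (the keys below are standard Java deployment keys; verify them against the Java deployment guide for your JRE version, and align the values with your corporate security policy):

```properties
# Keep CRL-based revocation checking, skip the slower online OCSP calls
deployment.security.validation.crl=true
deployment.security.validation.ocsp=false
```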

Read the WIKI for more details

For additional information, see KBA 1904873.

JRE Security Changes

Oracle has been spending a lot of effort on further tightening the security requirements for applets. In addition, new JRE versions are released more frequently than the patch cycles of SAP BusinessObjects.

A list of known issues and resolutions can be found here: Web Intelligence and Oracle JRE Known Issues

Applet Size

With the introduction of SAP BusinessObjects BI4.0, the Web Intelligence applet received a completely new architecture. In previous versions, the Web Intelligence applet consisted of a single file (ThinCadenza.jar). With the new architecture introduced in BI4.0, this was replaced by 60+ individual JAR files to enable easier development and updating. Due to updates in Java security, this turned out to degrade performance when loading the applet. As of BI4.1 SP03, SAP decided to revert to a single file (webiapplet.jar, with webiapplet_<language>.jar as an accompanying resource file). This results in only one security check and one cache check by the JRE.

To benefit from the improved performance of the Web Intelligence Applet, it is recommended to update the BI Deployment to BI4.1 SP03 or higher.

NOTE: The webiapplet.jar is a 44 MB JAR file. Loading the file for the first time can take some time depending on network performance. An initial load is also triggered after applying a new Service Pack or Patch.

Additional JRE Tweaks

On top of the Java settings mentioned earlier, additional tweaks can be applied to further improve the performance of the Web Intelligence Java applet (Rich Internet Application):

  1. Ensure JRE caching is enabled
  2. Ensure the JAVA Next-Generation Plug-In is used

A full list of additional tweaks that can be applied can be found here: Tips For Fine Tuning Performance For The Webi Rich Internet Applet (Java Report Panel) - Business Intelligence (Business…

Tests that can be run to find potential bottlenecks and their location can be found here: Tips For Fine Tuning Performance For The Webi Rich Internet Applet (Java Report Panel) - Business Intelligence (Business…

Steer clear of "Monster" Documents

Large documents can waste a lot of time. If only 10-20% of a large document is utilized, 80-90% is waste. It is strongly recommended to avoid the use of large documents.

  1. Avoid using a large number of reports (tabs) within a single document
    1. 10 reports per document is a reasonable number
    2. Do not exceed 20 reports per document
  2. Do not try to account for all scenarios, but rather focus on specific scenarios adding value to the consumer
    1. 50,000 rows of data per document is a reasonable number
    2. Do not exceed 500,000 rows of data per document
  3. Do not add additional Data Providers if not needed or beyond document needs
    1. 5 data providers is a reasonable number
    2. Do not exceed 15 data providers per document

Creating smaller documents for specific business needs enables a faster runtime and analysis. Start with the individual business need and build a document based on that need. By creating smaller, reusable documents you:

  • Reduce the time it takes to load the document in the viewer
  • Reduce the refresh time of the document
  • Reduce the system resources needed on both client and server side
  • Improve performance while modifying the document
  • Improve performance for the consumer of the report while running their analysis

Utilize Report Linking

Instead of using monster documents, consider using smaller documents and linking them together. The OpenDocument feature enables you to link documents to each other and "jump" from one document to another. Selected values and results from a report can be passed as prompt or filter inputs into the target document, driving a whole chain of logical documents that are "lean and mean".

To simplify the use of OpenDocument, a Hyperlink Wizard is available in the HTML interface of Web Intelligence (Design mode). Using this Hyperlink Wizard, the logic of the OpenDocument URL can be easily defined via a UI instead of hand-coding the URL.

Leveraging report linking (OpenDocument) can in some cases increase design time; however, it will result in happier end users due to better performance.
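As a sketch, an OpenDocument link that jumps to a target document and feeds a selected value into one of its prompts looks like the following (the server, CUID, and the prompt name "Year" are placeholders; consult the Viewing Documents Using OpenDocument guide for the exact parameters of your version):

```
http://<server>:<port>/BOE/OpenDocument/opendoc/openDocument.jsp
    ?sIDType=CUID
    &iDocID=<target-document-CUID>
    &lsSYear=2014
```

lsS<PromptName> passes a single prompt value; lsM<PromptName> passes multiple values.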

Report Design Best Practices

Limit the number of Data Providers Used

A best practice from the field is to use no more than 15 data providers per document.

  • Data Providers run serially, so runtimes add up
  • Refresh time and merging of dimensions can cause some big delays on the Processing Server side

Using a data warehouse to consolidate sources, and ETL tools to produce better reporting sources, is a better practice.

Retrieve aggregated data instead of aggregation within the document

A best practice from the field is to retrieve data from the data source pre-aggregated as much as possible, at the level the business requirements of the consumer demand. Retrieving detail data when only (high-level) aggregates are required within the document is considered bad practice. Although Web Intelligence is very capable of aggregating data, your data source is stronger at these tasks.

If the required aggregation is not available pre-calculated, leverage aggregation functions in the query definition (see also Semantic Layer Best Practices). This requests the source database to aggregate the data before sending it back to the Web Intelligence processing engines.
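As an illustration of the difference (hypothetical orders table and columns; the exact SQL depends on your database and universe), compare what crosses the network in each case:

```sql
-- Bad practice: fetch detail rows and let Web Intelligence aggregate them.
SELECT order_date, amount
FROM   orders;

-- Better: let the database aggregate before sending the result back.
SELECT   EXTRACT(YEAR FROM order_date) AS order_year,
         SUM(amount)                   AS total_amount
FROM     orders
GROUP BY EXTRACT(YEAR FROM order_date);
```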

Do not (accidentally) disable the cache mechanism of Web Intelligence

Web Intelligence has a great caching mechanism for documents that have already been viewed. Using the cache improves the load time of documents; however, a few functions within Web Intelligence prevent the use of the cache. These functions are:

  • CurrentTime()
  • CurrentDate()
  • UserName()

The use of these functions requires documents to be regenerated on every request, bypassing the benefits of caching.

Avoid Auto-Fit When Possible

Although the Auto-Fit option for cells, tables, cross-tabs, and charts is a wonderful way to have blocks adapt their size to the content on the fly, it also forces the document to be recalculated during navigation. This makes navigating a report much slower. The impact is largest when jumping from the last page to the first page of a large report.

Avoid Charts with a high number of Data Points

Within BI4.x a new charting engine was added: the Common Visualization Object Model (CVOM). This engine is hosted within the Adaptive Processing Server (APS) as the Visualization Service.

CVOM enables you to generate compelling charts within a Web Intelligence document; however, it is better at creating a large number of smaller charts than at creating charts with many data points. It is more efficient to use several smaller, more specific charts than one big chart.

Avoid Nested Sections

Nested sections can contribute to performance degradation. This is especially true if conditions are used such as "Hide section when the following is empty".

Test Query Drill for Drill Down Reports

Within the report properties there is an option to enable query drilling. The Query Drill function leverages the performance of the underlying database instead of local data. If Query Drill is not enabled, drilling loads more and more data into the document. Once enabled, a drill request modifies the underlying query and fetches only the new data from the data source. This reduces the amount of data stored locally for a drill session and can be beneficial for the performance of the document.

It is recommended to validate a report with both options to ensure Query Drill provides benefits (as this depends on the underlying database).

Limit use of "Scope of Analysis"

Scope of Analysis can support consumers of the document with an easy drill session. However, once Scope of Analysis is enabled and defined, extra data will be retrieved from the database and stored in the document's cube. Loading more data has a negative impact on performance.

Instead of using Scope of Analysis / drilling, report linking can be used as an alternative for on-demand fetching of detail data. The advantage of report linking is that the detailed report fetches only the required detail data (versus the whole set when using drill).

Tip: Within the BI Launchpad preferences for Web Intelligence, you can specify whether a user is prompted when drilling requires more data!

Formula Best Practices for Performance

The Web Intelligence formula engine has strong capabilities, but it depends on the logic you give it. Some statements in formulas will always cause a calculation over the whole data set, while breaking calculations down into multiple steps (factorizing) helps the calculation engine work faster. Known impactors for the calculation engine are listed below.

  • The formula statements "ForEach" and "ForAll" should only be used when really necessary. It is recommended to use the "In" statement instead.
  • The "Where" operator can also take longer to process behind the scenes. Using "If.. Then.. Else" may be better.
  • Factorizing variables reduces overhead for the calculation engine. For example, instead of a single combined formula:

    v_SalesEst = [MeasureA] + [MeasureB] + [MeasureC]

    define an intermediate variable and build on it:

    v_Sales = [MeasureA] + [MeasureB]
    v_SalesEst = [v_Sales] + [MeasureC]

    By breaking the calculation down into smaller steps, the calculation engine can process the results faster.

Semantic Layer Best Practices

Building well-performing Web Intelligence documents depends on the underlying universe. Some of the default settings within the semantic layer are not optimal for production use and are recommended to be changed.

Build Lean Queries for your documents

When building new documents, it is often seen that a lot of measures and dimensions (including details) are retrieved in the initial query. During development, more and more objects are added, resulting in mega queries. This goes against the basic principles of a semantic layer, whose purpose is to SIMPLIFY the querying of data.

Having many objects in the query to retrieve loads of information may be useful for the developer, but the end users pay the price for it.

Building documents should be done step by step:

  1. Start with the document needs and build your query specifically for this need.
    - Do not add dimensions, details or measures that are not needed for the base requirement
    - If it is not known exactly which objects are required, use a sample report instead of removing objects later on (if they are removed at all)
  2. Remove any unused object from the query
    Once the report is completed, validate whether every object requested by the query is actually used within the document. If not, remove any object that is not in use.
  3. Remove any unused local variable
    Next to the unused objects, remove any variable that is not used within the document. Even unused variables are calculated by the calculation engine.

Bottom line: build queries that only contain objects that are used within the document.

Array Fetch Size Optimization

One of the default settings within the semantic layer is the Array Fetch Size (defined on the underlying connection). The Array Fetch Size sets the maximum number of rows per fetch from the database. Tuning this size for your environment can greatly improve performance.

Internal testing has shown that increasing the Array Fetch Size (previously default = 10; for IDT set to "optimized mode") improves the overall performance of reports based on the universe.

In our internal testing, we retrieved 1,228,000 records into the report client and measured the time required to fetch the rows from the database.

In our case, increasing the Array Fetch Size to 1000 (rows per fetch) resulted in a load time of 18 seconds from the database into the reporting engine.

Blindly adopting the results from our internal test is not recommended, as many factors (such as the network) affect your own situation. It is strongly recommended to run several tests using various settings to determine your optimal Array Fetch Size.
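The effect of the Array Fetch Size can be reasoned about with a simple round-trip model: every fetch costs one network round trip, so fewer, larger fetches amortize the latency. The sketch below (Python; the latency and per-row costs are assumed, illustrative numbers, not SAP measurements) shows why the gain is dramatic at first and then flattens out:

```python
import math

# Simplified cost model for Array Fetch Size tuning (illustrative only --
# round_trip_ms and per_row_ms are assumptions, not measured values).
def fetch_time_ms(rows, fetch_size, round_trip_ms=2.0, per_row_ms=0.01):
    """Estimate total fetch time: one network round trip per fetch,
    plus a fixed per-row transfer cost."""
    fetches = math.ceil(rows / fetch_size)
    return fetches * round_trip_ms + rows * per_row_ms

# 1,228,000 rows, as in the internal test described above
rows = 1_228_000
for size in (10, 100, 1000):
    print(f"fetch size {size:>4}: ~{fetch_time_ms(rows, size) / 1000:.1f} s")
```

With these assumed numbers, going from 10 to 1000 rows per fetch cuts the estimated time from minutes to seconds, while further increases save comparatively little. Real results depend on your network, driver, and row width, which is exactly why testing several settings is recommended.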

With the introduction of BI4.x, the default Array Fetch Size is set to "Optimal" for any newly created connection. This is not always the best setting, as it bases its result on a limited test. You can disable this optimization using the "DISABLE_ARRAY_FETCH_SIZE_OPTIMIZATION" parameter. To override the optimized values, set the parameter to "Yes" in either the Universe Design Tool (UDT) or the Information Design Tool (IDT).

More details about the parameter can be found in the Information Design Tool Guide

Use Query Filters instead of Report Filters

Report filters are often used to give the report consumer the flexibility to quickly restrict the presented data to a particular set. However, the use of report filters can lead to an increased volume of data, to the point of performance degradation. If you are fetching a large set of data only to filter it with report filters, it is recommended to adjust the report's query and leverage query filters instead.

Using a query filter reduces the overall runtime of the report, as it retrieves significantly less data.
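The difference shows up in the generated SQL. With a hypothetical sales_fact table (illustrative schema, not from the original document), a report filter leaves the condition out of the query, so all rows are fetched and filtered locally, while a query filter pushes the condition into the WHERE clause:

```sql
-- Report filter: the full result set is retrieved, then filtered locally.
SELECT   region, product, SUM(revenue) AS revenue
FROM     sales_fact
GROUP BY region, product;

-- Query filter: the database returns only the rows the report needs.
SELECT   region, product, SUM(revenue) AS revenue
FROM     sales_fact
WHERE    region = 'EMEA'
GROUP BY region, product;
```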

Ensure Query Stripping is utilized

Query Stripping is a feature introduced in BI4.x that helps you 'remove' unused objects from the query. With Query Stripping, the query engine validates before each refresh whether all objects in the query are used by the document. Objects that are not used are removed from the query that is sent to the database (note: they are not removed from the query panel).

As of BI4.1 SP3, Query Stripping is also available for relational databases. Before this, Query Stripping was only available for documents based on a BICS connection (BW).

For documents using a BICS connection, Query Stripping is enabled by default, while all other connection types require a manual setting.

For documents on a relational database, the following settings have to be made:

1. Allow Query Stripping within the Business Layer (IDT Universe)

2. Enable Query Stripping within the Query Properties of the Web Intelligence Document

3. Enable Query Stripping within the Document Properties of the Web Intelligence Document

Note: the USE_ENHANCED_QUERY_STRIPPING parameter only optimizes the SELECT and GROUP BY clauses and does not modify the joins or other clauses of the SQL statement.

Only Merge Dimensions that are Needed

By default, Web Intelligence BI4.x creates a merged dimension if two data providers contain the exact same object (from the same universe). However, if the final reports working with the data providers do not need to present merged data, it is recommended to unmerge the dimensions.

Merging dimensions has a performance impact, as the merge logic has to be applied within the calculation engine to create the merged dataset. Unmerging unneeded dimensions can improve performance.

Know the Architecture Differences

SAP BusinessObjects BI4.x is different from its previous versions. Although it may technically 'look' like the same platform, there have been many changes inside its big black box.

The main change (without any compromise) is that BI4.x is a full 64-bit server architecture. The previous versions (XI3.x and earlier) were crammed into a 32-bit architecture, which added many limitations. The main advantage of the 64-bit architecture is the ability to claim more than 2 GB of memory for a single process.

Besides this, the way of working in the backend services of the BI platform has changed. Within the XI3.x platform, the Web Intelligence Processing Server handled all parts of a request: from generating the SQL statement all the way down to rendering the reports and their charts. Within BI4.x, these tasks have been split across various "common & shared" services.

For Web Intelligence, the following services of the BI platform may be used (depending on the scenario):

  1. Web Intelligence Processing Server
  2. Visualization Service (APS) -> Generating Charts
  3. DSL Bridge Service (APS) -> New Semantic Layer and BICS connections
  4. Data Federation Service (APS) -> Multi Source Universes
  5. Connection Server (64-bits) -> 3 Tier mode Connections
  6. Connection Server (32-bits) -> 3 Tier mode Connections
  7. Secure Token Service (APS) -> SSO Tickets sessions
  8. WebI Monitoring Service (APS) -> Client Monitoring
  9. Web Application Server -> Page Rendering
  10. Central Management Service -> Authentication
  11. File Repository Service -> File retrieval / Instance Storage
  12. Publication Service (APS)
  13. .....

It is strongly recommended to understand the technical diagram of the BI4.x architecture and the BI4.x process flows.

For Web Intelligence, the following Process Flows are recommended to understand as a minimum:

Note: these process flows are currently under revision to adopt the latest changes. Stay tuned for the updated versions to be released!

Stick to a Schedule

Why Use Scheduling

  • Scheduling allows lower user wait times when implemented correctly
  • Allows you to offset processing to non-peak hours
  • Can help distribute load and reduce sizing requirements for concurrent users
  • Reduces the impact on the database during peak hours

It is a Best Practice to schedule any document that takes over 5 minutes to refresh.

Using the scheduling capabilities of the BI platform, you can leverage the Web Intelligence Processing Server's cache mechanism to further speed up report requests. Within the scheduling settings, a tick-box can be enabled to pre-load the report into the Web Intelligence cache (including XLS and PDF formats of the report).

A combination of reports can be built: high-level reports (large data sets) plus individual detailed reports containing a focused dataset, requested on demand using report linking (see previous sections).

Instead of having IT handle all schedules, it is recommended to provide scheduling capabilities to power users (including on-demand scheduling if needed).

Server Sizing to Optimal Performance

An out-of-the-box BI4.x installation is not sized or configured for production use. With the changes in the BI4.x stack and its 64-bit architecture, Web Intelligence services can handle a lot more (if properly sized and configured). Thanks to 64-bit, there is no longer a limit on the memory allocation for a Web Intelligence Processing Server. This removes the (potential) need for additional Web Intelligence Processing Servers on a single machine (to balance memory requirements).

For BI4.x it is recommended to start with just a single Web Intelligence Processing Server per machine. Only once you reach its limits, or prefer to further balance the load, is it recommended to add another Web Intelligence Processing Server. However, it is not recommended to run more than 2 Web Intelligence Processing Servers on a single machine. Due to the new architecture, WebI can grab a lot more memory, potentially jeopardizing the stability of the system if multiple Web Intelligence Processing Servers are hosted on a single machine.

With the changed architecture, the Web Intelligence Processing Server also spreads various tasks over additional services:

  • Reports based on a BICS connection leverage the DSL Bridge Service (APS)
  • UNX relational universes leverage the DSL Bridge Service (APS)
  • Reports containing charts leverage the Visualization Service (APS)
  • Multi-source universes leverage the Data Federation Service (APS)

Besides sizing the Web Intelligence Processing Server correctly, it is important that you also size and configure the dependent services. For further understanding of the sizing of Web Intelligence and/or the Adaptive Processing Server (APS), please consult the BI Sizing micro-page.

Location can make a big difference

Ideally, you should place your (Web Intelligence) processing tier near your database. The processing tier is the level that requests data from your database and potentially pulls a lot of data through the network. If the processing tier is a long distance from the database, various performance issues may occur. Even if your processing tier is near the database, a fast network (1 Gbps+) is strongly recommended. Any network layer added between the processing tier and the database causes an additional step, and thereby delay. To understand the time spent between the processing tier and the database, it is recommended to periodically validate the network performance and any potential bottleneck.

Use Local Storage

The Web Intelligence Processing Server can use a lot of cache, which is all stored on disk. Having the Web Intelligence Processing Server cache on local disks (preferably fast ones, i.e. SSD or 10k+ RPM SCSI) improves performance, as local storage is typically faster in I/O than network storage. If the cache directory is on a network share / NAS / SAN, it is recommended to periodically validate the network performance and any potential bottleneck.

CPU does matter

Although sizing for BI4.x is expressed in SAPS, it is certain that CPU speed DOES matter for the overall performance of Web Intelligence. This becomes more relevant on older hardware: even if you have 128 CPUs of 1200 MHz, this will generally result in slower reports. Web Intelligence has to run a lot of calculations; the faster the CPU, the faster the engine.

Document History

15 Oct 2014 | Merlijn Ekkel | Initial Release