Data Modeling - BW InfoProviders


Do we still need BW InfoCubes with BW on HANA?

With the simplification efforts that have been put into BWoH, traditional InfoCubes are no longer needed. In fact, with BW 7.5 the only InfoProvider type for persistency modeling is the Advanced DSO (ADSO). Have a look at the following link, which routes you to the landing page for BWoH First Guidance papers, where you will also find one specifically covering the ADSO.

What is a HANA optimized InfoProvider: Type InfoCube?

The structure of an SAP HANA-optimized InfoCube is flatter than the structure of a standard InfoCube. Dimension tables do not exist anymore, with the exception of the technical dimension that includes the request ID, package ID, and record number. There is also no 'E fact table' anymore.

Can we still add or delete fields from the InfoCube after the conversion to HANA-optimized InfoCubes?

Yes, and with even less effort. Before moving to BW on HANA, structural changes plus the related data realignment and rebuilding of BWA indexes could take hours. With BW on HANA this is reduced to less than a minute, because adding or removing a field is now just an add or drop column command, since the data is stored in columnar tables.
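To illustrate why this is so cheap, here is a minimal, purely illustrative Python sketch contrasting row-oriented and column-oriented storage (the dictionaries and field names are hypothetical, not actual HANA structures). In a column store, adding a field means adding one new array; the existing columns are never rewritten.

```python
# Illustrative sketch: adding a column in row-oriented vs column-oriented storage.
# All names (material, qty, plant) are made up for the example.

row_store = [
    {"material": "M1", "qty": 10},
    {"material": "M2", "qty": 5},
]

column_store = {
    "material": ["M1", "M2"],
    "qty": [10, 5],
}

# Row store: adding a field touches every stored row.
for row in row_store:
    row["plant"] = None

# Column store: adding a field just appends one new array;
# existing columns stay untouched.
column_store["plant"] = [None] * len(column_store["material"])

print(column_store["plant"])  # [None, None]
```

Dropping a column is the mirror image: the column store simply discards one array, while a row store would again have to rewrite every row.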

Will reports run on HANA optimized DSOs as fast as on HANA optimized InfoCubes in BW on HANA?

Report runtimes for HANA optimized DSOs and InfoCubes are comparable. A prerequisite for good performance on HANA optimized DSOs is to switch on SID generation.

How is it determined which HANA optimized InfoProviders are loaded into memory and which will not be loaded?

In BW on HANA all data is stored in memory, whereas with BWA only selected InfoProviders have been loaded into memory. In the future, functionality will be provided to manage hot/cold data scenarios. Data life cycle scenarios like Near-Line Storage (NLS) are available since BW 7.x.

Is a Hybrid InfoProvider still necessary?

The Hybrid InfoProvider combines RDA (Real-time Data Acquisition) data with traditionally stored data. That doesn’t change with BW on HANA.

What is the difference between a Virtual and a Transient provider?

A Transient InfoProvider is like a Virtual InfoProvider, except that the metadata is transient rather than persisted. That means that, yes, from a READ perspective it behaves like any InfoProvider. From a WRITE perspective, however, it cannot be a data target. So you cannot write data into the Transient InfoProvider; you can only write data directly into the tables that it accesses.

The biggest advantage of a Transient Provider is that its metadata is not persisted in BW but always generated at runtime, i.e. if the source metadata changes, the Transient Provider is adapted automatically. The Transient Provider is therefore especially helpful in ad-hoc and/or frequently changing scenarios.

A transient InfoObject in the Transient Provider can reference a “real” InfoObject and thus inherit its metadata and master data (such as description, texts, display properties, display attributes, and hierarchies). That means you can create a BEx Query on pure HANA data and models, but still use a BW hierarchy and BW hierarchy processing.

Can InfoProviders created by Semantic Partitioned Object (SPO) be converted to HANA optimized InfoProviders/SPO?

Yes. This is available since BW 7.30 SP8.

Can InfoSet queries in HANA take advantage of in-memory capabilities?

The recommendation is to use CompositeProviders instead of InfoSets. In the current version of BW on HANA there is no special optimization for processing InfoSets, so the JOIN statement and SID process remain the same, i.e. all data needs to be loaded to the application layer first and processed there.

Exception: if temporal joins are required, where the result set depends on time-dependent master data, InfoSets have to be used, since CompositeProviders can't provide that functionality yet.
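To make the temporal-join concept concrete, here is a hypothetical Python sketch: transaction records are joined against time-dependent master data, so the attribute value picked depends on each record's posting date. All names (employee, cost center, validity intervals) are invented for illustration.

```python
# Illustrative temporal join: the matching master data record depends on
# each transaction's key date. Names and data are hypothetical.
from datetime import date

# Time-dependent master data: (valid_from, valid_to, cost_center)
employee_master = {
    "E1": [(date(2020, 1, 1), date(2021, 12, 31), "CC_A"),
           (date(2022, 1, 1), date(9999, 12, 31), "CC_B")],
}

transactions = [{"employee": "E1", "posting_date": date(2021, 6, 1)},
                {"employee": "E1", "posting_date": date(2023, 3, 1)}]

def temporal_join(txns, master):
    """Attach the attribute value that was valid on each record's date."""
    result = []
    for txn in txns:
        for valid_from, valid_to, cost_center in master[txn["employee"]]:
            if valid_from <= txn["posting_date"] <= valid_to:
                result.append(dict(txn, cost_center=cost_center))
    return result

joined = temporal_join(transactions, employee_master)
print([r["cost_center"] for r in joined])  # ['CC_A', 'CC_B']
```

A plain equi-join would return both validity intervals for every transaction; the date-range condition is exactly what the InfoSet's temporal join contributes.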

Is there an optimized way for query pruning with BW on HANA on specific InfoProviders?

Yes, there are several different approaches to configuring query pruning. The linked HowTo Guide describes all the details.

What is a HANA optimized InfoProvider: Type DSO?

With the conversion of a standard DSO to a HANA-optimized one, several things change. Most importantly, the activation of the DSO is executed in the database layer only; no round trips to the application server are needed anymore. In fact, the application layer is no longer involved in the activation process at all, other than triggering it and managing the update of the request, log, and control tables. With an SAP HANA-optimized DSO, the basic concept of activation doesn’t change; however, the data is stored differently. Newly uploaded data (i.e., the future image) is first stored in a columnar table called the activation queue.
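The activation concept described above can be sketched as follows. This is a simplified, hypothetical Python model, not the actual BW implementation: records from the activation queue (the future image) are merged into the active table by semantic key, and before/after images are written to a change log. Field names (`order`, `amount`, `recordmode`) are illustrative.

```python
# Simplified sketch of DSO activation: merge the activation queue into the
# active table by key and record before/after images in a change log.
# Table and field names are illustrative, not real BW structures.

active = {"4711": {"order": "4711", "amount": 100}}
activation_queue = [{"order": "4711", "amount": 120},
                    {"order": "4712", "amount": 50}]

change_log = []
for new_rec in activation_queue:
    key = new_rec["order"]
    old_rec = active.get(key)
    if old_rec is not None:
        # Before image: reverse the old record's key figures so that a
        # delta-enabled target can subtract the outdated values.
        before = dict(old_rec, amount=-old_rec["amount"], recordmode="X")
        change_log.append(before)
    change_log.append(dict(new_rec, recordmode=""))  # after image
    active[key] = new_rec

print(active["4711"]["amount"])  # 120
print(len(change_log))           # 3
```

On HANA the whole merge runs as set-based operations inside the database; the loop here only mirrors the per-key logic, which is why skipping application-server round trips makes activation so much faster.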

There is now also a newer version of the standard DSO available that is fully HANA optimized. The good news is that this makes the conversion step for the HANA optimized DSO described above obsolete. Going forward with BW 7.30 SP10 (7.31 SP9, respectively) and HANA revision 57, this will be the only DSO available in addition to the write-optimized and direct-update DSO. Have a look at the links below for more information.

Are Standard DSOs also optimized for HANA?

Yes, please have a look at this blog.

Are there any specific settings or configuration that should be considered when using inventory InfoCubes in a BW on HANA environment?

There is a very helpful First Guidance document available in SCN that addresses this question.

What's the best practice to model InfoProviders to overcome the 2 billion record limitation?

If customers use a scale-out BWoH system, large tables are split across all slave nodes. This tremendously reduces the risk of running into table size limitations. There are also a couple of additional features on the BW side that partition InfoProvider tables in HANA automatically, without the need to configure anything in the HANA database directly. Here is a summary of those features:

  1. Semantic Partitioned Object (logical partitioning)
  2. Scale-out (hash partitioning on primary key)
  3. Time characteristic (BW partitioning by 0calmonth or 0fiscper) - only for DSOs
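The hash-partitioning idea from option 2 can be sketched as follows. This is a hypothetical Python illustration (the partition count and key names are made up): each record's primary key is hashed to deterministically pick a partition, so one logical table is spread across nodes and no single physical table has to hold all records.

```python
# Illustrative hash partitioning on a primary key, as used in scale-out:
# each record's key deterministically maps to one partition, so a single
# logical table stays below per-table row limits. Names are hypothetical.
import hashlib

NUM_PARTITIONS = 4  # illustrative; a real landscape derives this from its nodes

def partition_for(key: str, num_partitions: int = NUM_PARTITIONS) -> int:
    # Use a stable digest; Python's built-in hash() is salted per process.
    digest = hashlib.md5(key.encode("utf-8")).hexdigest()
    return int(digest, 16) % num_partitions

partitions = {i: [] for i in range(NUM_PARTITIONS)}
for record_key in ("DOC0001", "DOC0002", "DOC0003", "DOC0004"):
    partitions[partition_for(record_key)].append(record_key)

total = sum(len(rows) for rows in partitions.values())
print(total)  # 4
```

Because the mapping is a pure function of the key, inserts and key lookups can be routed to exactly one partition, while full scans run on all partitions in parallel.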

There is also a blog about very large tables in BWoH and best practices for handling them.

The following note describes all best practices around dealing with big data in BWoH.

Where can I find more information about the new Advanced DSO (ADSO)?

Have a look at the following link that routes you to the landing page for BWoH First Guidance papers where you will also find one specifically covering the ADSO.