HANA for OLTP and OLAP

Former Member
0 Kudos

We've been utilizing HANA for the last year in a variety of data mart scenarios, where we use Data Services to load data from ERP and other sources to support analytics and reporting. We've also utilized HANA as a secondary database to ERP to accelerate some calculations, similar to what SAP is doing with its HANA accelerators.

Due to the success we've had with reporting and our custom accelerators, we now have architects from other areas of our IT organization wanting to discuss using HANA for more traditional transaction processing applications. These applications would need not only excellent performance and reliability for transaction processing, but we'd also want to see the same excellent performance for analytics/reporting on the data they produce.

A couple of questions I'd like to throw out for discussion/feedback:

  1. Have you utilized HANA for a combined OLTP/OLAP scenario?  If so, can you share an overview and any lessons learned?
  2. How did you model in HANA to support both OLTP and OLAP requirements?  For example, did you model using a combination of row/column tables?  Did you replicate data within HANA to support both OLTP and OLAP?

HANA supporters like to tout as a virtue that HANA can support both OLTP and OLAP. One blog I read comparing HANA to Exalytics suggested HANA had a simple 'switch' available to do one or the other, and that this was a competitive advantage over Exalytics. I can't say that in my year of working with HANA I've come across such a switch! If there is one, please do enlighten me.
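The nearest thing to a per-table 'switch' I'm aware of is the storage-type conversion DDL, which changes one table at a time rather than flipping the whole system. A sketch, assuming standard HANA SQL syntax and a made-up table name:

```sql
-- Convert an existing table between storage types (per table, not system-wide).
-- "MYSCHEMA"."ORDERS" is a hypothetical example, not one of our actual tables.
ALTER TABLE "MYSCHEMA"."ORDERS" ALTER TYPE COLUMN;
ALTER TABLE "MYSCHEMA"."ORDERS" ALTER TYPE ROW;
```

Useful for experiments, but it hardly amounts to an OLTP/OLAP switch.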

I'm assuming the way we're going to get effective OLTP and OLAP in HANA is through specific data modeling techniques that are learned through experience and trial and error.

Looking forward to hearing your thoughts.  We're starting our own POC that will address this topic but I hope I can get some insights before we even start.

Accepted Solutions (0)

Answers (1)


Former Member
0 Kudos

Hi John,

i'm glad you're enjoying your HANA experience, but i was under the impression that the OLTP/OLAP distinction is no longer valid. are you still storing any subtotals in row tables? if so, is the performance better from having only line items subtotaled on demand?

rgds,

greg

PS i didn't see the switch last time i was in HANA studio, either.

Former Member
0 Kudos

Greg

Every application we've created so far has been geared to analytics and reporting, so we've used table type 'column store' for every table in our HANA database. Performance has been great when getting aggregated totals on demand. I'm referring to this type of application as an OLAP app.

What I question is whether column store tables would work for a high-volume transaction processing application using HANA for individual record-level maintenance, i.e. create, read, update, delete. I'm referring to this type of application as an OLTP app.
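To make it concrete, by an OLTP app I mean single-record maintenance like the following. Column tables do accept the full CRUD statement set, so the question is throughput, not functionality (schema, table, and column names here are purely illustrative):

```sql
-- A column-store table handling record-level maintenance.
CREATE COLUMN TABLE "MYSCHEMA"."SALES_ORDER" (
    "ORDER_ID" INTEGER PRIMARY KEY,
    "CUSTOMER" NVARCHAR(40),
    "AMOUNT"   DECIMAL(15,2)
);

INSERT INTO "MYSCHEMA"."SALES_ORDER" VALUES (1, 'ACME', 100.00);
UPDATE "MYSCHEMA"."SALES_ORDER" SET "AMOUNT" = 120.00 WHERE "ORDER_ID" = 1;
DELETE FROM "MYSCHEMA"."SALES_ORDER" WHERE "ORDER_ID" = 1;
```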

If we find that we need to use row-based tables for at least some of the tables in our OLTP app on HANA, I need to know whether we'd get the same great analytic performance in our information models using the row-based tables as we currently get with the columnar tables.

I haven't come across anything that addresses this topic.

former_member184768
Active Contributor
0 Kudos

Hi John,

You are correct: from the perspective of an OLAP application, which is more read-intensive, the columnar architecture of HANA is quite effective. From the OLTP perspective, which is more write-intensive, the logical choice would be row-based tables.

But then these row-based tables would not deliver the same analytic performance as column-based ones.

Having said that, I personally feel that the columnar architecture will still be suitable for an OLTP application. Regarding the "WRITE" operations: since these are performed in the delta store, which is row-based anyway, performance should be optimal.

Also, once the data moves to the columnar (compressed) main store, CRUD operations are still possible.
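As an illustration (schema and table names are hypothetical), you can watch how much data is sitting in the row-based delta store and trigger a merge into the compressed main store yourself:

```sql
-- How many records are waiting in the delta store for this column table?
SELECT "TABLE_NAME",
       "RAW_RECORD_COUNT_IN_DELTA",
       "MEMORY_SIZE_IN_DELTA"
  FROM "M_CS_TABLES"
 WHERE "SCHEMA_NAME" = 'MYSCHEMA'
   AND "TABLE_NAME"  = 'SALES_ORDER';

-- Move the delta contents into the compressed columnar main store.
MERGE DELTA OF "MYSCHEMA"."SALES_ORDER";
```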

Considering that all the master data is already in memory, theoretically there should not be any issue with lookups and references in the OLTP application.

Congratulations on your successful implementation and the high end-user satisfaction it produced. I am sure even the OLTP application will deliver a similar experience.

Regards,

Ravi

Former Member
0 Kudos

Ravi

Thank you for the response. My gut feeling is in line with yours, but I need to make design decisions on more than just a gut feeling, which is why we are going to do a POC to test.

I have a hard time believing this hasn't already been addressed and tested fairly extensively by someone. Hopefully one of those individuals reads this thread and gives some input based on actual test results.

Regards

John

former_member184768
Active Contributor
0 Kudos

Hi John,

A couple more points: according to SAP, HANA enables an OLAP application to be modeled in a more normalized way instead of the traditional de-normalized schema. This is in line with OLTP architecture.

Secondly, the HANA RDS applications are quite transactional in nature and have been reported to run quite efficiently, although I do not have personal experience with RDS apps. Considering SAP's plan for HANA to host the ECC database, I am sure the OLTP architecture will be well suited to HANA.

I am not sure whether anybody has yet tested an enterprise OLTP architecture on HANA (apart from RDS), as currently most organizations are looking at HANA as their data warehousing platform, with OLTP applications remaining on traditional transactional databases.

Out of curiosity, what are your ROI parameters for such a large investment in basing a transactional application on HANA?

Regards,

Ravi

Former Member
0 Kudos

John,

sorry to have misunderstood you earlier, but i got sidetracked by the OLTP/OLAP nomenclature (which is what drew me to your post in the first place). in any event, i think your transactional scenario calls for row processing as you seem to be quite content with the column storage for your analytical needs.

whatever the number of creates, i think it would still be lower than the number of reads. the question is how quickly the "new" records make it into the column store so they are included in efficient analysis, correct? i think HANA's current answer is the delta merge, and that is the area i would look at first.
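a quick sketch of the knobs i mean, assuming a made-up table name:

```sql
-- by default HANA merges the delta into main automatically ("auto merge");
-- it can be disabled per table and the merge triggered manually instead.
ALTER TABLE "MYSCHEMA"."SALES_ORDER" DISABLE AUTOMERGE;
MERGE DELTA OF "MYSCHEMA"."SALES_ORDER";
ALTER TABLE "MYSCHEMA"."SALES_ORDER" ENABLE AUTOMERGE;
```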

despite the efforts to make HANA the underlying database for the Business Suite, i think SAP is actually interested in others developing applications native to HANA, outside of the traditional ABAP stack. i wasn't able to dig deep enough to make any serious attempts, but to me this seems like a natural next step in the product's evolution. not sure if this addresses your immediate concern, but i would be interested to hear how you make out with your POC.

good luck,

greg