
Data transfer from Disk to Memory too slow

Former Member

Hi,

I am working on a BW on HANA project, and over the past two days we have run some performance tests.

We ran the queries from BW (on large data volumes) and noticed that the query results come back much faster when the data is already loaded into memory, so I have some questions:

  1. Why does it take so long to transfer the data from disk to memory?
    Query time with data preloaded into memory: 10 sec
    Query time with data initially on disk: more than 600 sec
    (in some other cases the discrepancy was even larger)
    Also, the LOAD <table name> statement from HANA Studio takes very long.
  2. Is the data on disk uncompressed?
    I ask because the used storage on disk is very high (almost 500 GB, while all the BW tables together are ~250 GB).
  3. When and how is unused data pushed from memory to disk?
    For example: I have an InfoCube with data for the past 3 years, but in my queries I often use only the last year. Would part of the data be pushed out of memory?
  4. Where can I see the compression rate of my data?


Thank you in advance & regards,

Vjola

Accepted Solutions (1)


Former Member

Hi Vjola,

My thoughts:

1. Why does it take so long to transfer the data from disk to memory?

- Not sure why this happens. My guess is there was already a lot of data in memory and HANA had to displace some of it to load more, which is probably why it took so long. Remember, only 50% of your total RAM is used to store data; the remaining 50% is needed for temporary objects, intermediate results etc.

To add, with HANA SP5 and BW SP08, tables/partitions can be marked as not-active (to reduce memory occupancy). Within BW, all PSA tables and write-optimized DSOs are marked as not-active by default. This means that such tables are loaded into memory only when accessed, and are unloaded with the highest priority when memory runs short. More on this here: http://www.experiencesaphana.com/servlet/JiveServlet/download/38-7925/SAP%20NetWeaver%20BW%207%203on...
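If you want to experiment with this yourself, something along these lines should work (a sketch; "MYTABLE" is a placeholder, and 7 is the "early unload" priority I understand BW to use for PSA tables, so please verify on your revision):

   -- Higher unload priority = unloaded earlier under memory pressure
   -- (default is 5; BW sets 7 for PSA tables and write-optimized DSOs)
   ALTER TABLE "MYTABLE" UNLOAD PRIORITY 7;

   -- Load a table into memory / displace it manually
   LOAD "MYTABLE" ALL;
   UNLOAD "MYTABLE";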

2. Is the data on disk uncompressed?

- Data compression in HANA comes from the columnar storage of data (as opposed to row-based storage), so the persisted footprint of that data on disk should also be small. However, the disk additionally stores before- and after-image versions of the data, plus logs etc. Hence disk will take up a lot more space than memory for the same data set. Have a look at https://cookbook.experiencesaphana.com/deploying-bw-on-hana/preparation/plan-and-purchase-hana-syste... which explains this a bit more.

3. When and how is unused data pushed from memory to disk?

- If your reports only select data for one year from the cube, only that data will be loaded into memory (partial load). However, if it has all been loaded and some of it remains unused, I believe HANA would displace it only when there is a memory shortage.
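To see what is actually resident at a given moment, you can query the monitoring views (a sketch; 'MYTABLE' is a placeholder and the exact view columns may vary by revision):

   -- Shows per column whether it is currently loaded into memory
   SELECT COLUMN_NAME, LOADED, MEMORY_SIZE_IN_TOTAL
     FROM M_CS_COLUMNS
    WHERE TABLE_NAME = 'MYTABLE';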

4. Where can I see the compression rate of my data?

- I don't think HANA displays the compression rate of the data anywhere; you will have to compute it yourself. If you loaded the data from a row-based disk store, work out what its storage space was there (if that DB used any compression techniques, you will have to account for the uncompressed size). After you load this data into HANA (and run a delta merge), double-click on the table and the 'Runtime Information' tab will tell you how much space the same data occupies in memory (size in memory). You can then compute the compression ratio from there.
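If you prefer SQL over the Studio UI, roughly the same numbers can be pulled from the monitoring views (a sketch, assuming the UNCOMPRESSED_SIZE and MEMORY_SIZE_IN_TOTAL columns of M_CS_COLUMNS are available on your revision; 'MYTABLE' is a placeholder):

   -- Approximate compression factor: uncompressed size vs. in-memory size
   SELECT TABLE_NAME,
          SUM(UNCOMPRESSED_SIZE)    AS UNCOMPRESSED_BYTES,
          SUM(MEMORY_SIZE_IN_TOTAL) AS IN_MEMORY_BYTES,
          ROUND(TO_DECIMAL(SUM(UNCOMPRESSED_SIZE))
                / NULLIF(SUM(MEMORY_SIZE_IN_TOTAL), 0), 2) AS COMPRESSION_FACTOR
     FROM M_CS_COLUMNS
    WHERE TABLE_NAME = 'MYTABLE'
    GROUP BY TABLE_NAME;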

Another way of doing this: if your table was empty and you loaded some data into it, note the space occupied before running the delta merge (delta storage size). Then get the size again after the delta merge has run (size in main/on disk) and work out the compression ratio from there. This works because the delta store is a row-based temporary storage into which data is initially written before it is converted to the column-based main store. In SQL, the steps look roughly like the sketch below.
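(A sketch; 'MYTABLE' is a placeholder and the M_CS_TABLES column names may differ slightly between revisions.)

   -- 1. After loading, before the merge: note the delta storage size
   SELECT MEMORY_SIZE_IN_DELTA
     FROM M_CS_TABLES WHERE TABLE_NAME = 'MYTABLE';

   -- 2. Trigger the delta merge explicitly
   MERGE DELTA OF "MYTABLE";

   -- 3. After the merge: note the compressed main storage size and compare
   SELECT MEMORY_SIZE_IN_MAIN
     FROM M_CS_TABLES WHERE TABLE_NAME = 'MYTABLE';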

Hope that helps.

Thanks,

Anooj

Answers (3)


lbreddemann
Active Contributor

Hi Vjola,

Vjola Dule wrote:

  1. Why does it take so long to transfer the data from disk to memory?
    Query time with data preloaded into memory: 10 sec
    Query time with data initially on disk: more than 600 sec
    (in some other cases the discrepancy was even larger)
    Also, the LOAD <table name> statement from HANA Studio takes very long.

Typically rebuilding large delta stores will take time.

So, have you checked whether the table is already merged?
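A quick way to check is via M_CS_TABLES (a sketch; 'MYTABLE' is a placeholder, and column availability depends on your revision):

   -- A large delta store or an old merge time suggests the table
   -- has not been merged yet
   SELECT TABLE_NAME, LAST_MERGE_TIME, MEMORY_SIZE_IN_DELTA
     FROM M_CS_TABLES
    WHERE TABLE_NAME = 'MYTABLE';

   -- If the delta store is still large, merge it manually
   MERGE DELTA OF "MYTABLE";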

Concerning the "data on disk compression": the data will be stored in columnar form on disk, but no additional compression is applied here.

- Lars

Former Member

Hi Vjola,

For 1):

How big is your table?

Try indexing the columns. We have observed decreased performance when querying unloaded tables.

This might be due to the fact that the compression/coding algorithm works faster on a pre-sorted column.

Introduce partitions. Data is loaded into memory on a table/column/partition basis, so with a partitioned table you can do a partition-wise load.
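For example, range partitioning by year would look roughly like this (a sketch; the table name, column name and year values are placeholders):

   -- A query restricted to the last year then only needs that
   -- partition in memory
   ALTER TABLE "MYTABLE" PARTITION BY RANGE ("CALYEAR")
     (PARTITION 2010 <= VALUES < 2011,
      PARTITION 2011 <= VALUES < 2012,
      PARTITION OTHERS);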

Best
Martin

rama_shankar3
Active Contributor

Vjola:

  What type of table are you using, columnar or row? This could make a big difference in performance.

  You can look at the compression details using the SQL below:

   SELECT * FROM M_CS_COLUMNS WHERE TABLE_NAME = 'TAB';
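To narrow that down to the interesting columns, something like this should also work (a sketch; UNCOMPRESSED_SIZE and COMPRESSION_TYPE are M_CS_COLUMNS columns, but please verify on your revision):

   -- Per-column compression type and in-memory vs. uncompressed size
   SELECT COLUMN_NAME, COMPRESSION_TYPE,
          MEMORY_SIZE_IN_TOTAL, UNCOMPRESSED_SIZE
     FROM M_CS_COLUMNS
    WHERE TABLE_NAME = 'TAB'
    ORDER BY UNCOMPRESSED_SIZE DESC;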

Regards,

Rama

Former Member

Hi Rama,

The tables are column store.

Regards,

Vjola