
HANA DB - System refresh takes long time (running to infinite)

Former Member

Hello colleagues,

My environment consists of a HANA appliance on revision 91 with 256 GB of RAM, containing two HANA systems (HDD and HDQ). Each system holds two schemas (we preferred not to use a multitenant database installation) for NetWeaver solutions (BW on HANA, ABAP+Java).

After a crash due to insufficient memory in system HDD, we raised that database's memory limit from 60 GB to 100 GB.

However, after this change, system HDQ, which still has 100 GB configured, stopped showing any information in the HANA administration console. The same information is missing in DB02 as well - the process seems to run in an infinite loop that only ends after two hours.

For system HDD, everything is OK.

This happens both in HANA Studio on my laptop and on the server console (both HANA Studio installations are on revision 91).

I've opened an OSS message, because there are no errors in the trace logs.

Has anyone seen a situation like this? I can't figure out what is going on, because everything else seems to be fine.

In case it helps, I've started a table consistency check through hdbsql. It is still running:

CALL CHECK_TABLE_CONSISTENCY ('CHECK', NULL, NULL)
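While the check runs, its thread can be watched from a second hdbsql session. A minimal sketch, assuming the standard SYS.M_SERVICE_THREADS monitoring view is available on this revision:

```sql
-- Sketch: list active service threads, longest-running first,
-- to see whether the consistency check is still making progress
SELECT HOST, PORT, THREAD_TYPE, THREAD_STATE, DURATION
  FROM SYS.M_SERVICE_THREADS
 WHERE THREAD_STATE <> 'Inactive'
 ORDER BY DURATION DESC;
```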

thank you

Lucas Morrone

Accepted Solutions (1)


Former Member

Hello Sunil,

I believe I've found the solution on HDQ:

  • I noticed the statistics server was down, so I restarted it;
  • In hdbsql, I ran a consistency check on the schema _SYS_STATISTICS:

       call CHECK_TABLE_CONSISTENCY ('CHECK', '_SYS_STATISTICS', NULL)

An exception occurred:

          129: transaction rolled back by an internal error: exception 1000002: Allocation failed ; $size$=48; $name$=TableConsistencyCheck; $type$=pool; $inuse_count$=124889726; $allocated_size$=5994739952

But after this, I could access all information through HANA Studio again.

I have now raised the memory limit for HDQ from 100 GB to 110 GB and restarted the table consistency check.

At this point, though, I believe this was the solution.
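For anyone hitting the same symptom, the state of the statistics server can be checked from hdbsql before restarting anything. A sketch, assuming the standard SYS.M_SERVICES view:

```sql
-- Sketch: list all services and whether they are running;
-- statisticsserver should show ACTIVE_STATUS = 'YES'
SELECT HOST, PORT, SERVICE_NAME, ACTIVE_STATUS
  FROM SYS.M_SERVICES;
```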

The next action for these systems is:

Split the DB and CI, moving the CI to a separate server apart from the HANA appliance. The current setup was an architecture mistake, and I'm going to fix it this week. That way, HDD and HDQ will have more resources dedicated purely to DB operations, without NetWeaver competing for them.

Thank you very much for your interest in helping!

Former Member

Thank you, Lucas - good to know it is working now, and thanks for sharing the steps you took.

You seem to be on a Standalone Statistics Server (SSS) setup.

From SPS 07 there is the option of the Embedded Statistics Server (ESS), which helps improve memory usage on HANA.

Please consider implementing this in your HANA systems.
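For reference, the switch to the Embedded Statistics Server is driven by a single configuration change, described in SAP Note 1917938 (check the note for prerequisites on your revision before running it). A sketch:

```sql
-- Sketch: trigger migration from the standalone statistics server
-- to the embedded statistics service (per SAP Note 1917938)
ALTER SYSTEM ALTER CONFIGURATION ('nameserver.ini', 'SYSTEM')
  SET ('statisticsserver', 'active') = 'true' WITH RECONFIGURE;
```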

Answers (1)


Former Member

Hi Lucas,

How did you change the memory allocation on HDD from 60 GB to 100 GB?

Do you know how much memory HDQ was using when this change was made?

If they have not been restarted since the change, can you please restart the HDD and HDQ instances to make sure the new memory allocations take effect?

NOTE: I am assuming these are not production systems, going by the shared HANA appliance setup. Make sure the SAP application is stopped when the HANA system is restarted.

In my opinion, the memory change was not picked up on the system side, which results in the unresponsive behavior in HANA Studio.
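For the record, the limit can also be set from hdbsql instead of the HANA Studio configuration tab. A sketch: the value is given in MB, so 100 GB = 102400, and a restart is still advisable so every service starts under the new limit.

```sql
-- Sketch: set global_allocation_limit to 100 GB (value is in MB)
ALTER SYSTEM ALTER CONFIGURATION ('global.ini', 'SYSTEM')
  SET ('memorymanager', 'global_allocation_limit') = '102400'
  WITH RECONFIGURE;
```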

Former Member

Hello Sunil.

You're right, these aren't production systems.

What I've done so far:

  • I changed HDD to 100 GB in HANA Studio --> Configuration tab --> global.ini --> memorymanager --> global_allocation_limit;
  • Restarted HDD and HDQ;
  • I noticed a lot of memory was still allocated while the HANA instances were down, so I decided to restart the appliance;
  • I checked all alerts: there were some dumps in HDQ related to out-of-memory errors, and the compile server was down, so I restarted the whole instance. The services are now up and running, but the behavior still persists. Take a look.
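To confirm whether the new limit is actually in effect, and how much memory the instance currently holds, something like the following can be run from hdbsql (a sketch, assuming the standard SYS.M_HOST_RESOURCE_UTILIZATION monitoring view):

```sql
-- Sketch: effective allocation limit vs. current instance memory usage, in GB
SELECT HOST,
       ROUND(ALLOCATION_LIMIT / 1024 / 1024 / 1024, 1) AS LIMIT_GB,
       ROUND(INSTANCE_TOTAL_MEMORY_USED_SIZE / 1024 / 1024 / 1024, 1) AS USED_GB
  FROM SYS.M_HOST_RESOURCE_UTILIZATION;
```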

Former Member

Hi Lucas,

Going by the screenshot you attached, which shows a yellow triangle on the HDQ state in HANA Studio, HANA system HDQ does not seem to have come up completely.

It seems that HDQ does not have enough resources to start up.

What is the size of the HANA database on HDQ? Is it larger than HDD's?

Could you please try the following:

1. Stop HANA on both the HDD and HDQ systems.

2. Allocate global_allocation_limit = 120 GB for HDQ and try starting it; check whether it comes up clean.

3. If it does not, allocate 130 GB and try again. The objective is to make sure HDQ has enough resources to start up clean.

4. After HDQ is online and working, HDD can be given an appropriate global_allocation_limit and started (80 GB, assuming HDQ starts with 120 GB).

5. You may also try using some of the 56 GB you plan to leave for the Linux operating system; typically Linux needs about 10 GB of memory. Leaving 20 GB for Linux, you may still be able to use about 16 GB more for either HDD or HDQ to avoid the startup errors.
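The steps above can be sketched as a command sequence run as each system's <sid>adm user on the appliance. The instance number and password are placeholders; this assumes the standard hdbsql client and HDB start/stop scripts of a regular HANA installation.

```
# Raise HDQ's limit while it is still reachable (value in MB: 120 GB = 122880)
hdbsql -i <instance_nr> -u SYSTEM -p <password> \
  "ALTER SYSTEM ALTER CONFIGURATION ('global.ini','SYSTEM') \
   SET ('memorymanager','global_allocation_limit') = '122880' WITH RECONFIGURE"

# Restart so every service starts under the new limit
HDB stop
HDB start

# Repeat for HDD with its limit (80 GB = 81920 MB) once HDQ is up clean
```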

Please check and let me know if the steps help.