
ORA-04031: unable to allocate 4080 bytes of shared memory

Former Member

Dear All

In the last week I have had this problem twice:

"ORA-04031: unable to allocate 4080 bytes of shared memory ("shared pool", "unknown object", "sga heap(1,0)", "library cache")"

I have an SAP ticket open, but SAP Support asked me to identify which SQL statement is causing the problem.

I have more than 300 users on my SAP system (R/3 4.7 on SLES 9 and Oracle 10.2.0.2), and at least 100 of them have SE16 authorization.

I suspect that a user is performing a selection with thousands of single values.

I would like to change my "shared_pool_size" value from 1 GB to 2 GB, but I don't think it will solve the problem.

Let's say the problem is a "strange" selection made by my users through SE16 on a cluster table... do you suggest calculating table/index statistics for my cluster tables?

I know that this is a workaround, but what kind of problems can a bigger shared_pool_size cause? My DB + Central Instance server has 5 GB of free memory, and I'm wondering whether it's safe to set this parameter to a very big value (3 or 4 GB).
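
As a first check, something like this (just a sketch, run as SYSDBA in sqlplus; the MB rounding is only for readability) should show how much of the current shared pool is actually free before I resize it:

-- free memory currently left in the shared pool
SELECT pool, name, ROUND(bytes/1024/1024) AS mb
  FROM V$SGASTAT
 WHERE pool = 'shared pool'
   AND name = 'free memory';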

Could you tell me what you think about it?

Thank you in advance

Federico Biavati

Accepted Solutions (0)

Answers (2)


stefan_koehler
Active Contributor

Hello Federico,

this is a really hard question, because memory management is very complex, and without seeing the values on your live system it is even harder ... but let's try it.

>I would like to change my "shared_pool_size" value from 1 GB to 2 GB, but I don't think it will solve the problem.

But I think so. Maybe it will not solve the problem, but if you have no idea how to analyze this topic, I would try this first to get a solution.

> I have more than 300 users on my SAP system (R/3 4.7 on SLES 9 and Oracle 10.2.0.2), and at least 100 of them have SE16 authorization. I suspect that a user is performing a selection with thousands of single values

Maybe ... or it is a very big execution plan or a complex query ... there are many cases in which this ORA-04031 can happen.

You can query the view V$SQL to get more information about each child cursor.

http://download.oracle.com/docs/cd/B19306_01/server.102/b14237/dynviews_2113.htm#i1417057

>shell> sqlplus "/ as sysdba"

>SQL> SELECT SQL_ID, SHARABLE_MEM, PERSISTENT_MEM, RUNTIME_MEM FROM V$SQL;
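
To spot the biggest consumers, a variation along these lines (just a sketch; the 1 MB cut-off is an arbitrary example value) lists the child cursors with the most sharable memory first:

-- child cursors using more than ~1 MB of sharable memory, biggest first
SELECT SQL_ID, CHILD_NUMBER, SHARABLE_MEM, PERSISTENT_MEM, RUNTIME_MEM,
       SUBSTR(SQL_TEXT, 1, 80) AS SQL_TEXT_START
  FROM V$SQL
 WHERE SHARABLE_MEM > 1048576
 ORDER BY SHARABLE_MEM DESC;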

You can also use the AWR or the DBA_HIST_SQLSTAT view to get more information about a defined timeframe in the past. Maybe you can also analyze the shared pool (and its sub pools / heaps / etc.) with a system state dump ... but to be honest this will be very hard.
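
For the AWR approach, a query roughly like this (just a sketch; the one-day window is only an example, and the join uses the usual snapshot key columns) shows which statements held a lot of sharable memory in the recent past:

-- sharable memory per statement and snapshot for the last day
SELECT s.begin_interval_time, q.sql_id, q.sharable_mem, q.version_count
  FROM dba_hist_sqlstat q, dba_hist_snapshot s
 WHERE q.snap_id = s.snap_id
   AND q.dbid = s.dbid
   AND q.instance_number = s.instance_number
   AND s.begin_interval_time > SYSDATE - 1
 ORDER BY q.sharable_mem DESC;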

Regards

Stefan

Former Member

Hi Stefan,

Thank you for the recommendations; this is what I'm doing now:

1) Check what happens in my shared memory.

I'm running this script in order to save into the ZSQL table all the statements that are using more than 5 MB of sharable memory.

This doesn't solve the problem, but in case of a crash it can help me identify its cause.

BEGIN
  WHILE (1=1) LOOP
    -- snapshot every statement currently using >= 5 MB of sharable memory,
    -- skipping statements that are already stored in ZSQL
    INSERT INTO ZSQL
      SELECT * FROM V$SQL
       WHERE SHARABLE_MEM >= 5000000
         AND NOT EXISTS (SELECT * FROM ZSQL WHERE V$SQL.SQL_TEXT = ZSQL.SQL_TEXT);
    COMMIT;
    DBMS_LOCK.SLEEP(60);   -- wait one minute, then check again
  END LOOP;
END;
/
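
(For reference, ZSQL only needs the same column structure as V$SQL; a minimal sketch to create such an empty copy would be:)

-- empty table with the same columns as V$SQL
CREATE TABLE ZSQL AS
  SELECT * FROM V$SQL WHERE 1 = 0;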

2) Limit the selection screen entries.

We found a way to limit the number of entries that can be pasted from the buffer into a report selection input field.

We are setting it to 1000.

3) Increase the "shared_pool_size" from 1 to 2 GB.

I'll do it this week, but I have a doubt: what kind of problems may I have if I increase it to a very big value (for example 3 or 4 GB)?

Is it advisable, or could it cause different/new problems?
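
For reference, the change I plan to apply would be roughly this (a sketch, assuming the instance runs on an spfile; with a classic init<SID>.ora the value would be edited in the file instead, and a restart is needed in both cases):

-- new value becomes effective after the next instance restart
ALTER SYSTEM SET shared_pool_size = 2G SCOPE = SPFILE;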

Thank you for your advice.

Federico

stefan_koehler
Active Contributor

Hello Federico,

> 1) Check what happens in my shared memory.

There is no need to do this. As I already wrote, you have this information in the AWR and you can query it by using the view DBA_HIST_SQLSTAT.

http://download.oracle.com/docs/cd/B19306_01/server.102/b14237/statviews_3183.htm#I1023447

> what kind of problems may I have if I increase it to a very big value (for example 3 or 4 GB)?

> Is it advisable, or could it cause different/new problems?

It also depends on the OS and the exact Oracle version. You may get more sub pools of the shared pool (depending on the shared pool size, the number of CPUs, etc.). You can take a look at Metalink note #396940.1 - it describes the algorithm and the meaning of the sub pools, and how you can locate the pool in which the ORA-04031 occurs.
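
If you want to see how many sub pools your instance currently uses, a query like this (just a sketch; it reads X$ fixed tables, so it must be run as SYSDBA, and _kghdsidx_count is an undocumented parameter) shows the sub pool count:

-- number of shared pool sub pools configured for this instance
SELECT nam.ksppinm AS parameter, val.ksppstvl AS value
  FROM x$ksppi nam, x$ksppsv val
 WHERE nam.indx = val.indx
   AND nam.ksppinm = '_kghdsidx_count';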

But normally there should be no problems.

If you are really interested in the memory handling, you can take a look at these two great presentations by Julian Dyke.

http://www.juliandyke.com/Presentations/SGAInternals.ppt

http://www.juliandyke.com/Presentations/LibraryCacheInternals.ppt

Regards

Stefan

Former Member

Hi Stefan,

I'll check the linked documentation.

Thank you for your help and for the time you spent helping me!

Federico

markus_doehr2
Active Contributor

> I know that this is a workaround, but what kind of problems can a bigger shared_pool_size cause? My DB + Central Instance server has 5 GB of free memory, and I'm wondering whether it's safe to set this parameter to a very big value (3 or 4 GB).

If your system is a 32-bit system, then this won't work.
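
A quick way to verify which Oracle software is actually installed (just a sketch; on a 64-bit installation the banner line normally contains "64bit Production"):

-- the banner shows whether the binaries are 32-bit or 64-bit
SELECT banner FROM V$VERSION;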

Markus

Former Member

Hi Markus,

I forgot to mention that my OS is SLES 9 SP3 x64.

Thanks,

Federico