on 08-26-2011 9:25 PM
Dear All,
In one of our Java-based systems, the file system /usr/sap has become full.
I need to delete some old files.
I found some old files, like .....heapdump1208338.1313527999.phd, in the following path:
/usr/sap/<SID>/JC00/j2ee/cluster/server0
Can I delete them?
Please suggest.
Dear satu,
Hope you are doing well.
Please see SAP Note 1589548 for the Java server filling up and Note 16513 for the ABAP side:
1589548 - J2EE engine trace files fill up at a rapid pace
and
16513 - File system is full - what do I do?
However, for the heap dump, please check the reason for it; otherwise you will face other occurrences later.
If you face the error again, kindly check the note below on how to generate the heap dump:
SAP Note 1004255 - How to create a full HPROF heap dump of J2EE Engine
As I am not sure about your OS, I am mentioning all notes:
AIX: 1259465 How to get a heapdump which can be analyzed with MAT
LNX: 1263258 IBM JDK 1.4.2 x86_64: How to get a proper heapdump
AS400: 1267126 IBM i: How to get a heapdump which can be analyzed
Z/OS: 1336952 DB2-z/OS: Creating a heapdump which can be analyzed
HP-UX: 1053604 JDK heap dump and heap profiling on HP-UX
There is no side effect from the heap dump parameter itself; it will, however, write a heap dump, so make sure there is enough free space on the server. Even if free space is low, it will not harm the server in any way; the dump written will simply be incomplete, which will hinder the analysis.
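As a small sketch of the free-space check suggested above (the helper name and the size argument are illustrative, not from any SAP note; a full dump can approach the configured Java heap size, -Xmx, so that is a reasonable value to check against):

```shell
#!/bin/sh
# Sketch: check whether a filesystem has enough free space for a heap
# dump of a given size. Helper name and threshold are illustrative.
check_dump_space() {
  dir="$1"      # directory the dump will be written to
  need_mb="$2"  # expected dump size in MB (roughly the -Xmx value)
  avail_kb=$(df -Pk "$dir" | awk 'NR==2 {print $4}')
  if [ "$avail_kb" -ge $((need_mb * 1024)) ]; then
    echo "OK: ${dir} has room for a ${need_mb} MB dump"
  else
    echo "WARN: only $((avail_kb / 1024)) MB free in ${dir}"
  fi
}
# Example: check_dump_space /usr/sap 2048
```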
More details are available here:
[http://www.sdn.sap.com/irj/scn/elearn?rid=/library/uuid/f0a5d007-a35f-2a10-da9f-99245623edda&overridelayout=true]
[https://www.sdn.sap.com/irj/sdn/wiki?path=/display/java/javaMemoryAnalysis]
Thank you and have a nice day :).
_____________
Kind Regards,
Hemanth
SAP AGS
Edited by: Hemanth Kumar on Aug 28, 2011 9:24 PM
Hi,
Please look for files named "core" and delete them:
find /usr/sap -name core -exec rm {} \;
Another step you can follow:
Another helpful command for identifying large files is 'find' with the '-size' option. For example, to list all files under /usr/sap/SID of 10 million bytes or larger, use:
find /usr/sap/SID -type f -size +10000000c -exec ls -ld {} \;
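Wrapped as a small helper (the function name is an illustrative addition), the same size check looks like this:

```shell
#!/bin/sh
# Sketch: list files larger than 10,000,000 bytes under a directory,
# as described above. The helper name is illustrative.
large_files() {
  find "$1" -type f -size +10000000c -exec ls -ld {} \;
}
# Example: large_files /usr/sap/SID
```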
Actually, there is a core file in the /usr/sap work directory, but its size shows as 0 bytes. Can I delete this file? Delete it, then restart the server.
Then try it again. If you want more details, please follow the SAP Note below.
Also, please check SAP Note 16513.
Regards,
K.Ramamoorthy
Are you using the heap dump for any analysis?
Usually these dumps are created because of an out-of-memory issue.
You could disable it by removing the -Xdump option. Check SAP Note 1053495 about turning the settings on/off.
Yes, you can move your old heap dumps if you do not need them for any analysis now.
However, if you are not sure, it is better to zip a heap dump so that it becomes smaller, then move it to any other file system where you have space, in case you need it someday. If you do need it, you can recover it; later, remove it.
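The zip-and-move step described above could look like this (gzip is used as the compressor; the helper name and the destination path are illustrative assumptions):

```shell
#!/bin/sh
# Sketch of the archiving advice above: compress a heap dump, then
# move it to another filesystem with free space. The helper name and
# the /archive destination in the example are illustrative.
archive_dump() {
  dump="$1"   # e.g. a .phd heap dump under /usr/sap/...
  dest="$2"   # a filesystem with free space
  gzip "$dump"            # creates "$dump.gz" and removes the original
  mv "$dump.gz" "$dest/"
}
# Example:
# archive_dump /usr/sap/<SID>/JC00/j2ee/cluster/server0/heapdump1208338.1313527999.phd /archive
```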
thanks
ashish
You can use Unix tools like du -sk * | sort -rn to check which directories are consuming a large amount of space.
Just go to /usr/sap/SID and issue the above command; it will give you a good idea of which files are consuming the most space.
Are you keeping dumps or packages under /usr/sap?
Please do not delete any files other than old logs.
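As a runnable sketch of that check (the function wrapper and the ten-entry limit are illustrative additions to the du | sort pipeline above):

```shell
#!/bin/sh
# Sketch: report the largest entries under a directory, biggest first,
# using the du | sort pipeline described above. The wrapper and the
# ten-entry head limit are illustrative.
top_usage() {
  (cd "$1" && du -sk * | sort -rn | head -10)
}
# Example: top_usage /usr/sap/SID
```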
Thanks
Sukrut