
SAP recommended ninode value for HPUX/Oracle

Former Member

Hi everybody,

As per SAP Note 172747, SAP recommends a value of 34816 for the kernel parameter "ninode" on an SAP/Oracle/HP-UX installation.

The current inode usage on one of the SAP systems is over 53%, and in the future we would like to increase the ninode parameter value. So I would like to know which other OS kernel parameter values the value 34816 is based on. Is there a defined calculation?

Kernel parameter   Current   Used
nfile              65536     19519
nproc              2560      499
npty               60        3
nstrpty            60
nstrtel            60
maxusers           256
maxfiles           1024
ncsize (DNLC)      39936
ninode             34816     18817

Any suggestions on this would be of great help.

Thanks,

Gangadhar

Accepted Solutions (0)

Answers (2)

Former Member

Hi Gangadhar,

If you raise ninode, which means raising the number of inodes that can be held open in memory, you do not necessarily need to raise other parameters such as nfile (number of open files), although they are directly related, since opening a file requires a free slot in both the nfile and the ninode tables.

The values recommended in the note are usually safe and do not need to be changed, but setting up monitoring is always a good idea. 53% utilization is a normal state; I would only consider changing this parameter if it starts to exceed the 85% mark, which could be the beginning of a dangerous situation, but 53% is acceptable.
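As a minimal illustration of that rule of thumb (just a Python sketch; the configured/used pairs are the values quoted at the top of this thread, adjust them to your own system), a quick check could look like this:

# Flag a kernel table once its utilization passes 85%.
TABLES = {
    "nfile":  (65536, 19519),
    "nproc":  (2560, 499),
    "ninode": (34816, 18817),
}

for name, (configured, used) in TABLES.items():
    pct = 100.0 * used / configured
    status = "consider raising" if pct > 85 else "ok"
    print(f"{name:7s} {pct:5.1f}% used -> {status}")
# ninode comes out around 54%, i.e. well inside the acceptable range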

Cheers,

Maurício

Former Member

Hi,

The following note may give some insight on this.

Note 1564743 - ENFILE - Too many open files

Parameters "nfile" and "ninode"

From Note 546006 - Problems with Oracle due to operating system errors

Point# 10 in Solution

#File descriptors = #Data files * processes

"Processes" here means the init<sid>.ora parameter processes. Therefore, with 100 processes and 50 data files, for example, you get a minimum value of 5000 for NFILE on HP-UX. For large installations with several hundred data files, there may also be values of around 100,000.

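As a small illustration of that sizing rule (a Python sketch only; the 50/100 figures are the note's own example, the 500/200 pair is just a placeholder for a large installation):

def nfile_minimum(num_datafiles, ora_processes):
    # Rule of thumb from Note 546006, point 10:
    #   #file descriptors = #data files * processes
    return num_datafiles * ora_processes

print(nfile_minimum(50, 100))   # 5000, the note's own example
print(nfile_minimum(500, 200))  # several hundred data files -> 100000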
Br,

Venky

Former Member

Thank you Venkatesh and Mauricio for the replies and sorry for the delayed reply.

@Venkatesh

The SAP system here is an application server only. Additionally, the middleware software "IBM MQSeries" is installed on it, which acts as an interface for global data transfer. So I wanted to know whether there is any defined calculation behind the ninode value recommended in the SAP note.

@Mauricio

To avoid recovery problems, I think it is safe to keep ninode usage below 50%, especially when many disks are used and parallel restores are started.

I found some formulae for ninode in a Google search:

((NPROC + 16 + MAXUSERS) + 32 + (2 * NPTY))

(8 * NPROC + 2048)

(NPROC + 80 + (13 * MAXUSERS))

which means that ninode has dependencies on nproc, maxusers and npty (evaluated in the short sketch after the kmtune output below).

Below are the currently configured parameters.

#kmtune -q nproc

Parameter Current Dyn Planned Module Version

===============================================================================

nproc 2560 - (10*MAXUSERS)

#kmtune -q ninode

Parameter Current Dyn Planned Module Version

===============================================================================

ninode 34816 - 34816

#kmtune -q maxusers

Parameter Current Dyn Planned Module Version

===============================================================================

maxusers 256 - 256

#kmtune -q npty

Parameter Current Dyn Planned Module Version

===============================================================================

npty 60 - 60
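Plugging the kmtune values above (NPROC=2560, MAXUSERS=256, NPTY=60) into the formulae from the Google search, a minimal Python sketch (the formulae are only candidates I found; whether they match the defaults of a particular HP-UX release is an assumption):

NPROC, MAXUSERS, NPTY = 2560, 256, 60

# Candidate ninode formulae quoted earlier in this post
candidates = {
    "((NPROC+16+MAXUSERS)+32+(2*NPTY))": (NPROC + 16 + MAXUSERS) + 32 + (2 * NPTY),
    "(8*NPROC+2048)": 8 * NPROC + 2048,
    "(NPROC+80+(13*MAXUSERS))": NPROC + 80 + (13 * MAXUSERS),
}

for formula, value in candidates.items():
    print(formula, "=", value)
# prints 2984, 22528 and 5968 respectively

None of these comes close to the recommended 34816, so the note's figure looks more like a flat recommendation than a value derived from these formulae.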

If the ninode value is increased so that ninode usage drops below 50%, will there be any impact on system performance?

Thanks in advance,

Gangadhar

Former Member

Hi Gangadhar,

No, it should not cause a performance issue. Those structures are held in memory, but the space they use is so small that you should not even notice the parameter change on your system.

Cheers,

Maurício