
Start of NetWeaver 7.10 with many dynamic WPs fails

former_member506576
Discoverer
0 Kudos

Hello,

we have upgraded a system to NetWeaver 7.10 and are now testing the possibilities of dynamic work processes, since they are very interesting to us. According to SAP, a maximum of 600 dynamic work processes is possible.

We can configure up to 240 work processes (rdisp/wp_max_no=240) and start the application server without any problems.
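For reference, our test instance profile looks roughly like this (the static WP counts below are just example values; only rdisp/wp_max_no and rdisp/dynamic_wp_check matter for the problem):

    # instance profile excerpt (illustrative values)
    rdisp/wp_no_dia        = 10    # static dialog WPs (example value)
    rdisp/wp_no_btc        = 4     # static background WPs (example value)
    rdisp/wp_no_upd        = 2     # static update WPs (example value)
    rdisp/wp_no_spo        = 1     # static spool WPs (example value)
    rdisp/dynamic_wp_check = 1     # see the dev_disp trace below
    rdisp/wp_max_no        = 240   # works; 241 or more crashes the dispatcher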

If we configure more than 240 dynamic work processes (e.g. rdisp/wp_max_no=241 or higher), the dispatcher fails with the following error messages:

*** ERROR => EvtCreate: Variable Event Keys exhausted [evtux.c 612]

*** ERROR => DpWpEvtInit: EvtCreate (rc=1) [dpxxwp.c 447]

*** DP_FATAL_ERROR => DpSapEnvInit: DpWpEvtInit

*** DISPATCHER EMERGENCY SHUTDOWN ***

increase tracelevel of WPs

No matter which trace we turn on, there is no more information than this.
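(For completeness: we raised the developer trace level via the instance profile, e.g.

    rdisp/TRACE = 3    # developer trace level; 3 = maximum detail

but even at the highest level the output around the error stays the same.)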

Has anyone experienced this type of problem before? What could be the problem?

I cannot find any information about this error message.

Here is a summary of the dev_disp trace file:

[...]

Wed Mar 25 09:33:53 2009

kernel runs with dp version 98(ext=114) (@(#) DPLIB-INT-VERSION-098)

length of sys_adm_ext is 376 bytes

*** SWITCH TRC-HIDE on ***

***LOG Q00=> DpSapEnvInit, DPStart (37 786492) [dpxxdisp.c 1228]

shared lib "dw_xml.so" version 139 successfully loaded

shared lib "dw_xtc.so" version 139 successfully loaded

shared lib "dw_stl.so" version 139 successfully loaded

shared lib "dw_gui.so" version 139 successfully loaded

shared lib "dw_mdm.so" version 139 successfully loaded

shared lib "dw_rndrt.so" version 139 successfully loaded

shared lib "dw_abp.so" version 139 successfully loaded

shared lib "dw_sym.so" version 139 successfully loaded

rdisp/softcancel_sequence : -> 0,5,-1

rdisp/dynamic_wp_check : 1

MtxInit: 30000 0 0

DpSysAdmExtInit: ABAP is active

DpSysAdmExtInit: VMC (JAVA VM in WP) is active

DpIPCInit2: write dp-profile-values into sys_adm_ext

DpIPCInit2: start server >va25m0_TBX_37 <

DpShMCreate: sizeof(wp_adm) 350416 (1448)

DpShMCreate: sizeof(tm_adm) 64665856 (25856)

DpShMCreate: sizeof(wp_ca_adm) 56000 (56)

DpShMCreate: sizeof(appc_ca_adm) 123200 (56)

DpCommTableSize: max/headSize/ftSize/tableSize=2000/16/2304048/2304064

DpShMCreate: sizeof(comm_adm) 2304064 (1136)

DpSlockTableSize: max/headSize/ftSize/fiSize/tableSize=512/48/65584/90400/156032

DpShMCreate: sizeof(slock_adm) 156032 (104)

DpFileTableSize: max/headSize/ftSize/tableSize=40200/16/3859248/3859264

DpShMCreate: sizeof(file_adm) 3859264 (80)

DpShMCreate: sizeof(vmc_adm) 374672 (1864)

DpShMCreate: sizeof(wall_adm) (320048/432336/64/104)

DpShMCreate: sizeof(gw_adm) 48

DpShMCreate: sizeof(j2ee_adm) 2016

DpShMCreate: SHM_DP_ADM_KEY (addr: 700000020002000, size: 72653616)

DpShMCreate: allocated sys_adm at 700000020002010

DpShMCreate: allocated wp_adm_list at 700000020003ed0

DpShMCreate: allocated wp_adm at 7000000200040c0

DpShMCreate: allocated tm_adm_list at 7000000200599a0

DpShMCreate: allocated tm_adm at 7000000200599f0

DpShMCreate: allocated wp_ca_adm at 700000023e05300

DpShMCreate: allocated appc_ca_adm at 700000023e12dd0

DpShMCreate: allocated comm_adm at 700000023e30f20

DpShMCreate: allocated slock_adm at 700000024063770

DpShMCreate: allocated file_adm at 700000024089900

DpShMCreate: allocated vmc_adm_list at 700000024437c50

DpShMCreate: allocated vmc_adm at 700000024437d00

DpShMCreate: allocated gw_adm at 7000000244934a0

DpShMCreate: allocated j2ee_adm at 7000000244934e0

DpShMCreate: allocated ca_info at 700000024493cd0

DpShMCreate: allocated wall_adm at 700000024493cf0

DpCommAttachTable: attached comm table (header=700000023e30f20/ft=700000023e30f30)

DpSysAdmIntInit: initialize sys_adm

MBUF state OFF

DpCommInitTable: init table for 2000 entries

DpFileInitTable: init table for 40200 entries

DpSesCreateTable: created session table at 700000060000000 (len=1324896)

DpRqQInit: keep protect_queue / slots_per_queue 0 / 2001 in sys_adm

DpParseQueueSizeCheck: invalid input 1 (0)

DpParseQueueSizeCheck: error parsing >1<

*** ERROR => EvtCreate: Variable Event Keys exhausted [evtux.c 612]

*** ERROR => DpWpEvtInit: EvtCreate (rc=1) [dpxxwp.c 447]

*** DP_FATAL_ERROR => DpSapEnvInit: DpWpEvtInit

*** DISPATCHER EMERGENCY SHUTDOWN ***

increase tracelevel of WPs

[...]

NiWait: sleep (5000ms) ...

NiISelect: timeout 5000ms

NiISelect: maximum fd=1

NiISelect: read-mask is NULL

NiISelect: write-mask is NULL

Wed Mar 25 09:34:08 2009

NiISelect: TIMEOUT occured (5000ms)

DpHalt: shutdown server >va25m0_TBX_37 < (normal)

DpJ2eeDisableRestart

DpHalt: switch off Shared memory profiling

ShmProtect( 57, 3 )

ShmProtect(SHM_PROFILE, SHM_PROT_RW

ShmProtect( 57, 1 )

ShmProtect(SHM_PROFILE, SHM_PROT_RD

DpWakeUpWps: wake up all wp's

*** ERROR => EvtSet: Ill. Event Handle = 0 [evtux.c 1238]

*** ERROR => DpWakeUpWps: EvtSet (rc=2) [dpxxwp.c 1670]

*** ERROR => EvtSet: Ill. Event Handle = 0 [evtux.c 1238]

*** ERROR => DpWakeUpWps: EvtSet (rc=2) [dpxxwp.c 1670]

DpHalt: stop work processes

DpHalt: terminate gui connections

DpHalt: wait for end of work processes

DpHalt: not attached to the message server

DpHalt: cleanup EM

SHM2_EsCleanup: ====================

EmCleanup() -> 0

Es2Cleanup: Cleanup ES2

SemKeyPermission( 65 ) = 0740 (octal)

ShmCreate( 76, 0, 2, 0x11063e460 )

ShmKeyPermission( 76 ) = 0740 (octal)

*** ERROR => EvtClose: Invalid Event Handle [evtux.c 910]

*** ERROR => EvtClose: Invalid Event Handle [evtux.c 910]

DpHalt: cleanup event management

DpHalt: cleanup shared memory/semaphores

SemKeyPermission( 1 ) = 0740 (octal)

SemKeyPermission( 6 ) = 0740 (octal)

SemKeyPermission( 7 ) = 0740 (octal)

SemKeyPermission( 8 ) = 0740 (octal)

[...]

SemKeyPermission( 69 ) = 0740 (octal)

SemKeyPermission( 70 ) = 0740 (octal)

DpHalt: MiCleanup

ShmCleanup( 62 )

ShmCreate( 62, 0, 2, 0xfffffffffffece0 )

ShmKeyPermission( 62 ) = 0740 (octal)

*** ERROR => ShmCleanup(62) failed 3 [mpixx.c 3943]

MpiCleanup() -> MPI_ERROR: General error

DpHalt: removing Semaphore-Management

DpHalt: removing request queue

ShmCleanup( 31 )

ShmCreate( 31, 0, 2, 0xfffffffffffecf0 )

[...]

ShmCleanup( 10 )

ShmCreate( 10, 0, 2, 0xfffffffffffed70 )

ShmProtect( 10, 3 )

ShmCreate( 10, 0, 2, -> 0x700000000000000 )

DpHalt: closing connect handles (dgm + tcp)

***LOG Q05=> DpHalt, DpHalt ( 786492) [dpxxdisp.c 11179]

DpHalt: *** shutdown completed - server stopped ***

DpHalt: Good Bye .....

Thanks a lot,

Silke Brandt

Accepted Solutions (1)

tim_buchholz
Active Participant
0 Kudos

Dear Silke,

as far as I understand the underlying coding, you cannot have more than 240 work processes with the current 7.10 kernel. I have contacted the responsible developers about this. It would be best if you open a message under component BC-CST-LL describing the problem. It is very likely a kernel bug.
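To illustrate the failure mode (this is NOT actual kernel code; the pool size and the reserved-key count below are made up so that the sketch reproduces the observed boundary at 240): if the dispatcher draws one event key per work process from a fixed-size pool, EvtCreate starts failing as soon as the pool is empty, exactly as in your dev_disp trace.

    /* Hypothetical C sketch of the suspected failure mode. */
    #include <stdio.h>

    #define MAX_VAR_EVT_KEYS 256   /* assumed fixed pool size */
    #define RESERVED_KEYS     16   /* assumed keys used by the dispatcher itself */

    static int keys_in_use = RESERVED_KEYS;

    /* returns 0 on success, 1 when the key pool is exhausted */
    static int EvtCreate_sketch(void)
    {
        if (keys_in_use >= MAX_VAR_EVT_KEYS)
            return 1;              /* "Variable Event Keys exhausted" */
        keys_in_use++;
        return 0;
    }

    int main(void)
    {
        int wp_max_no = 241;       /* value of rdisp/wp_max_no */

        for (int wp = 0; wp < wp_max_no; wp++) {
            if (EvtCreate_sketch() != 0) {
                printf("ERROR => EvtCreate failed for WP %d (rc=1)\n", wp);
                return 1;          /* -> dispatcher emergency shutdown */
            }
        }
        printf("all %d work processes got an event key\n", wp_max_no);
        return 0;
    }

With these assumed constants the 241st work process is the first one that finds the pool empty, which matches the behavior you describe (240 works, 241 fails).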

Best Regards,

Tim

former_member506576
Discoverer
0 Kudos

Hello Tim,

I opened a message as you suggested and I will report the result as soon as I have it.

Thanks for the information,

Silke

Former Member
0 Kudos

Silke,

240 WPs is quite a lot. How many users are you expecting to work on the system?

André

former_member506576
Discoverer
0 Kudos

Hi André,

how many WPs we will really use is not certain yet. Right now we are just testing the usage of these dynamic WPs, since our main system has not been upgraded yet.

On our productive system we have 9 application servers with about 600 users on each server. Additionally, we have quite a lot of RFC load coming in from outside, and the tendency is rising.

Let me try to explain our "problem" a bit:

We are a bank, so days like the first day of a new quarter or a new year are especially tricky. On those days the nightly processing might run into the morning while the RFC load is very high, so it would be very helpful to have more WPs available.

I am sure that you, as a customer of a bank, would not be very happy if you could not use your online banking on those days, which could happen if not enough WPs are available at peak times.

In addition to that, we are thinking about reducing the number of our application servers, not only to reduce administration overhead but also to spread the load better across our DB2 members and host LPARs. This is only possible if we can configure more than 100 WPs per server.

I hope this gave you a little insight.

Greetings,

Silke

tim_buchholz
Active Participant
0 Kudos

Dear Silke,

just to let you know: there is a kernel bug that prevents you from running more than 240 work processes. This will be corrected with a kernel patch. Details should be in the message you filed.

Best Regards,

Tim

Answers (0)