
Unable to start SAP using SUN Cluster 3.2

Former Member

Hello everybody,

We are trying to start SAP ECC 6.0 on Sun Cluster 3.2. When we start the system manually on the physical node it works fine, but when we enable the cluster resources and start the system through the cluster, it comes up for some time and then crashes. /var/adm/messages shows that the cluster tries to start the Central Instance, gives up after some time, and stops everything.

Here is the /var/adm/messages log:

Oct 26 09:56:27 arles Cluster.RGM.global.rgmd: [ID 529407 daemon.notice] resource group prjci-rg state on node arles change to RG_PENDING_ONLINE

Oct 26 09:56:27 arles Cluster.RGM.global.rgmd: [ID 784560 daemon.notice] resource prjci-pv status on node arles change to R_FM_UNKNOWN

Oct 26 09:56:27 arles Cluster.RGM.global.rgmd: [ID 922363 daemon.notice] resource prjci-pv status msg on node arles change to <Starting>

Oct 26 09:56:27 arles Cluster.RGM.global.rgmd: [ID 784560 daemon.notice] resource prjci status on node arles change to R_FM_UNKNOWN

Oct 26 09:56:27 arles Cluster.RGM.global.rgmd: [ID 922363 daemon.notice] resource prjci status msg on node arles change to <Starting>

Oct 26 09:56:27 arles Cluster.RGM.global.rgmd: [ID 224900 daemon.notice] launching method <hafoip_prenet_start> for resource <prjci-pv>, resource group <prjci-rg>, node <arles>, timeout <300> seconds

Oct 26 09:56:27 arles Cluster.RGM.global.rgmd: [ID 224900 daemon.notice] launching method <hafoip_prenet_start> for resource <prjci>, resource group <prjci-rg>, node <arles>, timeout <300> seconds

Oct 26 09:56:27 arles Cluster.RGM.global.rgmd: [ID 515159 daemon.notice] method <hafoip_prenet_start> completed successfully for resource <prjci-pv>, resource group <prjci-rg>, node <arles>, time used: 0% of timeout <300 seconds>

Oct 26 09:56:27 arles Cluster.RGM.global.rgmd: [ID 515159 daemon.notice] method <hafoip_prenet_start> completed successfully for resource <prjci>, resource group <prjci-rg>, node <arles>, time used: 0% of timeout <300 seconds>

Oct 26 09:56:27 arles Cluster.RGM.global.rgmd: [ID 784560 daemon.notice] resource prjci-hastp-rs status on node arles change to R_FM_UNKNOWN

Oct 26 09:56:27 arles Cluster.RGM.global.rgmd: [ID 922363 daemon.notice] resource prjci-hastp-rs status msg on node arles change to <Starting>

Oct 26 09:56:27 arles Cluster.RGM.global.rgmd: [ID 224900 daemon.notice] launching method <hastorageplus_prenet_start> for resource <prjci-hastp-rs>, resource group <prjci-rg>, node <arles>, timeout <1800> seconds

Oct 26 09:56:27 arles Cluster.RGM.global.rgmd: [ID 515159 daemon.notice] method <hastorageplus_prenet_start> completed successfully for resource <prjci-hastp-rs>, resource group <prjci-rg>, node <arles>, time used: 0% of timeout <1800 seconds>

Oct 26 09:56:27 arles Cluster.RGM.global.rgmd: [ID 443746 daemon.notice] resource prjci-hastp-rs state on node arles change to R_ONLINE_UNMON

Oct 26 09:56:27 arles Cluster.RGM.global.rgmd: [ID 784560 daemon.notice] resource prjci-hastp-rs status on node arles change to R_FM_ONLINE

Oct 26 09:56:27 arles Cluster.RGM.global.rgmd: [ID 922363 daemon.notice] resource prjci-hastp-rs status msg on node arles change to <>

Oct 26 09:56:27 arles Cluster.RGM.global.rgmd: [ID 443746 daemon.notice] resource prjci-pv state on node arles change to R_STARTING

Oct 26 09:56:27 arles Cluster.RGM.global.rgmd: [ID 443746 daemon.notice] resource prjci state on node arles change to R_STARTING

Oct 26 09:56:27 arles Cluster.RGM.global.rgmd: [ID 224900 daemon.notice] launching method <hafoip_start> for resource <prjci-pv>, resource group <prjci-rg>, node <arles>, timeout <500> seconds

Oct 26 09:56:27 arles Cluster.RGM.global.rgmd: [ID 224900 daemon.notice] launching method <hastorageplus_monitor_start> for resource <prjci-hastp-rs>, resource group <prjci-rg>, node <arles>, timeout <90> seconds

Oct 26 09:56:27 arles Cluster.RGM.global.rgmd: [ID 224900 daemon.notice] launching method <hafoip_start> for resource <prjci>, resource group <prjci-rg>, node <arles>, timeout <500> seconds

Oct 26 09:56:28 arles Cluster.RGM.global.rgmd: [ID 784560 daemon.notice] resource prjci status on node arles change to R_FM_ONLINE

Oct 26 09:56:28 arles Cluster.RGM.global.rgmd: [ID 922363 daemon.notice] resource prjci status msg on node arles change to <LogicalHostname online.>

Oct 26 09:56:28 arles Cluster.RGM.global.rgmd: [ID 515159 daemon.notice] method <hafoip_start> completed successfully for resource <prjci>, resource group <prjci-rg>, node <arles>, time used: 0% of timeout <500 seconds>

Oct 26 09:56:28 arles Cluster.RGM.global.rgmd: [ID 443746 daemon.notice] resource prjci state on node arles change to R_ONLINE_UNMON

Oct 26 09:56:28 arles Cluster.RGM.global.rgmd: [ID 224900 daemon.notice] launching method <hafoip_monitor_start> for resource <prjci>, resource group <prjci-rg>, node <arles>, timeout <300> seconds

Oct 26 09:56:28 arles Cluster.RGM.global.rgmd: [ID 784560 daemon.notice] resource prjci-pv status on node arles change to R_FM_ONLINE

Oct 26 09:56:28 arles Cluster.RGM.global.rgmd: [ID 922363 daemon.notice] resource prjci-pv status msg on node arles change to <LogicalHostname online.>

Oct 26 09:56:28 arles Cluster.RGM.global.rgmd: [ID 515159 daemon.notice] method <hafoip_start> completed successfully for resource <prjci-pv>, resource group <prjci-rg>, node <arles>, time used: 0% of timeout <500 seconds>

Oct 26 09:56:28 arles Cluster.RGM.global.rgmd: [ID 443746 daemon.notice] resource prjci-pv state on node arles change to R_ONLINE_UNMON

Oct 26 09:56:28 arles Cluster.RGM.global.rgmd: [ID 443746 daemon.notice] resource prjci-rs state on node arles change to R_STARTING

Oct 26 09:56:28 arles Cluster.RGM.global.rgmd: [ID 224900 daemon.notice] launching method <hafoip_monitor_start> for resource <prjci-pv>, resource group <prjci-rg>, node <arles>, timeout <300> seconds

Oct 26 09:56:28 arles Cluster.RGM.global.rgmd: [ID 784560 daemon.notice] resource prjci-rs status on node arles change to R_FM_UNKNOWN

Oct 26 09:56:28 arles Cluster.RGM.global.rgmd: [ID 922363 daemon.notice] resource prjci-rs status msg on node arles change to <Starting>

Oct 26 09:56:28 arles Cluster.RGM.global.rgmd: [ID 224900 daemon.notice] launching method <sap_ci_svc_start> for resource <prjci-rs>, resource group <prjci-rg>, node <arles>, timeout <600> seconds

Oct 26 09:56:28 arles Cluster.RGM.global.rgmd: [ID 515159 daemon.notice] method <hafoip_monitor_start> completed successfully for resource <prjci>, resource group <prjci-rg>, node <arles>, time used: 0% of timeout <300 seconds>

Oct 26 09:56:28 arles Cluster.RGM.global.rgmd: [ID 443746 daemon.notice] resource prjci state on node arles change to R_ONLINE

Oct 26 09:56:28 arles Cluster.RGM.global.rgmd: [ID 515159 daemon.notice] method <hastorageplus_monitor_start> completed successfully for resource <prjci-hastp-rs>, resource group <prjci-rg>, node <arles>, time used: 0% of timeout <90 seconds>

Oct 26 09:56:28 arles Cluster.RGM.global.rgmd: [ID 443746 daemon.notice] resource prjci-hastp-rs state on node arles change to R_ONLINE

Oct 26 09:56:28 arles Cluster.RGM.global.rgmd: [ID 515159 daemon.notice] method <hafoip_monitor_start> completed successfully for resource <prjci-pv>, resource group <prjci-rg>, node <arles>, time used: 0% of timeout <300 seconds>

Oct 26 09:56:28 arles Cluster.RGM.global.rgmd: [ID 443746 daemon.notice] resource prjci-pv state on node arles change to R_ONLINE

Oct 26 09:56:28 arles SC[,SUNW.sap_ci_v2,prjci-rg,prjci-rs,sap_ci_svc_start]: [ID 891424 daemon.notice] Starting SAP Central Instance with command /opt/SUNWscsap/sap_ci/bin/sap_ci_startR3 -R prjci-rs -T SUNW.sap_ci_v2 -G prjci-rg.

Oct 26 09:56:28 arles SC[,SUNW.sap_ci_v2,prjci-rg,prjci-rs,sap_ci_svc_start]: [ID 183934 daemon.notice] Waiting for message server to come up.

Oct 26 09:56:28 arles Cluster.RGM.global.rgmd: [ID 784560 daemon.notice] resource prjci-rs status on node arles change to R_FM_ONLINE

Oct 26 09:56:28 arles Cluster.RGM.global.rgmd: [ID 922363 daemon.notice] resource prjci-rs status msg on node arles change to <Database is up.>

Oct 26 09:56:29 arles SC[,SUNW.sap_ci_v2,prjci-rg,prjci-rs,sap_ci_startR3]: [ID 970364 daemon.error] Command /usr/sap/PRJ/SYS/exe/run/cleanipc 00 remove' returned with non-zero exit status 255, HA-SAP will continue to start SAP.

Oct 26 09:56:31 arles SC[,SUNW.sap_ci_v2,prjci-rg,prjci-rs,sap_ci_svc_start]: [ID 183934 daemon.notice] Waiting for message server to come up.

Oct 26 09:56:40 arles last message repeated 3 times

Oct 26 09:56:43 arles SC[,SUNW.sap_ci_v2,prjci-rg,prjci-rs,sap_ci_svc_start]: [ID 183934 daemon.notice] Waiting for SAP Central Instance main dispatcher to come up.

Oct 26 09:56:47 arles SC[,SUNW.sap_ci_v2,prjci-rg,prjci-rs,sap_ci_svc_start]: [ID 183934 daemon.notice] Waiting for SAP Central Instance to come up.

Oct 26 09:58:47 arles last message repeated 24 times

Oct 26 09:58:53 arles SC[,SUNW.sap_ci_v2,prjci-rg,prjci-rs,sap_ci_svc_start]: [ID 183934 daemon.notice] Waiting for SAP Central Instance to come up.

Oct 26 10:05:25 arles last message repeated 78 times

Oct 26 10:05:30 arles SC[,SUNW.sap_ci_v2,prjci-rg,prjci-rs,sap_ci_svc_start]: [ID 183934 daemon.notice] Waiting for SAP Central Instance to come up.

Oct 26 10:06:25 arles last message repeated 11 times

Oct 26 10:06:29 arles Cluster.RGM.global.rgmd: [ID 764140 daemon.error] Method <sap_ci_svc_start> on resource <prjci-rs>, resource group <prjci-rg>, node <arles>: Timeout.

Oct 26 10:06:29 arles Cluster.RGM.global.rgmd: [ID 443746 daemon.error] resource prjci-rs state on node arles change to R_START_FAILED

Oct 26 10:06:29 arles Cluster.RGM.global.rgmd: [ID 529407 daemon.error] resource group prjci-rg state on node arles change to RG_PENDING_OFF_START_FAILED

Oct 26 10:06:29 arles Cluster.RGM.global.rgmd: [ID 784560 daemon.notice] resource prjci-rs status on node arles change to R_FM_FAULTED

Oct 26 10:06:29 arles Cluster.RGM.global.rgmd: [ID 443746 daemon.notice] resource prjci-rs state on node arles change to R_STOPPING
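For reference, the log above shows <sap_ci_svc_start> running with a 600-second timeout and failing with "Timeout". If the instance is simply slow to come up, the timeout can be inspected and raised with the Sun Cluster 3.2 CLI. This is only a sketch using the resource and group names from the log; verify them against your own configuration before running anything:

```shell
# Show the current properties of the SAP CI resource (Start_timeout among them)
clresource show -v prjci-rs

# Raise the start timeout, e.g. from 600 to 1200 seconds
clresource set -p Start_timeout=1200 prjci-rs

# Bring the resource group online again and watch its state
clresourcegroup online prjci-rg
clresourcegroup status prjci-rg
```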

Accepted Solutions (0)

Answers (4)


Former Member

Here they are:

**********************************************************************************************************************************************************

dev_disp:

Tue Oct 26 15:18:31 2010

DpModState: change server state from STARTING to ACTIVE

Tue Oct 26 15:28:33 2010

DpSigInt: caught signal 2

DpHalt: shutdown server >prjcis_PRJ_00 < (normal)

DpModState: change server state from ACTIVE to SHUTDOWN

Stop work processes

Tue Oct 26 15:28:35 2010

Stop gateway

Stop icman

Terminate gui connections

wait for end of work processes

wait for end of gateway

waiting for termination of gateway ...

Tue Oct 26 15:28:36 2010

wait for end of icman

waiting for termination of icman ...

Tue Oct 26 15:28:37 2010

waiting for termination of icman ...

Tue Oct 26 15:28:38 2010

waiting for termination of icman ...

Tue Oct 26 15:28:39 2010

waiting for termination of icman ...

Tue Oct 26 15:28:40 2010

waiting for termination of icman ...

Tue Oct 26 15:28:41 2010

waiting for termination of icman ...

Tue Oct 26 15:28:42 2010

waiting for termination of icman ...

Tue Oct 26 15:28:44 2010

DpStartStopMsg: send stop message (myname is >prjcis_PRJ_00 <)

DpStartStopMsg: stop msg sent

Tue Oct 26 15:28:45 2010

DpHalt: sync with message server o.k.

detach from message server

***LOG Q0M=> DpMsDetach, ms_detach () [dpxxdisp.c 12829]

MBUF state OFF

MBUF component DOWN

cleanup EM

cleanup event management

cleanup shared memory/semaphores

removing request queue

***LOG Q05=> DpHalt, DPStop ( 19927) [dpxxdisp.c 11329]

*** shutdown completed - server stopped ***

**********************************************************************************************************************************************************

dev_w0:

Tue Oct 26 15:18:50 2010

A **GENER Trace switched off ***

M

M Tue Oct 26 15:18:51 2010

M SosICreateNewAnchorArray: sos_search_anchor_semantics = 1

M

M Tue Oct 26 15:23:50 2010

M SecAudit(RsauShmInit): WP attached to existing shared memory.

M SecAudit(RsauShmInit): addr of SCSA........... = 0xffffffff78800000

M SecAudit(RsauShmInit): addr of RSAUSHM........ = 0xffffffff78800768

M SecAudit(RsauShmInit): addr of RSAUSLOTINFO... = 0xffffffff788007a0

M SecAudit(RsauShmInit): addr of RSAUSLOTS...... = 0xffffffff788007ac

M SecAudit(check_daily_file): audit file opened /usr/sap/PRJ/DVEBMGS00/log/audit_20101026

M

M Tue Oct 26 15:28:33 2010

M in_ThErrHandle: 1

M ThIErrHandle: new stat of W0 is WP_SHUTDOWN

M ThIErrHandle: I'm during shutdown

M PfStatDisconnect: disconnect statistics

M Entering ThSetStatError

M ThCallDbBreak: use db_sqlbreak

B db_sqlbreak() = 1

M ThIErrHandle: don't try rollback again

M ThShutDownServer: shutdown server

M ThExecShutDown: perform exclusive shutdown actions

M ThCheckComOrRb (event=1, full_commit=1)

M ThCallHooks: call hook >ASTAT-collect commit handling< for event BEFORE_COMMIT

M ThCallHooks: call hook >rsts_before_commit< for event BEFORE_COMMIT

M SosCheckAbapEnv: invalid tid/mode T-1/M255

M ThCheckComOrRb (event=3, full_commit=1)

M ThCallHooks: call hook >ThVBICmRbHook< for event AFTER_COMMIT

M ThVBICmRbHook: called for commit

M ThCallHooks: call hook >dyKeyTableRest< for event AFTER_COMMIT

M ThCallHooks: call hook >ThNoClearPrevErr< for event AFTER_COMMIT

M ThNoClearPrevErr: clear prev no err

M ThCallHooks: call hook >rsts_after_commit< for event AFTER_COMMIT

M SosCheckAbapEnv: invalid tid/mode T-1/M255

M ThCallHooks: call hook >SpoolHandleHook< for event AFTER_COMMIT

M SosCheckAbapEnv: invalid tid/mode T-1/M255

M ThUsrDelEntry (*, *, prjcis_PRJ_00 ) o.k.

M ThICommit3: full commit, set time, keep resources, redispatch

M ThICommit3: commit and keep resources

M ThCheckComOrRb (event=1, full_commit=0)

M ThCallHooks: call hook >ASTAT-collect commit handling< for event BEFORE_COMMIT

M ThCallHooks: call hook >rsts_before_commit< for event BEFORE_COMMIT

M SosCheckAbapEnv: invalid tid/mode T-1/M255

M ThCheckComOrRb (event=3, full_commit=0)

M ThCallHooks: call hook >ThVBICmRbHook< for event AFTER_COMMIT

M ThVBICmRbHook: called for commit

M ThCallHooks: call hook >dyKeyTableRest< for event AFTER_COMMIT

M ThCallHooks: call hook >ThNoClearPrevErr< for event AFTER_COMMIT

M ThNoClearPrevErr: clear prev no err

M ThCallHooks: call hook >rsts_after_commit< for event AFTER_COMMIT

M SosCheckAbapEnv: invalid tid/mode T-1/M255

M ThCallHooks: call hook >SpoolHandleHook< for event AFTER_COMMIT

M SosCheckAbapEnv: invalid tid/mode T-1/M255

M ThAlarm: set alarm to 600 sec

M ThICommit3 o.k.

M ThExecShutDown: ThUsrDelEntry o.k.

M ThExecShutDown: called rsau_log_system_stop

M PfStatIndInit: Initializing Index-Record

M PfWriteIntoFile: copied shared buf (21451 bytes) to local buf

M PfWriteIntoFile: write 21451 bytes into stat-file

M PfWriteIntoFile: writing Index

M min. start :1288095510/875548

M min. end :1288095510/978932

M max. end :1288096070/592442

M wpid :20

M tasktypes :51452

M max. resp :20297445

M max. cpu :5170000

M max.db

M no. of recs :99

M no. of bytes :21451

M PfFileIndUpd: hyperindex needs to be resorted

M PfWriteIntoFile: updated fileindex for file stat with endtime 1288096070 (26:10:2010 12:27:50)

M PfWriteIntoFile: updated fileindex

M PfStatIndInit: Initializing Index-Record

M PfWriteIntoFile: wrote buffer to file

M ThIErrHandle: do not call ThrCoreInfo (no_core_info=0, in_dynp_env=0)

M Entering ThReadDetachMode

M call ThrShutDown (1)...

B Disconnecting from ALL connections:

B Wp Hdl ConName ConId ConState TX PRM RCT TIM MAX OPT Date Time DBHost

B 000 000 R/3 000000000 INACTIVE NO YES NO 000 255 255 20101026 151823 arles

C Disconnecting from connection 0 ...

C Closing user session (con_hdl=0,svchp=0x106d50df8,usrhp=0x106d40f48)

C Detaching from DB Server (con_hdl=0,svchp=0x106d50df8,srvhp=0x106d53dc8)

C Now I'm disconnected from ORACLE

B Disconnected from connection 0

B statistics db_con_commit (com_total=41, com_tx=6)

B statistics db_con_rollback (roll_total=0, roll_tx=0)

M ***LOG Q02=> wp_halt, WPStop (Workproc 0 19947) [dpuxtool.c 269]

Former Member

Looks like an issue with your message server. Can you please paste the dev_ms log file?

Former Member

torcy:prjadm 3% more dev_ms

---------------------------------------------------

trc file: "dev_ms", trc level: 1, release: "701"

---------------------------------------------------


[Thr 1] Wed Oct 27 07:43:46 2010

[Thr 1] MsSSetTrcLog: trc logging active, max size = 20971520 bytes

systemid 370 (Solaris on SPARCV9 CPU)

relno 7010

patchlevel 0

patchno 35

intno 20020600

make: multithreaded, Unicode, 64 bit, optimized

pid 22556

[Thr 1] ***LOG Q01=> MsSInit, MSStart (Msg Server 1 22556) [msxxserv_mt. 1853]

[Thr 1] SigISetDefaultAction : default handling for signal 18

[Thr 1] load acl file = /usr/sap/PRJ/SYS/global/ms_acl_info

[Thr 1] MsGetOwnIpAddr: my host addresses are :

[Thr 1] 1 : [10.113.31.16] torcy (HOSTNAME)

[Thr 1] 2 : [127.0.0.1] localhost (LOCALHOST)

[Thr 1] 3 : [10.113.31.17] torcy-bge0-pub (NILIST)

[Thr 1] 4 : [10.113.31.18] torcy-bge1-pub (NILIST)

[Thr 1] 5 : [10.113.31.20] prjdbs (NILIST)

[Thr 1] 6 : [10.113.31.19] prjcis (NILIST)

[Thr 1] 7 : [10.113.32.16] torcy-pv (NILIST)

[Thr 1] 8 : [10.113.32.17] torcy-nxge0-priv (NILIST)

[Thr 1] 9 : [10.113.32.18] torcy-nxge1-priv (NILIST)

[Thr 1] 10 : [10.113.32.20] prjdbs-pv (NILIST)

[Thr 1] 11 : [10.113.32.19] prjcis-pv (NILIST)

[Thr 1] 12 : [172.16.0.130] clusternode2-priv-physical1 (NILIST)

[Thr 1] 13 : [172.16.1.2] clusternode2-priv-physical2 (NILIST)

[Thr 1] 14 : [172.16.4.2] clusternode2-priv (NILIST)

[Thr 1] 15 : [10.0.0.3] 10.0.0.3 (NILIST)

[Thr 1] MsHttpInit: full qualified hostname = torcy.jubailrefining.com

[Thr 1] HTTP logging is switch off

[Thr 1] set HTTP state to LISTEN

[Thr 1] MsHttpOwnDomain: own domain[1] = jubailrefining.com

[Thr 1] ms/icf_info_server : deleted

[Thr 1] *** I listen to port sapmsPRJ (3600) ***

[Thr 1] *** I listen to internal port 3900 (3900) ***

[Thr 1] *** HTTP port 8100 state LISTEN ***

[Thr 1] CUSTOMER KEY: >J1220833714<

[Thr 1] build version=701.2009.03.11

[Thr 1] Wed Oct 27 07:54:01 2010

[Thr 1] MsSExit: received SIGINT (2)

[Thr 1] ***LOG Q02=> MsSHalt, MSStop (Msg Server 22556) [msxxserv_mt. 6030]

markus_doehr2
Active Contributor

> DpModState: change server state from ACTIVE to SHUTDOWN

> Stop work processes

The instance is stopping, not crashing; someone or something initiated a shutdown.

Markus

Former Member

Hi Markus,

I think that means the cluster issued the shutdown. But the question here is why the cluster would do so.

When we check /var/adm/messages we see that it starts the message server and the dispatcher; after that it issues the message "Waiting for SAP Central Instance to come up". After about 10 minutes the system goes down, and we see in dev_ms that signal 2 was issued.

Do you have any idea about this?

Thank you.

markus_doehr2
Active Contributor

> When we check /var/adm/messages we see that it starts the message server and the dispatcher; after that it issues the message "Waiting for SAP Central Instance to come up". After about 10 minutes the system goes down, and we see in dev_ms that signal 2 was issued.

>

> Do you have any idea about this?

It seems that the cluster does not properly detect that the system has started.

It's very difficult to give you a hint here without checking a huge number of things. I suggest you open an OSS call with the SAP competence center (BC-OP-SUN) and let them have a look at the system directly. Trying to diagnose this by posting dozens of logs and configurations here is very tedious.

Markus
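One quick check along those lines: the HA-SAP agent is waiting for the message server, so it is worth verifying from the node that the message server actually answers on its port (sapmsPRJ/3600 per the dev_ms trace above). A sketch, assuming the standard SAP kernel tool lgtst is present in your kernel directory and that prjcis is the logical hostname the instance uses:

```shell
# As the SAP administration user
su - prjadm

# Is anything listening on the message server port?
netstat -an | grep 3600

# Query the message server directly (lgtst ships with the SAP kernel)
lgtst -H prjcis -S sapmsPRJ
```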

Former Member

Is it possible to disable the cluster monitoring and bring SAP up outside of your HA setup? Let's see the results; this way we rule out an HA configuration issue.
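To try that, one approach (a sketch using the resource names from the logs above; adapt to your setup) is to disable only the SAP resource while leaving the logical host and storage resources online, then start SAP by hand:

```shell
# Take the SAP CI resource out of cluster control
clresource disable prjci-rs

# The logical host (prjci) and storage (prjci-hastp-rs) resources stay
# online, so SAP can be started manually as the SAP administration user
su - prjadm -c startsap

# When done testing, hand control back to the cluster
clresource enable prjci-rs
```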

markus_doehr2
Active Contributor

> Is it possible to disable the cluster monitoring and bring SAP up outside of your HA setup? Let's see the results; this way we rule out an HA configuration issue.

Basically that's what he did; he wrote this in the second sentence of his initial post.

Markus

Former Member

Hi Markus, hm, sorry, I missed that.

Former Member

Hi everybody,

One question comes to mind: we are trying this for testing now. Do we require a Sun Cluster license for it to work or not?

Former Member

Who configured the Sun cluster for your systems? When you ordered the Sun hardware, I think the cluster license should have been part of that, but I suggest you check with your supplier and your Sun administrator.

Former Member

I ran it as <SID>adm as well and got the same result. The issue starts when the cluster resource is waiting for the Central Instance to come up; we were able to log in to the SAP system, but after some time it crashed.

******

arles:prjadm 318% cleanipc 00 remove

Show/Cleanup SAP-IPC-Objects V2.3, 94/01/20

===========================================

Running SAP-Systems (Nr)...:

----------------------------------------

Clear IPC-Objects of Sap-System 0

----------------------------------------


Number of IPC-Objects...........: 0

Number of removed IPC-Objects...: 0

Summary of all Shared Memory....: 8848.7 MB (may be incomplete when not in superuser mode)

Number of SAP_ES files found:.............: 0

Number of SAP_ES files removed:...........: 0

Former Member

That means your issue is with your CI, as it's crashing after some time.

Can you please paste the logs from your CI server's work directory:

dev_w0

dev_disp

Former Member

I checked the command and here is the output. Please note that I ran this as root.

[root@arles /]$ /usr/sap/PRJ/SYS/exe/run/cleanipc 00

Show/Cleanup SAP-IPC-Objects V2.3, 94/01/20

===========================================

----------------------------------------

To inhibit destruction of valid SAP-IPC-Objects

during shutdown, this program requires now

special options for removing IPC-Objects.

----------------------------------------

Running SAP-Systems (Nr)...:

----------------------------------------

Show IPC-Objects of Sap-System 0

----------------------------------------


Number of IPC-Objects...........: 0

Summary of all Shared Memory....: 8848.7 MB (may be incomplete when not in superuser mode)

Number of SAP_ES files found:.............: 0

Number of SAP_ES files removed:...........: 0

Former Member

You should not execute cleanipc as root; you must use <SID>adm.
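For example (a sketch using the kernel path and instance number from the earlier log; the SAP-IPC objects belong to the <SID>adm user, which is why root sees nothing to remove):

```shell
# Switch to the SAP administration user first
su - prjadm

# Then clean the IPC objects of instance number 00
/usr/sap/PRJ/SYS/exe/run/cleanipc 00 remove
```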

Former Member

Please investigate why the command below returned a non-zero exit status.

>Command /usr/sap/PRJ/SYS/exe/run/cleanipc 00 remove' returned with non-zero exit status