on 10-08-2010 11:29 AM
Hi Experts,
I have to do an SAP installation on AIX 6.1 with an Oracle database in an HACMP cluster environment. Following the installation guide, I prepared the file systems as shown below:
Node-A
df -g
Filesystem GB blocks Free %Used Iused %Iused Mounted on
/dev/hd4 0.50 0.30 41% 13410 16% /
/dev/hd2 2.06 0.02 99% 47992 84% /usr
/dev/hd9var 1.00 0.51 50% 9226 8% /var
/dev/hd3 10.00 9.67 4% 1608 1% /tmp
/dev/hd1 0.06 0.06 1% 5 1% /home
/dev/hd11admin 0.12 0.12 1% 5 1% /admin
/proc - - - - - /proc
/dev/hd10opt 0.38 0.16 59% 10198 22% /opt
/dev/livedump 0.25 0.25 1% 10 1% /var/adm/ras/livedump
/dev/export_sapmnt 10.00 10.00 1% 4 1% /export/sapmnt/PRD
/dev/export_trans 20.00 20.00 1% 4 1% /export/usr/sap/trans
/dev/instcdlv 40.00 24.33 40% 6437 1% /instcd
/dev/oraclelv 2.00 2.00 1% 19 1% /oracle
/dev/oraPRD_lv 2.00 2.00 1% 15 1% /oracle/PRD
/dev/oracle_exe 8.00 8.00 1% 4 1% /oracle/PRD/102_64
/dev/MirrlogA 2.00 2.00 1% 4 1% /oracle/PRD/mirrlogA
/dev/MirrlogB 2.00 2.00 1% 4 1% /oracle/PRD/mirrlogB
/dev/oraarch 45.00 44.99 1% 4 1% /oracle/PRD/oraarch
/dev/OriglogA 2.00 2.00 1% 4 1% /oracle/PRD/origlogA
/dev/OriglogB 2.00 2.00 1% 4 1% /oracle/PRD/origlogB
/dev/sapdata1 175.00 174.97 1% 4 1% /oracle/PRD/sapdata1
/dev/sapdata2 175.00 174.97 1% 4 1% /oracle/PRD/sapdata2
/dev/sapdata3 175.00 174.97 1% 4 1% /oracle/PRD/sapdata3
/dev/sapdata4 175.00 174.97 1% 4 1% /oracle/PRD/sapdata4
/dev/sapreorg 10.00 10.00 1% 4 1% /oracle/PRD/sapreorg
/dev/oracle_client 2.00 2.00 1% 4 1% /oracle/client
/dev/oracle_stagelv 8.00 8.00 1% 4 1% /oracle/stage/102_64
/dev/sapbkuplv 100.00 99.98 1% 4 1% /sapbackup
/dev/usrsapprdlv 10.00 10.00 1% 5 1% /usr/sap/PRD
/dev/DVEMlv 8.00 8.00 1% 4 1% /usr/sap/PRD/DVEBMGS01
DASHPROD_SVC:/export/sapmnt/PRD 10.00 10.00 1% 4 1% /sapmnt/PRD
DASHPROD_SVC:/export/usr/sap/trans 20.00 20.00 1% 4 1% /usr/sap/trans
Node-B
df -g
Filesystem GB blocks Free %Used Iused %Iused Mounted on
/dev/hd4 0.50 0.31 39% 13363 16% /
/dev/hd2 2.06 0.02 99% 47786 84% /usr
/dev/hd9var 1.00 0.55 46% 9171 7% /var
/dev/hd3 1.00 0.85 16% 871 1% /tmp
/dev/hd1 0.06 0.06 1% 5 1% /home
/dev/hd11admin 0.12 0.12 1% 5 1% /admin
/proc - - - - - /proc
/dev/hd10opt 0.38 0.16 59% 10198 22% /opt
/dev/livedump 0.25 0.25 1% 6 1% /var/adm/ras/livedump
DASHPROD_SVC:/export/sapmnt/PRD 10.00 10.00 1% 4 1% /sapmnt/PRD
DASHPROD_SVC:/export/usr/sap/trans 20.00 20.00 1% 4 1% /usr/sap/trans
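As a quick cross-check of a layout like the one above, the sketch below verifies that each required path is a separate mount point rather than a plain directory sitting inside the parent file system. The path list is illustrative only; substitute your own SID and mount points:

```shell
# Report whether each given path is an actual mount point.
# df -P prints the mount point a path resolves to in the last column of
# its second output line; if that differs from the path itself, the path
# is just a directory inside another file system (or does not exist yet).
check_mounts() {
    for fs in "$@"; do
        mp=$(df -P "$fs" 2>/dev/null | awk 'NR==2 {print $NF}')
        if [ "$mp" = "$fs" ]; then
            echo "OK      $fs"
        else
            echo "MISSING $fs"
        fi
    done
}

# Example call: "/" is always a mount point; the SAP paths are placeholders.
check_mounts / /oracle/PRD/sapdata1 /usr/sap/PRD /sapmnt/PRD
```

Running this before sapinst catches the common mistake of creating the directories but forgetting to mount the logical volumes on them.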
I want to confirm whether the prepared file systems are correct, or whether I am missing anything.
I am also confused about the installation of the DI on NodeB. Do I have to install the DI directly on NodeB or on the virtual host? If we install directly on NodeB, where will it be installed, since we don't have any file systems on NodeB apart from /sapmnt/PRD and /usr/sap/trans?
Please advise with your expert suggestions, as this is the first time I am doing such a complex installation.
Regards,
Ramesh.
Hi,
The ASCS and the database should be under HA. Looking at your listing, you have not created any file system for the ASCS instance, even though the ASCS is part of the HA setup.
For HA, the ASCS and DB sit on movable file systems: if something happens on node1, the cluster mounts the ASCS instance and DB file systems on node2, and your DI then takes over the services of your CI instance. You can install the DI directly on node2.
Also, you have not created a file system for the DI on node2.
Thanks
Sunny
Edited by: Sunny Pahuja on Oct 8, 2010 1:10 PM
Hi,
Thanks for the reply.
You mean I have to create /usr/sap/PRD/ASCS00 for the ASCS instance? OK, I will do the same.
Can you please elaborate on which file systems I have to create for the DI, as my DI will have instance number 02?
Is the rest of the setup OK?
@Sunil,
Yes, I did the same: my virtual hostname is DASHPROD_SVC, and I exported /sapmnt/PRD and /usr/sap/trans from it. You can see the result in the last two lines of the file system listing.
Please suggest if I have to do anything beyond the above. My ASCS instance number will be 00, the CI instance number 01, and the DI instance number 02.
Thanks and Regards,
Ramesh.
Hi,
> You mean I have to create /usr/sap/PRD/ASCS00 for the ASCS instance? OK, I will do the same.
>
Yes.
> Can you please elaborate on which file systems I have to create for the DI, as my DI will have instance number 02?
>
/usr/sap/PRD/D02
/usr/sap/PRD
Second, as Sunil said, you need to decide whether you want an active-active cluster or an active-passive cluster.
If you keep the ASCS and the DB instance on the same node, it is active-passive. If the ASCS and DB run on different nodes, it is active-active, as in Sunil's case. But in my opinion, active-active does not bring much advantage, and you will face issues synchronizing the startup of the SAP system with the start of the cluster, because you need to synchronize different steps on different nodes. I suggest you go for active-passive and install the ASCS and DB on the same node.
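In an active-passive layout, everything that must move together on failover (the service IP, the shared volume groups holding the ASCS and Oracle file systems, and the start/stop scripts) goes into a single cluster resource group. As a sketch only: `clmgr` is the command-line interface of newer PowerHA releases (older HACMP levels use the smitty panels instead), and all names here (`rg_PRD`, `vg_sapdb`, `vg_sapascs`, `app_PRD`) are placeholders, not values from this thread:

```shell
# Illustrative PowerHA/HACMP resource-group definition (placeholder names).
# STARTUP=OHN   -> online on home node
# FALLOVER=FNPN -> fall over to next priority node
# FALLBACK=NFB  -> never fall back automatically
clmgr add resource_group rg_PRD \
    NODES="nodeA nodeB" \
    STARTUP=OHN \
    FALLOVER=FNPN \
    FALLBACK=NFB \
    SERVICE_LABEL="DASHPROD_SVC" \
    VOLUME_GROUP="vg_sapdb vg_sapascs" \
    APPLICATIONS="app_PRD"

# Synchronize the cluster definition and check the resource group state
clmgr sync cluster
clRGinfo rg_PRD
```

The point of the single group is that the DB and ASCS can never end up online on different nodes, which is exactly the active-passive behaviour recommended above.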
Thanks
Sunny
Hi All,
Now I am a bit confused about mounting /usr/sap/PRD and /usr/sap/PRD/DVEBMGS01: should I mount them locally or as shared cluster resources?
The rest of the file systems I mounted as below.
Resources that need to move from NodeA to NodeB:
/dev/export_sapmnt 10.00 10.00 1% 4 1% /export/sapmnt/PRD
/dev/export_trans 20.00 20.00 1% 4 1% /export/usr/sap/trans
/dev/instcdlv 40.00 24.33 40% 6437 1% /instcd
/dev/oraclelv 2.00 2.00 1% 19 1% /oracle
/dev/oraPRD_lv 2.00 2.00 1% 15 1% /oracle/PRD
/dev/oracle_exe 8.00 8.00 1% 4 1% /oracle/PRD/102_64
/dev/MirrlogA 2.00 2.00 1% 4 1% /oracle/PRD/mirrlogA
/dev/MirrlogB 2.00 2.00 1% 4 1% /oracle/PRD/mirrlogB
/dev/oraarch 45.00 44.99 1% 4 1% /oracle/PRD/oraarch
/dev/OriglogA 2.00 2.00 1% 4 1% /oracle/PRD/origlogA
/dev/OriglogB 2.00 2.00 1% 4 1% /oracle/PRD/origlogB
/dev/sapdata1 175.00 174.97 1% 4 1% /oracle/PRD/sapdata1
/dev/sapdata2 175.00 174.97 1% 4 1% /oracle/PRD/sapdata2
/dev/sapdata3 175.00 174.97 1% 4 1% /oracle/PRD/sapdata3
/dev/sapdata4 175.00 174.97 1% 4 1% /oracle/PRD/sapdata4
/dev/sapreorg 10.00 10.00 1% 4 1% /oracle/PRD/sapreorg
/dev/oracle_client 2.00 2.00 1% 4 1% /oracle/client
/dev/oracle_stagelv 8.00 8.00 1% 4 1% /oracle/stage/102_64
/dev/sapbkuplv 100.00 99.98 1% 4 1% /sapbackup
/dev/usrsapprdlv 10.00 10.00 1% 5 1% /usr/sap/SCS02
/dev/usrsapprdlv 10.00 10.00 1% 5 1% /usr/sap/PRD/ASCS00
DASHPROD_SVC:/export/sapmnt/PRD 10.00 10.00 1% 4 1% /sapmnt/PRD
DASHPROD_SVC:/export/usr/sap/trans 20.00 20.00 1% 4 1% /usr/sap/trans
Locally mounted file systems on NodeB for the DI:
/usr/sap/PRD 8GB
/usr/sap/PRD/D02 8GB
Please suggest where to mount /usr/sap/PRD and /usr/sap/PRD/DVEBMGS01 on NodeA: locally or shared? If I make them shared, they will conflict with /usr/sap/PRD on NodeB.
Please advise.
Thanks and Regards,
Ramesh.
Edited by: ramesh_basis on Oct 9, 2010 2:33 PM
Hi Ramesh,
I think it's been a while since you posted this message. I am now in the same situation, installing ECC on AIX HACMP with an Oracle database. We have decided to go with an active-passive cluster and planned the mount points and resource groups accordingly. Since you have already gone through this exercise, it would be great if you could share the file system layout on both nodes, and also email me the installation document at harideep91740@gmail.com if you can.
Thanks,
Pradeep
You can export the shared file systems as
<virtual_ASCS_Hostname>:/export/usr/sap/trans and mount them on the D02 node as /usr/sap/trans. Do the same for /sapmnt/PRD.
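On AIX that typically means an /etc/exports entry on the node currently holding the service address, plus NFS mounts against the virtual hostname so the mounts still resolve after a failover. A sketch, assuming the DASHPROD_SVC hostname from this thread; nodeA/nodeB are placeholders:

```shell
# /etc/exports on the NFS-serving side (AIX syntax), then re-export:
#   /export/sapmnt/PRD    -root=nodeA:nodeB,access=nodeA:nodeB
#   /export/usr/sap/trans -root=nodeA:nodeB,access=nodeA:nodeB
exportfs -a

# On the DI node, mount via the virtual (service) hostname, not a
# physical node name, so the client keeps working after a failover:
mount -o bg,hard,intr DASHPROD_SVC:/export/sapmnt/PRD    /sapmnt/PRD
mount -o bg,hard,intr DASHPROD_SVC:/export/usr/sap/trans /usr/sap/trans
```

Add the mounts to /etc/filesystems (or let the cluster scripts mount them) so they come back after a reboot.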