
SAP NetWeaver 7.0 Java on MSCS

odysseas_spyroglou
Participant

Hello,

We are planning to install SAP NetWeaver 7.0 Java with SQL Server 2005 on a two-node cluster. As I was reading the installation guide that I downloaded from service.sap.com, I noticed that the Central Instance has to be installed locally (and not on a cluster shared disk) on one node of the cluster, and the Dialog Instance on the other node. The question is: considering this architecture, if the node that has the Central Instance fails, won't the whole system fail too? In that case we would not have a fault-tolerant cluster, leaving us with a single point of failure!

Is there any other way to have a fault-tolerant (clustered) Central Instance? For example, is it possible to install it locally on both nodes of the cluster, or can the Dialog Instance substitute for it?

Thanks in advance

Accepted Solutions (0)

Answers (4)

Former Member

Hi

Thanks for the reply.

Can you explain in a little more detail? The problem is still not solved. One more thing: I installed the central instance with instance number 01 and the dialog instance with instance number 03.

I am using http://hostname:50100/irj and it connects to node 1. If I use http://virtualclustername:50100/irj it also connects to node 1 only.

But my node 2 is at http://hostname:50300/irj.
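
For reference, the 50100/50300 pattern follows the standard SAP J2EE HTTP port rule: 50000 + 100 * instance number. A minimal sketch, assuming the default port configuration with no overrides in the Java dispatcher's HTTP settings:

    # Default SAP J2EE HTTP port rule: 5<NN>00, i.e. 50000 + 100 * NN.
    # Assumes the standard configuration; a changed HTTP provider port
    # on the Java dispatcher would give a different result.
    def j2ee_http_port(instance_number):
        return 50000 + 100 * instance_number

    print(j2ee_http_port(1))   # 50100 -> central instance (01) on node 1
    print(j2ee_http_port(3))   # 50300 -> dialog instance (03) on node 2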

So how will failover work?

What URL do I need to give to end users? I am a little confused in this case. Can you give me a solution for this? jcontrol.exe on my node 2 is still stopped.
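
This is the heart of the problem: neither node-specific URL fails over by itself, so end users need a single entry point in front of both dispatchers (for example an SAP Web Dispatcher or another HTTP load balancer). As a rough sketch of what that entry point has to do, with hostnames node1 and node2 assumed purely for illustration:

    # Rough availability probe: try each node's /irj until one answers.
    # "node1"/"node2" are placeholder hostnames; in practice a load
    # balancer would publish the single URL instead of this loop.
    import urllib.request

    CANDIDATES = [
        "http://node1:50100/irj",   # central instance (01)
        "http://node2:50300/irj",   # dialog instance (03)
    ]

    def first_alive(urls, timeout=5):
        for url in urls:
            try:
                urllib.request.urlopen(url, timeout=timeout)
                return url
            except OSError:   # covers connection failures and HTTP errors
                continue
        return None

    print(first_alive(CANDIDATES))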

Regards

Rao

Former Member

Hi

I also installed an EP server on MSCS.

On my first node it is working, and I am able to log in with http://hostname:50100/irj

But when I checked node 2, jcontrol is not starting.

If I move all groups to node 2, I need to stop and start the MMC, and if I restart the MMC on node 2 it starts perfectly.

If I move the groups back to node 1, node 2's jcontrol stops again.

Why is this happening? Can you give me a solution to this problem?

thanks

Rao.

Former Member
On my first node it is working, and I am able to log in with http://hostname:50100/irj
But when I checked node 2, jcontrol is not starting.
If I move all groups to node 2, I need to stop and start the MMC, and if I restart the MMC on node 2 it starts perfectly.
If I move the groups back to node 1, node 2's jcontrol stops again.

I have had problems with an MSCS installation before, and the symptoms you describe sound similar.

What happened was:

sapinst added the share SAPMNT to the SAP group of the cluster, as it should,

but it also added the share SAPLOC to the SAP group.

Since the Central Instance and the Dialog Instance are installed locally on each node, each node should have its own SAPLOC.

If SAPLOC is in the cluster group, it overrides the local SAPLOC share.

Then, when the SAP cluster group is moved to another node, the local SAPLOC is deleted.
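
A quick way to check for this is whether SAPLOC still exists as a local share on each node. A small diagnostic sketch, wrapping the standard Windows "net share" command (run it on both nodes):

    # Report whether the SAPMNT/SAPLOC shares exist on this node.
    # "net share <name>" exits non-zero if the share does not exist.
    import subprocess

    def has_local_share(name):
        result = subprocess.run(["net", "share", name],
                                capture_output=True, text=True)
        return result.returncode == 0

    for share in ("SAPMNT", "SAPLOC"):
        status = "present" if has_local_share(share) else "MISSING"
        print(share, "-", status, "as a local share")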

odysseas_spyroglou
Participant

So, Tomas, is there a solution?

Is it possible to have SDM on both nodes?

Thanks.

Former Member

I don't think it's possible. I sure haven't seen such a solution before.

Former Member

The installation guide for an MSCS-based system is a bit misleading. What you are really installing locally are two separate dialog instances, which the guide identifies as a CI and a DI. The crucial part is the SCS, which contains the message server and is installed on a clustered drive (and hence will fail over from one node to the other as needed).
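
One way to see this split for yourself is to list the processes of each instance with sapcontrol (shipped with the SAP kernel). A sketch; the instance numbers are assumptions based on this thread, so adjust them to your system:

    # List each instance's processes via sapcontrol.
    # The SCS should show the message/enqueue server processes, while
    # the "CI" and "DI" should each show only Java processes.
    import subprocess

    for nr in ("00", "01", "03"):   # 00 = SCS (assumed), 01 = "CI", 03 = "DI"
        print("--- instance", nr, "---")
        subprocess.run(["sapcontrol", "-nr", nr, "-function", "GetProcessList"])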

Does that help?

J. Haynes

odysseas_spyroglou
Participant

Hi Joe,

Doesn't the fact that the Central Instance (CI) is installed locally on one node of the cluster mean the system loses its fault tolerance? Doesn't the failure of the server that hosts the CI bring the whole system down?

Thanks

Former Member

Actually, it won't, because it is not really a CI (they just call it that). The message server is in the cluster, and the two nodes both have dialog instances with just a dispatcher installed on each.

- jph

odysseas_spyroglou
Participant

OK, we have just finished the installation successfully, but we noticed this:

In the SAP Management Console, in the J2EE process table of the first node of the cluster we have

SDM, dispatcher, and Server0.

In the J2EE process table of the second node of the cluster we only have dispatcher and Server0 (we don't have SDM).

Did we configure something wrong during the installation?

Can we install SDM on the second node of the cluster without having to reinstall the whole system?

Thanks

Former Member

That part I can't answer. We noticed the same issue on our installation, and noted that the Visual Administrator is installed on only one node as well.

I'm hoping that someone from SAP can answer whether the SDM or the VA can be installed on the second node.

Thanks,

Joe

Former Member
In the SAP Management Console, in the J2EE process table of the first node of the cluster we have
SDM, dispatcher, and Server0.
In the J2EE process table of the second node of the cluster we only have dispatcher and Server0 (we don't have SDM).
Did we configure something wrong during the installation?

No, you did everything right.

SDM is the Software Deployment Manager, and that is one reason why they still call the first node the CI.

As mentioned before, the major difference between a CI and a DI is the Message Server and the Enqueue Server, which have been removed from the CI here.

Both the Message Server and the Enqueue Server are clustered via MSCS and handled there as separate services.

Technically you can call both nodes Application Servers or Dialog Instances.

Also, they are handled via the SAPMMC and not via the Cluster Manager.

However, there is one more difference (as you noticed).

The SDM is located on the CI.

What will happen if the CI dies and you run only on the DI?

Well, the system will keep running with only one node left,

but you cannot run SDM or JSPM, and thus cannot patch the Java components.

If you are used to the "old" cluster solution from SAP, where everything was handled from the Cluster Manager, there are some important differences.

Some parts (the CI and DI) are not handled via the Cluster Manager.

If the CI node reboots itself, you have to start the CI instance manually via SAPMMC.

The same applies to the DI.

If you want to stop one node, first stop the system via SAPMMC and then stop the rest of the components via Cluster Manager.

When starting the node, start the components in Cluster Manager first and then the rest via SAPMMC.

All MSCS services need to be up and running to be able to start the CI and/or DI via SAPMMC.
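
As a rough illustration of that ordering (the instance number "01" and the cluster group name "SAP <SID>" are placeholders for your own values; cluster.exe is the Windows Server 2003 cluster command line):

    # Stop: SAP instance first (SAPMMC/sapcontrol), then the clustered
    # resources. Start: clustered resources first, then the SAP instance.
    import subprocess

    def stop_node():
        subprocess.run(["sapcontrol", "-nr", "01", "-function", "Stop"])
        subprocess.run(["cluster", "group", "SAP <SID>", "/offline"])

    def start_node():
        subprocess.run(["cluster", "group", "SAP <SID>", "/online"])
        subprocess.run(["sapcontrol", "-nr", "01", "-function", "Start"])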