
CI/DB Server performance on VMware?

Former Member

I am looking at Vmware compared to physical hosting for a replacement SAP infrastructure.

Would anyone care to comment on the following statement in OSS note 674851?

"...applications that actually access the database, or communicate or print using the network, are significantly slower " [on a virtualized server, when compared to a physical server]

and (even more frightening)

"For release upgrades and Unicode migrations in a virtual environment, runtimes that are up to five times longer than the runtimes in a non-virtualized environment have occurred."

Are these statements borne out by your experience?

It seems to me that almost every program in SAP 'actually accesses the database', so that would mean everything is significantly slower under VMware. I can accept the touted 10-20% slower than physical hardware; I can't accept 5x slower...

If so, is there any real benefit in virtualising the CI/DB in an ABAP based system?

Or is a better approach to virtualise only non-production environments?

Thanks in advance,

Andy

Accepted Solutions (1)

markus_doehr2
Active Contributor

> It seems to me that almost every program in SAP 'actually accesses the database', so that would mean everything is significantly slower under VMware. I can accept the touted 10-20% slower than physical hardware; I can't accept 5x slower...

I'd say (as a diplomatic answer): it depends.

> If so, is there any real benefit in virtualising the CI/DB in an ABAP based system?

You get "free" high availability: if one server crashes, you can just boot the instance on another one.

> Or is a better approach to virtualise only non-production environments?

I agree that virtualization is a big hype; it has many advantages (apart from the HA point) but it comes at a cost. On small to medium-sized systems (where size = I/O load and number of users) the users won't even notice that the system runs virtualized; on bigger systems with lots of I/O you may see a significant performance penalty. This all depends on your environment, how you use the systems and what you do with them. I also have a problem with putting big systems (> 64 GB RAM and > 1000 users) on VMware (or other virtualization environments).

I'd set up a copy of your production system, simulate a more or less production-like load (e.g. batch jobs) and check the runtimes of the jobs. It's very difficult to give general advice.

Markus

Former Member

In general, "it depends" is the correct answer. What is not correct - or better: obsolete - are statements like "virtualization is x times slower when doing this and that". Some of the tests SAP is concerned about were done on 1) older hardware lacking virtualization features and 2) older ESX releases. But as there is still a chance that customers use old processors together with ESX 3.5 and could therefore run into such performance degradation, the statements remain in the Notes.

To see how hardware with appropriate virtualization features performs on vSphere (ESX 4), take a look at this document:

http://www.vmware.com/resources/techresources/10026

and the results of the 2008 Virtualization Certification Workshop of SAP LinuxLab:

http://wiki.sdn.sap.com/wiki/display/HOME/LinuxVirtualizationCertificationWorkshop

SD Benchmark result:

http://download.sap.com/download.epd?context=B7691794A7D3E12043C201290ABF37F5DED04E103D4E310018D06CB...

So I think there is more benefit than just nice features like VMware HA. Increased manageability, availability and consolidation are a few of them.

Matthias

markus_doehr2
Active Contributor

> SD Benchmark result:

> http://download.sap.com/download.epd?context=B7691794A7D3E12043C201290ABF37F5DED04E103D4E310018D06CB...

Well - that benchmark was done on a not-yet-released version of MaxDB.

We have seen in certain test runs, especially with MaxDB, a partly significant performance issue. The relevant "database time" is the time the database needs to read one 8 KB block. If you have an I/O time of 5 - 8 ms per I/O on bare metal and, with a virtualization layer, now 10 - 16 ms, then the I/O path is only half as fast, and the time the system spends waiting for I/O to complete is increased by a factor of 2. However, this may or may not be significant for an environment; hence my answer was "it depends".

For a dialog benchmark (which the SD benchmark is) this effect doesn't show up, because such a benchmark doesn't run the kind of single long reports that production systems do. I'd (generally) be very careful with those benchmark/SAPS numbers. MRP runs and other CO-PA related programs degraded (in our environment) to factors that weren't acceptable (3 - 4 times and more compared to the time they needed on bare metal). This was on ESX 3.5.

I don't want to bash VMware; my experiences with it (on 3.5) were, well, not really convincing for our environment. Mileage may vary - that was ours.

Markus

Former Member

Thanks for the input. The 'not completely convinced' situation applies here at the moment.

Having done a small proof of concept, the output is less than convincing. The comparison was between a 7-year-old Unix server, an rp7410 (4 CPUs, 8 GB, HP-UX/Oracle), and a brand new 4-way Windows box with 48 GB RAM (Windows/SQL Server).

The new box is just about keeping up with the old guy, which isn't really what we expected.

I can only presume I/O is the big issue, because the HP-UX server has 60 spindles on 4 SCSI controllers, whereas the Windows box has a single 4 Gb FC-AL connection with a mere 5 spindles on the end of it.

That said, even when all the data being read is already cached on the Windows server, the CPU time used by a single process is greater than the equivalent CPU time on the HP-UX server - which implies a 7-year-old CPU is more SAP-efficient than the latest and greatest from Intel. I am having difficulty believing that at the moment.

Sorry to add another question, but is RDM recommended for the database filesystem(s) rather than VMFS?

I can see mention of such mixed environments, but no comments on their benefits or performance.

Thanks, Andy.

markus_doehr2
Active Contributor

> Having done a small proof of concept, the output is less than convincing. The comparison was between a 7-year-old Unix server, an rp7410 (4 CPUs, 8 GB, HP-UX/Oracle), and a brand new 4-way Windows box with 48 GB RAM (Windows/SQL Server).

> The new box is just about keeping up with the old guy, which isn't really what we expected.

If you have a chance, compare native Oracle vs. Oracle on a VM. If you migrated your system from one database to the other and did not optimize the target system (such as using page compression on SQL Server, or other tuning), the comparison is not really fair.

> I can only presume I/O is the big issue, because the HP-UX server has 60 spindles on 4 SCSI controllers, whereas the Windows box has a single 4 Gb FC-AL connection with a mere 5 spindles on the end of it.

...and maybe a 7-year-old, tuned Oracle vs. a freshly installed SQL Server.

> That said, even when all the data being read is already cached on the Windows server, the CPU time used by a single process is greater than the equivalent CPU time on the HP-UX server - which implies a 7-year-old CPU is more SAP-efficient than the latest and greatest from Intel. I am having difficulty believing that at the moment.

What memory implementation are you using on your Windows box (es/implementation = flat? view?), and on what ESX version?

Markus

Former Member

Thanks, Markus.

Absolutely true about Oracle vs. SQL Server, of course.

There does not seem to be much tuning possible with SQL Server, other than getting the I/O paths right.

We've given it lots of memory (16GB of the physical 48GB available), and followed the other SAP recommendations for MSS.

We are running vSphere, and have not set the em implementation to flat (I understood it's a problem in our case, from one of the notes I read; I can't recall which, I'm sorry). We are running W2k3/SQL 2005.

We are going to rearrange the physical disk arrays to give multiple paths, and add some more HBAs to see if that helps matters.

If there is no apparent benefit we will try on physical tin and see if we get a marked difference. If so then I am afraid Production will probably go physical.

It does appear that straight-line CPU delivery is little better than the rp7410's, which is very surprising considering the age of the HP-UX kit.

Cheers, Andy.

Former Member

Hello Andy,

According to [SAP Note 1002587|https://service.sap.com/sap/support/notes/1002587], the flat memory model is not supported on Windows Server 2003.

To see more details on how the storage behaves (e.g. throughput, latency...), you can use 'esxtop' on the ESX console. See [VMware KB 1008205|http://kb.vmware.com/kb/1008205] for more details. You can post the results here, or open an SAP ticket under component BC-OP-NT-ESX and I can help you from there.

I don't want to bury you in documents, but these four could be vital for your setup, especially if you don't get the performance you expect:

[SAP Solutions on VMware vSphere 4 - Best Practice Guidelines|http://www.vmware.com/resources/techresources/10086]

[Performance Best Practices for VMware vSphere 4.0|http://www.vmware.com/resources/techresources/10041]

[Performance Troubleshooting for VMware vSphere 4|http://www.vmware.com/resources/techresources/10066]

[VMware vSphere 4 Performance with Extreme I/O Workloads|http://www.vmware.com/resources/techresources/10054]

Kind regards,

Matthias Schlarb

markus_doehr2
Active Contributor

> There does not seem to be much tuning possible with SQL Server, other than getting the I/O paths right.

I'd try:

- use Windows 2008 SP1

- use SQL Server 2008 SP1 with CU7 (so you have build 2677)

- before loading the database, change DDLMSS.TPL to use page compression (not row compression). This will make your database much smaller (about 40 - 60 %) --> less data must be read

cretab: CREATE TABLE &tab_name&
        ( /{ &fld_name& &fld_desc& /-, /} ) &norowcompression& WITH ( DATA_COMPRESSION = PAGE )

- if you use Windows 2008 set

es/implementation = flat

es/use_mprotect = false (only for the performance test)

> If there is no apparent benefit we will try on physical tin and see if we get a marked difference. If so then I am afraid Production will probably go physical.

Very understandable decision.

Markus

Former Member
0 Kudos

Markus,

> - use Windows 2008 SP1

> - use SQL Server 2008 SP1 with CU7 (so you have build 2677)

> - before loading the database, change DDLMSS.TPL to use page compression (not row compression). This will make your database much smaller (about 40 - 60 %) --> less data must be read

Unfortunately we are running R/3 4.7 for the moment, so Windows 2008 as a platform is not possible for us. Once the upgrade and Unicode conversion are done, we will indeed go to Windows 2008.

Thanks for your help.

Andy.

Former Member

As a roundup of this thread, I would like to share my findings.

- we found we could significantly improve the database time by simply rearranging the storage to a more traditional mirror-set approach (4 volumes, mirrored, one per SQL Server data file, and a separate mirror set for transaction logs) - a sketch of such a layout follows this list

- we found that the CPU (individual core) performance on the VM was not significantly better on this relatively new AMD-based server (DL385); however, there are of course now 12 cores on the server, rather than the 4 on the old HP rp7410

- overall batch runtimes were significantly better once the I/O paths were improved (typically 20-30% better runtime, e.g. 1000 seconds down to 700 seconds)

We aim to perform proper LoadRunner performance tests prior to go-live, but the indications are that we will get:

- acceptable performance (though it is a smallish system - < 200 concurrent users)

- a reduced infrastructure footprint

- inherent poor man's 'HA' by implementing a small VM host farm and manually restarting on another host in case of hardware failure

- lower hardware costs

- lower software costs (actually nothing to do with VMware - just moving Oracle -> SQL Server)

- increased agility to adopt newer server technology as it becomes available.

- improved integration/resource sharing with other applications

So it seems a sensible way forward for us to go with a VMware-based solution.

Andy.
