
i5 power + CPU - CPW question

blaw
Active Participant
0 Kudos

We moved our BW system from an 870 with 2 CPU 21GB memory to i5 power+ with 1 CPU allocated and 21GB memory. (V5R4 on both)

The performance is slower in the i5. When we had our box sized for the upgrade, we were told that 1 CPU on the i5 has the same CPW as 2 on the 870.

Does anyone know if this is correct? Do we still need 2 CPU to get true parallel processing?

Regards,

Brian

Accepted Solutions (1)

blaw
Active Participant
0 Kudos

This is WRKSYSSTS while running just one query for the first time (not in cache). The loads from R/3 will run at 100%.

07/31/07  08:36:39
% CPU used . . . . . . . :    83.8     Auxiliary storage:
% DB capability  . . . . :   124.0       System ASP . . . . . . :  4022 G
Elapsed time . . . . . . : 00:00:00      % system ASP used  . . : 31.4954
Jobs in system . . . . . :    1751       Total  . . . . . . . . :  4022 G
% perm addresses . . . . :    .007       Current unprotect used : 15930 M
% temp addresses . . . . :    .019       Maximum unprotect  . . : 16356 M

 System     Pool   Reserved    Max    -----DB-----   ---Non-DB---
  Pool    Size (M) Size (M)  Active   Fault  Pages   Fault  Pages
    1       831.45   429.46   +++++     .0     .0    158.0  158.0
    2     14462.35     1.03     600     .0     .0       .0     .0
    3        80.25      .00      70     .0     .0       .0     .0
    4      6000.00      .00     125     .0     .0       .0     .0

While running the upgrade PREPARE, I was getting:

07/26/07  21:32:1
% CPU used . . . . . . . :     6.4    System ASP . . . . . . . :   4022
% DB capability  . . . . :      .0    % system ASP used  . . . : 29.996
Elapsed time . . . . . . : 00:00:18   Total aux stg  . . . . . :   4022
Jobs in system . . . . . :     907    Current unprotect used . :  17935
% perm addresses . . . . :    .007    Maximum unprotect  . . . :  29719
% temp addresses . . . . :    .013

 Sys      Pool   Reserved    Max    ----DB----   --Non-DB--    Act-  Wait-  Act-
 Pool   Size M    Size M     Act    Fault Pages  Fault Pages   Wait  Inel   Inel
  1      831.45    428.98  +++++      .0    .0     .1    1.7   12.7    .0     .
  2    14462.35      1.90    600      .0    .0     .0    3.0  37428    .0     .
  3       80.25       .00     70      .0    .0     .0     .0     .0    .0     .
  4     6000.00       .00    125      .0    .0     .4    3.0   31.8    .0     .

Regards,

Brian

Former Member
0 Kudos

Hi Brian,

That is interesting. :))

Where is your performance issue: in the first screenshot, the second, or both?

At least this shows that you didn't implement note 428855. That is really necessary for your machine pool (along with the other settings it describes).

Did you activate EVI Stage 2 with SAP_SANITY_CHECK_DB4 and notes 501572 & 541508? Please rerun SAP_SANITY_CHECK_DB4 to see if everything is still OK.

Regards

Volker Gueldenpfennig, consolut.gmbh

http://www.consolut.de - http://www.4soi.de - http://www.easymarketplace.de

Answers (4)

blaw
Active Participant
0 Kudos

Thank you all for the answers.

We have several things to look at. I don't think we are set up correctly for the virtual processors.

We don't have DB4_PSA_PARTITIONING active.

We have to adjust our parameters as well.

Thanks,

Brian

Former Member
0 Kudos

With an i5 machine, you can make use of an "uncapped partition", which lets a performance-sensitive partition use idle CPU power from its neighbouring partitions when it needs additional capacity (you will see CPU utilization above 100% when uncapping is in effect). In your case, the BW partition can be assigned 1 processing unit with 2 virtual processors, so that it can grab up to 1 additional processing unit of idle CPU.
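The capped/uncapped arithmetic described above can be sketched as follows. This is illustrative only; the function name and signature are ours, not IBM terminology, and it assumes the simple rule that an uncapped partition's ceiling is one processing unit per virtual processor:

```python
def max_usable_units(entitled_units: float, virtual_procs: int, capped: bool) -> float:
    # A capped partition never exceeds its entitlement; an uncapped
    # partition can borrow idle capacity from other partitions, bounded
    # by one processing unit per virtual processor.
    if capped:
        return entitled_units
    return float(virtual_procs)

# Brian's BW partition: 1.0 processing unit entitled, 2 virtual processors.
print(max_usable_units(1.0, 2, capped=False))  # uncapped: can reach 2.0 units
print(max_usable_units(1.0, 2, capped=True))   # capped: stays at 1.0 unit
```

This is why utilization above 100% is possible on an uncapped partition: the denominator is the entitlement, not the ceiling.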

blaw
Active Participant
0 Kudos

I'm not sure what it is at this point. When we were doing data loads into BW from R/3, the CPU was pegged at 100%. But when I was running the prepare for the upgrade to BI 7.0, the I/O and active wait in *BASE was 36000+. (Maybe increasing the max activity level, currently 600, will fix this.)

I'm waiting for our Operations people to find out how many "virtual" CPUs they defined for the partition.

We have 11 CPUs active across 12 partitions:

4.0 dedicated to R/3 Production

1.0 R/3 Prod app server

.80 Legacy

.50 CRM dev/qas

.50 BW dev/qas

.40 PLM dev/qas

.40 SCM dev/qas

1.0 CRM prod

1.0 BW prod

.40 PLM prod

.50 SCM prod

.50 R/3 dev/qas
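As a quick sanity check on the list above, the entitled processing units do add up to the 11 active CPUs (a small sketch; the values are taken from the list):

```python
# Entitled processing units from the partition list above, in order.
entitlements = [4.0, 1.0, 0.8, 0.5, 0.5, 0.4, 0.4, 1.0, 1.0, 0.4, 0.5, 0.5]

total = round(sum(entitlements), 2)
print(len(entitlements))  # 12 partitions
print(total)              # 11.0 processing units
```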

We do have QPRCMLTTSK set to 1 (= ON).

We are doing the upgrade to BI 7.0 this weekend, so next week I will be checking all the notes for performance.

Thank you,

Brian

JanStallkamp
Employee
0 Kudos

Hi Brian.

Just one idea that perhaps might help. You write that you had CPU utilization of 100%. This might indicate that you are using dedicated processors and not sharing them among partitions. If this is the case, have you thought about sharing processors?

IBM has published a document that might be of some interest for you:

SAP Solutions on IBM eServer iSeries - a virtualization and consolidation study: http://www-03.ibm.com/servers/eserver/iseries/perfmgmt/pdf/sapvirti5.pdf

With kind regards,

Jan

Former Member
0 Kudos

Hi Brian,

what are the WRKSYSSTS figures during the run of the Queries ?

Regards

Volker Gueldenpfennig, consolut.gmbh

http://www.consolut.de - http://www.4soi.de - http://www.easymarketplace.de

0 Kudos

There are many factors that could have an effect here. Without more details about the configuration, it will be hard to tell why the new system does not reach the performance of the old one.

A few thoughts, and maybe something to look at:

- Are you sure that the slow performance results from a CPU bottleneck? This is the case if the CPU utilization is above 90% for a sustained period. If not, you should look for other causes, such as high paging, misconfiguration of the SAP installation, or some poorly performing queries.

- Assuming that you are running in a partition on the Power5+ box: How many "virtual" CPUs did you define for the partition? The number should not be higher than the processing units (only rounded up to the next whole number, if needed), because otherwise you may lose time slots if you have single-threaded load.

- Is SMT active? Check system value QPRCMLTTSK, which is set based on the hardware settings. SMT allows running 2 threads in one CPU.
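The virtual-processor sizing rule above can be sketched as a one-line heuristic. This is our reading of the advice, not an official IBM formula:

```python
import math

def recommended_virtual_procs(processing_units: float) -> int:
    # Rule of thumb from the advice above: set virtual processors equal
    # to the entitlement, rounded up to the next whole number. More
    # virtual processors than that can waste dispatch time slots when
    # the workload is single-threaded.
    return math.ceil(processing_units)

print(recommended_virtual_procs(1.0))  # 1 (e.g. the BW prod partition)
print(recommended_virtual_procs(0.5))  # 1 (e.g. a dev/qas partition)
print(recommended_virtual_procs(4.0))  # 4 (e.g. R/3 production)
```

Note that deliberately configuring extra virtual processors (as suggested earlier in the thread for uncapping) trades some single-threaded efficiency for the ability to borrow idle capacity.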

If the above hints don't help, a detailed analysis will be required to understand why you do not get the performance that you expect.

Kind regards,

Christian Bartels.