
System Performance Poor Following Upgrade to Oracle 10g

Former Member
0 Kudos

In preparation for an SAP upgrade next year, we've moved from:

SAP Kernel 46D running Oracle 8.1.7 to

SAP Kernel 46D_EXT running Oracle 10.2.0.2 on Solaris SPARC 64bit.

The Oracle installation was performed over the weekend. Since then, certain queries and TCODEs have been running much longer and performing much worse.

Statistics were collected after the Oracle upgrade, including the Oracle dictionary statistics. That seems to make no difference.

There are no short dumps, no errors in SM21, and nothing unusual in the Oracle alert log.

Our current system is:

Kernel 46D_EXT Patch 2225

Oracle Client Lib: OCI_920__OCI_7_API

DBSL ver: 46D.00

ABA Support Package 48 for 4.6C

R/3 Support Package 48 for 4.6C

Basis Support Package 48 for 4.6C

R/3 HR Support Package 19

OS: Solaris 8

ST03 for the week before and the week after the DB upgrade shows increases in response time, wait time, load time, DB request time, time per DB request, and sequential reads.

Our gut feeling is that it's the sequential reads which are killing us.

It feels like the CBO is working against us.

What used to take a few seconds to return approximately 20 records now takes over 15 minutes.

Please Help!!

Accepted Solutions (1)


richard_howard
Active Participant
0 Kudos

Well, I've just weathered three days of hell. We upgraded production to 10g last Saturday, and our business was brought to its knees by one function module that needed optimizing. It was a volume-related issue, so it never came up in our testing, only when full business activity kicked in on Monday.

The issue only happened when all the following conditions were present:

1. We use "for all entries" with the select statement.

2. The table we select from is a small table (less than 1MB)

3. The internal table that we use with "for all entries" has hundreds of lines or more.

Ultimately, our solution was to rewrite the function module, but the Oracle expert who was involved said that our code would probably have been OK in 10g if we had had our blocking factor parameters set.

max_blocking_factor = 5 <== already set in your system

what needs changing is the following:

max_in_blocking_factor = 5

min_blocking_factor = 5

min_in_blocking_factor = 5

prefer_in_itab_opt = 1

The last one is kernel-specific, so be sure to check these and make them appropriate for your system.

See <b>OSS Note 881083</b> to understand the impact of these values.
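For context on what these parameters do: the SAP database interface translates FOR ALL ENTRIES into native SQL, and the blocking factors control how many internal-table rows are packed into each generated statement. A rough sketch of the kind of statement Oracle sees with an IN blocking factor of 5 (the table, columns, and bind variables here are hypothetical, purely for illustration, not taken from our function module):

-- ABAP: SELECT ... FROM makt FOR ALL ENTRIES IN itab WHERE matnr = itab-matnr
-- becomes one statement per block of 5 itab rows, roughly:
SELECT "MATNR", "MAKTX"
FROM "MAKT"
WHERE "MANDT" = :A0
AND "MATNR" IN (:A1, :A2, :A3, :A4, :A5);

With hundreds of itab lines and mismatched blocking factors, the interface can instead emit long OR chains, which the 10g CBO may answer with full table scans.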

The function module was one of those that gets executed by all kinds of interfaces. The inefficiency caused it to search a table hundreds of times but never find a row. This led to TIME_OUT short dumps and pushed DB CPU utilization to 100%.

Today is our first day able to handle our business load. It's really scary that one little function module could have such a catastrophic effect, but it did.

This is just my initial finding. I'll post more detail after our analysis is complete.

Former Member
0 Kudos

Richard,

We are having a similar problem with a query against the V_EQUI view. What is the function module you changed? I tried setting the five parameters as you mentioned:

max_blocking_factor = 5

max_in_blocking_factor = 5

min_blocking_factor = 5

min_in_blocking_factor = 5

prefer_in_itab_opt = 1

Unfortunately, that didn't help.

Thanks,

Jerry

Answers (11)


richard_howard
Active Participant
0 Kudos

The Function Module that we changed was custom. It was reading CAWNT, a table of only about 10,000 rows. We did a test of our old code without the blocking factors and then again after adding the blocking factors. It did not resolve the problem but it did improve the performance by almost 40%.

Ultimately, we had to rewrite the function module. The issue was that the FOR ALL ENTRIES code was loading the selected rows into an internal table. The function module had some logic to flush the memory holding that internal table, but it wasn't correctly placed at the beginning and end, so the internal table was never cleared and kept growing with every run.

We believe, but have not definitively proven, that this was not an issue in 9i because the statement took a rule-based (RBO) access path. With the RBO eliminated in 10g, the CBO chose a new access path with crippling results.

We've begun a full analysis of some 400 objects that use FOR ALL ENTRIES. For it to have a real impact, it has to be a very high-volume object. The more we research, the more we find that with 10g you can see very different results from your custom code. Spotting your troublemakers isn't too hard; you just have to focus on optimizing that code, with an understanding of how it will translate into native SQL using a CBO route to the data.

Former Member
0 Kudos

Tommy,

What was the resolution to your performance issue? We are having the same problem and need some help.

Thanks,

Lana

Former Member
0 Kudos

Tommy et al,

I've been watching this thread with avid interest because we have/had the same kind of issue: a massive increase in sequential reads, and it looked like Oracle was using the wrong access paths and ignoring indexes, while the CBO statistics seemed fine. SAP helped us out eventually, after much delving into our system, and we found that our PGA parameters were too small despite following the pre-upgrade checklist actions. The pre-upgrade script recommended setting:

WORKAREA_SIZE_POLICY = AUTO

PGA_AGGREGATE_TARGET = 25165824

on our 60 GB RAM UNIX system.

However, SAP has since told us to set it to 20% of total RAM, which is 12 GB as opposed to 24 MB! The result was instantaneous and dramatically good. See SAP Note 619876, "Oracle9i: Automatic PGA Memory Management".
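For anyone wanting the concrete commands, a minimal SQL*Plus sketch of that change (the 12 GB value assumes a 60 GB host as above; size yours according to the note):

ALTER SYSTEM SET workarea_size_policy = AUTO SCOPE = BOTH;
ALTER SYSTEM SET pga_aggregate_target = 12G SCOPE = BOTH;

Both parameters are dynamic, so no restart should be required, but verify the values against SAP Note 619876 for your own RAM size.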

Let me know if you need more info, sorry if you've already done this....

Tom Gregory

Former Member
0 Kudos

Dear Tommy

First of all, please check the note on recommended parameters for Oracle 10.2; this note is updated frequently.

Another checkpoint is disk I/O performance: what is the size of your redo log files?

If you installed Oracle with the defaults, the redo log file size is probably 50 MB. That is too small for a production system: the number of log switches should stay under 10 per hour, and frequent switching causes expensive disk I/O.
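Both points can be checked from SQL*Plus with the standard v$ views (generic Oracle queries, nothing SAP-specific):

-- redo log file sizes
SELECT group#, bytes/1024/1024 AS size_mb FROM v$log;

-- log switches per hour
SELECT TO_CHAR(first_time, 'YYYY-MM-DD HH24') AS hour, COUNT(*) AS switches
FROM v$log_history
GROUP BY TO_CHAR(first_time, 'YYYY-MM-DD HH24')
ORDER BY 1;

If the hourly count is regularly above 10, larger redo log files are indicated.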

Best Regards

JangHyun.

Former Member
0 Kudos

Hi Tommy,

What was your upgrade path for Oracle on Solaris 8? Did you upgrade Solaris as well, to Solaris 10?

I have a similar situation to yours: I'm on Solaris 8, Oracle 8, and SAP 46D, and need to move to Solaris 10, Oracle 10g, and SAP 46D_EXT.

Can you share your upgrade path, and if possible an email ID?

Regards

former_member235843
Discoverer
0 Kudos

I'm seeing the same issue. We are planning to upgrade to ECC 6.0 in the late second quarter of 2007. I've upgraded our Development, Quality Assurance, and Sandbox systems from Oracle 9.2.0.7 to 10.2.0.2. The performance was terrible.

Since then, I've reviewed notes 830576 and 871096. I noticed that I was missing some underscore parameters. I've corrected that and noticed some improvement in performance. What remains very high is the sequential reads. In our development environment before the Oracle 10g upgrade we averaged 1,000,000 scans per day; after the upgrade, we are at around 4-6 million daily. I'm looking into this. I'm seeing the same behaviour in our Quality Assurance and Sandbox systems as well.

Until I get the sequential reads down, the production system upgrade will be delayed.

According to Note 871096, SAP is skipping the Oracle 10.2.0.3 release. The next release will be 10.2.0.4, around July 2007.

System information:

Kernel 640 Unicode.

R3 Enterprise 470

OS: Solaris 9 64-bit

Did anybody set any Oracle parameters differently from what SAP recommends?

Former Member
0 Kudos

We also had very spotty performance after migrating from 9i to 10g. I found that reviewing and setting the Oracle parameters according to SAP note 830576 is very helpful, if you can handle the maze of referred-to notes and the inconsistencies you will find. There are parameter additions you need to make beyond what the SAP Oracle migration guide lists. Adding settings for the following parameters largely solved our problems (a quick way to verify them is sketched after the list):

OPTIMIZER_INDEX_CACHING

OPTIMIZER_INDEX_COST_ADJ

PARALLEL_EXECUTION_MESSAGE_SIZE

QUERY_REWRITE_ENABLED

STAR_TRANSFORMATION_ENABLED

EVENT

FILESYSTEMIO_OPTIONS

_B_TREE_BITMAP_PLANS

_FIX_CONTROL

_INDEX_JOIN_ENABLED

_IN_MEMORY_UNDO

_OPTIMIZER_MJC_ENABLED

_OPTIM_PEEK_USER_BINDS

_SORT_ELIMINATION_COST_RATIO
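A quick way to verify what is actually set (note that hidden underscore parameters only show up in V$PARAMETER once they have been explicitly set in the init.ora/spfile; this is a generic check, not from the SAP notes):

SELECT name, value, isdefault
FROM v$parameter
WHERE name IN ('optimizer_index_caching', 'optimizer_index_cost_adj',
'star_transformation_enabled', '_b_tree_bitmap_plans', '_optim_peek_user_binds')
ORDER BY name;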

We also explicitly deleted all of the table statistics we had created with "analyze table" and ran "brconnect -u / -c -f stats -t all -f collect" to create all new statistics with DBMS_STATS. Prior to this, we reloaded the DBSTATC table according to note 403704. If you have already made the switch to DBMS_STATS prior to migration, you may not need to be concerned with this process.
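For a single problem table, what brconnect does can be approximated directly with DBMS_STATS; a sketch (the SAPR3 schema and the CAWNT table mentioned earlier in this thread are just examples):

EXEC DBMS_STATS.GATHER_TABLE_STATS(ownname => 'SAPR3', tabname => 'CAWNT', estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE, cascade => TRUE);

In an SAP system, though, brconnect should remain the standard tool, since it respects the DBSTATC control settings.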

former_member235843
Discoverer
0 Kudos

I have looked at SAP note 830576 several times, and also the note about Oracle patches. The one parameter that I changed was db_file_multiblock_read_count. The default was 128; I set it to 8.

The parameters you listed below are set in our system also. What do you have for your event parameter setting?

In ST04, is your SQL area get ratio < 1%? I found an OSS note on this, but it's not available anymore. I looked on Oracle's Metalink website, and it's a known problem.

Also, did your full table scans / block reads increase?

Thanks for your reply.

Former Member
0 Kudos

I set all of the events listed in the note:

event = '10027 trace name context forever, level 1',
'10028 trace name context forever, level 1',
'10162 trace name context forever, level 1',
'10183 trace name context forever, level 1',
'10191 trace name context forever, level 1',
'10629 trace name context forever, level 32',
'38068 trace name context forever, level 100'

None of them seem to cause any harm.

The SQL area get ratio does show as "0.1". I hadn't paid attention to that until now.

I'm not sure what to say about table scans and block reads, since the database load on our test systems is unpredictable. I know they are much lower since updating the parameters.

lbreddemann
Active Contributor
0 Kudos

Hi Tommy,

Some users face issues like the ones you described after moving to Oracle 10g because of the changed parameter setup.

Therefore, the steps to resolve this issue would be:

- make sure you've installed all the patches recommended for SAP systems (<a href="https://websmp207.sap-ag.de/~form/handler?_APP=01100107900000000342&_EVENT=REDIR&_NNUM=871096&_NLANG=E">#871096 - Oracle Database 10g: Patch sets/patches for 10.2.0</a>)

- also make sure that ALL the recommended DB parameters are set (<a href="https://websmp207.sap-ag.de/~form/handler?_APP=01100107900000000342&_EVENT=REDIR&_NNUM=830576&_NLANG=E">#830576 - Parameter recommendations for Oracle 10g</a>)

- further, make sure that optimizer statistics have been created on all tables, since there is no RBO anymore (<a href="https://websmp207.sap-ag.de/~form/handler?_APP=01100107900000000342&_EVENT=REDIR&_NNUM=588668&_NLANG=E">#588668 - FAQ: Database statistics</a>)
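As a quick check for the last point, a generic query for tables that still have no CBO statistics (the SAPR3 owner is an assumption; substitute your SAP schema):

SELECT owner, table_name
FROM dba_tables
WHERE owner = 'SAPR3'
AND last_analyzed IS NULL;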

If the performance is still bad afterwards, please contact SAP support to investigate the reason for it.

Best regards,

Lars

Former Member
0 Kudos

How you set up disks for use in a database depends on the number of disks and the type of hard disk controllers available. If the hard disk controllers support both striping and mirroring, then Oracle recommends that you configure the controllers to support striping.

Striping provides significant performance advantages. All the space from the striped drives appears as a single logical drive. In addition, the space is used by interlacing stripes of space from all of the disks in the stripe. This means that a large file uses some space from the first disk, then some from the second disk, and so on to the last disk, and then starting back at the first disk again. Each file can be spread over all of the striped disks. Multiple CPUs can access data randomly in such a file without contention.

Controllers that support striping usually provide caching as well: data written to the controller can be cached there for a time before reaching the disk, and reads can be cached on the controller in the same manner. Read caching should not be used with Oracle Database, because all database reads are already cached in the System Global Area (SGA). The DB_CACHE_SIZE parameter in the initialization parameter file, init.ora, determines the size of the buffer cache in the SGA and is applied when the database starts.
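To see how the buffer cache and SGA are currently sized, the standard 10g views can be queried (a generic check, not specific to any storage layout):

SELECT name, value FROM v$parameter
WHERE name IN ('db_cache_size', 'sga_max_size');

SELECT name, bytes/1024/1024 AS mb FROM v$sgainfo;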

Former Member
0 Kudos

You have to examine where in Oracle you have a bottleneck.

Did you properly perform every Oracle-upgrade-specific step, including the specific SQL to refresh system and dictionary stats?

Follow the SAP Oracle notes on tuning for 10g.

Try to use the Oracle 10g client, if your kernel can handle it.

andreas_herzog
Active Contributor
0 Kudos

-> how about SAP Note 838725?

possible workarounds:

brconnect -u / -c -f stats -t system_stats

check your init.ora/spfile:

'_optimizer_join_sel_sanity_check' -> TRUE

delete parameter 'db_file_multiblock_read_count' or set it to 8

'_optim_peek_user_binds' -> FALSE

'optimizer_index_cost_adj' -> 50

'optimizer_index_caching' -> 50

'event' -> add '10183 trace name context forever, level 1'

'db_block_buffers' -> 75000

These values were the solution for major performance issues after upgrading from 9.2 to 10.2 on Windows 2003... maybe they work for you as well ;-) A sketch of how to apply them follows below.
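For reference, a sketch of how such values could be applied via the spfile (whether they suit your system must be checked against the current SAP notes; underscore parameters need double quotes):

ALTER SYSTEM SET "_optimizer_join_sel_sanity_check" = TRUE SCOPE = SPFILE;
ALTER SYSTEM SET "_optim_peek_user_binds" = FALSE SCOPE = SPFILE;
ALTER SYSTEM SET optimizer_index_cost_adj = 50 SCOPE = SPFILE;
ALTER SYSTEM SET optimizer_index_caching = 50 SCOPE = SPFILE;
ALTER SYSTEM RESET db_file_multiblock_read_count SCOPE = SPFILE SID = '*';

A restart is required for the SPFILE-scoped changes to take effect.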

GreetZ, AH

Former Member
0 Kudos

Dear Tommy,

Oracle 10g is bigger and more complicated than previous releases of Oracle. The infrastructure and internal machinery are more sophisticated than ever before, and with this sophistication come greater resource requirements and overhead.

You need to look into memory usage, parsing, and the resources consumed by some of the new features:

Increase the SGA and associated memory structures using the advisor values.

Collect good statistics, with a better sample size, for the entire database.

Have a look at CPU usage with sar, and check which processes are using large amounts of memory and whether that is required.

Generate an Automatic Workload Repository (AWR) report; a minimal sketch follows below.
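A minimal sketch for that last step, using the standard 10g package and script as a DBA user:

-- take a snapshot before and after the workload window
EXEC DBMS_WORKLOAD_REPOSITORY.CREATE_SNAPSHOT;
-- ...let the workload run, take a second snapshot, then:
@?/rdbms/admin/awrrpt.sql

The awrrpt.sql script prompts for the begin and end snapshot IDs and writes the report.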

Oracle 10g imposes more overhead than previous releases in many areas, ranging from memory usage to hard parses and the number of objects in the SYS schema. However, these increased needs are manageable and tolerable.

Regards

Vinod