Tune redo log size on Oracle for Netweaver 7.01 BI ABAP system.

Former Member
0 Kudos

Hello,

We have a large BI system (2.5 TB) with many BI loads from multiple source systems. We have a problem with the database of this system during parallel BI loads. Each day around 500 GB of redo logs are generated, with sometimes only 4 minutes between redo log switches.

Performance Indicator   Description                                  Observed Value   Reference Value
free buffer waits       wait time for free buffer in buffer cache   considerable     negligible

We have already increased the redo log size and count to 8 groups, each 1500 MB.

The redo log filesystems are located on a dedicated SAN volume (RAID 1 on 50 FC disks).

The database is located on a RAID 5 SAN volume.

I already plan some actions for this Sunday:

- Fix the ordering of the redo log groups

- Increase the size of the SGA memory.

I would like to know whether it is a good idea to increase the redo log size to 2 GB.

Until now, the archived redo logs have been on a logical volume in the same VG as the database. I am thinking about adding a dedicated RAID 1 VG just for them, but I don't know whether it would help.

I'm also thinking about creating 4 SAN disks, one VG for each redo log directory.

Do you have any advice to help us improve the database performance?

Thank you very much.

P-S Baton

SQL> select substr(name,1,30),value from v$sysstat where name LIKE '%redo%';

SUBSTR(NAME,1,30)                VALUE
-------------------------------- ----------
redo synch writes                  32230511
redo synch time                   157941301
redo blocks read for recovery             0
redo entries                     4722721796
redo size                        4.2058E+12
redo buffer allocation retries       694366
redo wastage                     1.7405E+10
redo writer latching time             33951
redo writes                        29366293
redo blocks written              4188779881
redo write time                    31616307
redo log space requests              295837
redo log space wait time           20253938
redo log switch interrupts                0
redo ordering marks               214083396
redo subscn max counts            456039291

SQL> select to_char(min(completion_time),'HH24:MI:SS DD.MM.YYYY')
  2  comptimeMin,
  3         to_char(max(completion_time),'HH24:MI:SS DD.MM.YYYY')
  4  comptimeMax,
  5  count(recid) ArchNo,
  6  round(( max(completion_time) - min(completion_time) ) * 24 * 60 ,2)
  7  log_switch_comp_timelag,
  8  round((( max(completion_time) - min(completion_time) ) * 24 * 60),2) /
  9  count(recid) log_switch_minutes,
 10  round(sum(blocks * block_size) /1024/1024/1024 ,2) GBRedo
 11  from
 12  V$ARCHIVED_LOG
 13  where completion_time between
 14  to_date('16.09.2010 00:00:09','DD.MM.YYYY HH24:MI:SS' ) and
 15  to_date('16.09.2010 23:59:59','DD.MM.YYYY HH24:MI:SS' );

COMPTIMEMIN         COMPTIMEMAX             ARCHNO LOG_SWITCH_COMP_TIMELAG LOG_SWITCH_MINUTES     GBREDO
------------------- ------------------- ---------- ----------------------- ------------------ ----------
00:05:05 16.09.2010 23:58:16 16.09.2010        317                 1433.18         4.52107256     457.67

SQL> SELECT * FROM V$LOG;

    GROUP#    THREAD#  SEQUENCE#      BYTES    MEMBERS ARCHIVED   STATUS              FIRST_CHANGE# FIRST_TIME
---------- ---------- ---------- ---------- ---------- ---------- ---------------- ---------------- ---------------
        11          1     136898 1572864000          2 YES        INACTIVE              16957556645 17-SEP-10
        12          1     136902 1572864000          2 YES        INACTIVE              16958397148 17-SEP-10
        13          1     136900 1572864000          2 YES        INACTIVE              16957867244 17-SEP-10
        14          1     136899 1572864000          2 YES        INACTIVE              16957644566 17-SEP-10
        15          1     136901 1572864000          2 YES        INACTIVE              16958089670 17-SEP-10
        16          1     136897 1572864000          2 YES        INACTIVE              16957473109 17-SEP-10
        17          1     136903 1572864000          2 YES        INACTIVE              16958696222 17-SEP-10
        18          1     136904 1572864000          2 NO         CURRENT               16959338566 17-SEP-10

Accepted Solutions (1)

audunlea_hansen
Active Participant
0 Kudos

Hi.

At the Oracle User Group conferences I have attended, speakers have said that redo log sizes of 2-4 GB are fine when there is a huge amount of change in the database.

Moving the redo logs onto their own disks, with a RAID level optimized for writes, would increase performance, since they would no longer affect, or be affected by, I/O to the datafiles.

When you increase the SGA beyond approximately 8 GB, you will want to use HugePages, since otherwise the page tables in memory get too big and slow down memory access.

We ourselves are running databases with a 40 GB SGA on HugePages.
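To see where the SGA currently stands before such an increase, a quick check (just an illustration) is:

SELECT round(sum(value)/1024/1024/1024, 1) AS sga_gb FROM v$sga;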

Regards

Audun

Former Member
0 Kudos

Hello,

Thank you for your answer.

The redo logs are already in their own RAID 1 volume group; the filesystem I am thinking about moving is the one holding the archived redo logs (vgsanBIP -> vgBIParch).

hostname # bdf /oracle/BIP/origlog* /oracle/BIP/mirrlog* /oracle/BIP/oraarch
Filesystem          kbytes    used   avail %used Mounted on
/dev/vgBIPredo/lvorigABIP
                   15695872 6186517 8915026   41% /oracle/BIP/origlogA
/dev/vgBIPredo/lvorigBBIP
                   15695872 6186517 8915026   41% /oracle/BIP/origlogB
/dev/vgBIPredo/lvmirrABIP
                   15695872 6164965 8935232   41% /oracle/BIP/mirrlogA
/dev/vgBIPredo/lvmirrBBIP
                   15695872 6164965 8935232   41% /oracle/BIP/mirrlogB
/dev/vgsanBIP/lvoraarchBIP
                   315916288 119782273 183875691   39% /oracle/BIP/oraarch

I could also create four VGs, one for each redo log directory (vgBIPorigA, vgBIPorigB, vgBIPmirA & vgBIPmirB), with a RAID 1 SAN disk for each VG. I think that could help too.

Our HP-UX system:

hostname # uname -a
HP-UX hostname B.11.31 U ia64 3650129742 unlimited-user license

By the way, Oracle version: 10.2.0.4 with patch bundle installed.

The current size of our redo buffer (the configured value is 14473216):

SQL> show parameter log_buffer;

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
log_buffer                           integer     30976000

And the database parameters (following SAP recommendations):

*._db_block_numa=1
*._enable_NUMA_optimization=FALSE
*._fix_control='5705630:ON','5765456:3','6221403:ON','6329318:ON','6430500:ON','6440977:ON','6626018:ON','6670551:ON','6972291:ON','7325597:ON','7692248:ON','5099019:ON','6399597:ON','7891471:ON','9495669:ON'
*._optim_peek_user_binds=FALSE
*._optimizer_mjc_enabled=FALSE
*._second_spare_parameter=1
*._sort_elimination_cost_ratio=10
*.background_dump_dest='/oracle/BIP/saptrace/background'
*.compatible='10.2.0'
*.control_file_record_keep_time=30
*.control_files='/oracle/BIP/origlogA/cntrl/cntlrBIP.dbf','/oracle/BIP/origlogB/cntrl/cntrlBIP.dbf','/oracle/BIP/sapdata1/cntrl/cntrlBIP.dbf'
*.core_dump_dest='/oracle/BIP/saptrace/background'
*.db_block_size=8192
*.db_cache_size=13958643712
*.db_files=500
*.db_name='BIP'
*.event='10027 trace name context forever, level 1','10028 trace name context forever, level 1','10142 trace name context forever, level 1','10183 trace name context forever, level 1','10191 trace name context forever, level 1','10411 trace name context forever, level 1','10629 trace name context forever, level 32','14532 trace name context forever, level 1','38068 trace name context forever, level 100','38085 trace name context forever, level 1','38087 trace name context forever, level 1','44951 trace name context forever, level 1024','10753 trace name context forever, level 2'
*.filesystemio_options='setall'
*.hpux_sched_noage=178
*.log_archive_dest_1='LOCATION=/oracle/BIP/oraarch/BIParch'
*.log_archive_format='%t_%s_%r.dbf'
*.log_buffer=14473216
*.log_checkpoints_to_alert=TRUE
*.max_dump_file_size='20000'
*.open_cursors=2000
*.optimizer_dynamic_sampling=6
*.parallel_execution_message_size=16384
*.parallel_max_servers=40
*.parallel_threads_per_cpu=1
*.pga_aggregate_target=3758096384
*.processes=500
*.query_rewrite_enabled='FALSE'
*.recyclebin='off'
*.remote_os_authent=TRUE
*.replication_dependency_tracking=FALSE
*.sessions=1000
*.shared_pool_size=4294967296
*.star_transformation_enabled='TRUE'
*.undo_management='AUTO'
*.undo_retention=900
*.undo_tablespace='PSAPUNDO'
*.user_dump_dest='/oracle/BIP/saptrace/usertrace'

former_member204746
Active Contributor
0 Kudos

Bonjour!

If this is OK with you, you can ask your BW functional team not to generate logging when adding/changing/deleting indexes.

This will reduce the quantity of redo logs. The downside is that if you restore from a backup and apply offline redo logs,

all affected indexes will be unusable and you will need to rebuild each of them.
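For reference, a quick way to see which indexes currently carry the NOLOGGING attribute is a dictionary query along these lines (just a sketch; the SAPSR3 owner is taken from this thread, and for partitioned indexes the attribute is tracked per partition in DBA_IND_PARTITIONS):

SELECT owner, index_name, logging
  FROM dba_indexes
 WHERE owner = 'SAPSR3'
   AND logging = 'NO';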

Former Member
0 Kudos

Hello,

We chose to start the drop-index job before the loads and to rebuild all the indexes after them (RSPROCESS option). The problem is that the NOLOGGING option could lead to problems during sandbox refreshes.

But you are right, this could really help lower the amount of data written to the redo logs... Each night, the cumulative runtime of BI_PROCESS_DROPINDEX & BI_PROCESS_INDEX is around 18 hours...

Thank you.

lbreddemann
Active Contributor
0 Kudos

> We chose to start the drop-index job before the loads and to rebuild all the indexes after them (RSPROCESS option). The problem is that the NOLOGGING option could lead to problems during sandbox refreshes.

Hmm... well, this is only true if you don't adapt your system copy procedures to cover this.

Since the NOLOGGING corruption only occurs when indexes would have to be re-created by recovering the log entries, for system copies it's a very easy thing to just take an incremental backup of the primary machine and recover it to the sandbox.

By doing so, you won't get any NOLOGGING corruptions.

This is of course not an option when you want to run a standby database...
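To illustrate the approach, the sandbox copy can be rolled forward with an incremental backup instead of archived redo logs, roughly like this in RMAN (a sketch only; it assumes the sandbox was restored from an earlier full/level 0 backup of the source system):

RMAN> BACKUP INCREMENTAL LEVEL 1 DATABASE;   -- on the source system
RMAN> RECOVER DATABASE NOREDO;               -- on the sandbox: apply the incremental, skip archived logs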

regards,

Lars

Former Member
0 Kudos

Hello Lars,

We don't run a standby database, so the NOLOGGING option could indeed be used, but I would like to do everything I can without impacting validated procedures, etc.

I'll keep this solution in mind

Thank you.

volker_borowski2
Active Contributor
0 Kudos

Hi,

well, V$LOG and V$LOGFILE actually do not say anything about the sequence in which the logs actually switch.

You need to check the alert log for that.

As I said in the other thread, Oracle picks the next available group on purpose, so this might not be static.

So if you create, for example, a new group, the next switch will jump into it, no matter which group you are currently in.

For the archivers the parameter is log_archive_max_processes (it can be changed with ALTER SYSTEM).
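For example, the archiver count can be raised online like this (the value 4 is only an illustration):

ALTER SYSTEM SET log_archive_max_processes = 4;

And the actual switch sequence can be reconstructed from the "Thread 1 advanced to log sequence ..." / "Current log#" lines in the alert log, or approximately from V$LOG_HISTORY:

SELECT sequence#, to_char(first_time, 'DD.MM.YYYY HH24:MI:SS') first_time
  FROM v$log_history
 ORDER BY sequence#;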

Best regards

Volker

Former Member
0 Kudos

Hello Volker,

Indeed, I made a copy/paste error

SQL> show parameter db_writer_processes;

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
db_writer_processes                  integer     2
SQL> show parameter '_lgwr_io_slaves';
SQL> show parameter log_archive_max_processes;

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
log_archive_max_processes            integer     2

About the redo log order, you can find it using V$LOG and V$LOGFILE! As you can see in the following output, I sort the result by sequence number; this is the order in which the redo files have been used up to now (ending with the current log):

SQL> SELECT V$LOG.STATUS, V$LOG.GROUP#, V$LOG.SEQUENCE#, V$LOGFILE.MEMBER FROM V$LOG, V$LOGFILE WHERE V$LOG.GROUP# = V$LOGFILE.GROUP# ORDER BY V$LOG.SEQUENCE# ASC;

STATUS               GROUP#  SEQUENCE# MEMBER
---------------- ---------- ---------- --------------------------------------------------
INACTIVE                  1     137505 /oracle/BIP/origlogA/log_g1m1.dbf
INACTIVE                  1     137505 /oracle/BIP/mirrlogA/log_g1m2.dbf
INACTIVE                  2     137506 /oracle/BIP/origlogB/log_g2m1.dbf
INACTIVE                  2     137506 /oracle/BIP/mirrlogB/log_g2m2.dbf
INACTIVE                  3     137507 /oracle/BIP/origlogA/log_g3m1.dbf
INACTIVE                  3     137507 /oracle/BIP/mirrlogA/log_g3m2.dbf
INACTIVE                  4     137508 /oracle/BIP/origlogB/log_g4m1.dbf
INACTIVE                  4     137508 /oracle/BIP/mirrlogB/log_g4m2.dbf
INACTIVE                  5     137509 /oracle/BIP/origlogA/log_g5m1.dbf
INACTIVE                  5     137509 /oracle/BIP/mirrlogA/log_g5m2.dbf
INACTIVE                  6     137510 /oracle/BIP/origlogB/log_g6m1.dbf
INACTIVE                  6     137510 /oracle/BIP/mirrlogB/log_g6m2.dbf
INACTIVE                  7     137511 /oracle/BIP/origlogA/log_g7m1.dbf
INACTIVE                  7     137511 /oracle/BIP/mirrlogA/log_g7m2.dbf
INACTIVE                  8     137512 /oracle/BIP/origlogB/log_g8m1.dbf
INACTIVE                  8     137512 /oracle/BIP/mirrlogB/log_g8m2.dbf
INACTIVE                  9     137513 /oracle/BIP/origlogA/log_g9m1.dbf
INACTIVE                  9     137513 /oracle/BIP/mirrlogA/log_g9m2.dbf
INACTIVE                 10     137514 /oracle/BIP/origlogB/log_g10m1.dbf
INACTIVE                 10     137514 /oracle/BIP/mirrlogB/log_g10m2.dbf
ACTIVE                   11     137515 /oracle/BIP/origlogA/log_g11m1.dbf
ACTIVE                   11     137515 /oracle/BIP/mirrlogA/log_g11m2.dbf
CURRENT                  12     137516 /oracle/BIP/origlogB/log_g12m1.dbf
CURRENT                  12     137516 /oracle/BIP/mirrlogB/log_g12m2.dbf

24 rows selected.

So, the current log group is the 12th, the 11th was used just before it, the 10th before that, etc.

You can use the FIRST_CHANGE# column to check that too.

Former Member
0 Kudos

Hello Lars, I think I was wrong, please accept my apologies. The NOLOGGING option is set by default by the BI_PROCESS_INDEX step using RSPROCESS:

SQL: 22.09.2010 00:28:04 GSK_BATCH
 CREATE BITMAP INDEX "SAPSR3"."/BIC/FZCO_C02~010"
ON "/BIC/FZCO_C02" ( "KEY_ZCO_C02P" ) PCTFREE 10
INITRANS               2 MAXTRANS 255 STORAGE (
INITIAL 16 K NEXT 2560 K MINEXTENTS 1 MAXEXTENTS
UNLIMITED PCTINCREASE 0 FREELISTS 4 FREELIST
GROUPS 1 BUFFER_POOL DEFAULT ) LOCAL PARALLEL
TABLESPACE "PSAPSR3" NOLOGGING COMPUTE STATISTIC
SQL-END: 22.09.2010 00:28:12 00:00:08

There is no LOGGING parameter set in RSADMIN.

I found an SAP note to disable the NOLOGGING option during index rebuilds, but I didn't find anything to configure to add NOLOGGING to other processes.

Our drop-index and rebuild processes are still very long (several hours). I hope that the PGA & SGA size increases will help.

lbreddemann
Active Contributor
0 Kudos

> Hello Lars, I think I was wrong, please accept my apologies. The NOLOGGING option is set by default by the BI_PROCESS_INDEX step using RSPROCESS:

>

> There is no LOGGING parameter set in RSADMIN.

Right

> I found an SAP note to disable the NOLOGGING option during index rebuilds, but I didn't find anything to configure to add NOLOGGING to other processes.

Well, the reason for that is: you really don't want this.

Think about it: if you use NOLOGGING for other processes as well, then you cannot recover without the loss of data.

If it's just used for indexes, this is not a problem, but if you use it for data loading as well - then data protection is out of the window.

> Our drop-index and rebuild processes are still very long (several hours). I hope that the PGA & SGA size increases will help.

Actually I'm a bit surprised not to see any wait time analysis here.

What makes you think that the REDO generation was the major contributor to the index rebuild runtime?

Why do you now believe that more SGA and PGA would change anything about it?

Why don't you check the wait events that occur for the index rebuild session?

Maybe the I/O is bottlenecking...

Maybe the buffer cache should be larger...

Until now, it's more or less just guesswork.
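As an illustration of that kind of check, the waits of the rebuild session can be looked at while it runs, for example with a query like this (a sketch; the session's SID has to be looked up in V$SESSION first, and :rebuild_sid is just a placeholder):

SELECT event, total_waits, time_waited
  FROM v$session_event
 WHERE sid = :rebuild_sid
 ORDER BY time_waited DESC;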

regards,

Lars

Former Member
0 Kudos

Hello,

Indeed, I don't want data loss during recovery. When I mentioned other processes I was thinking of the index drops, since the LOGGING option in RSADMIN only covers the NOLOGGING during the rebuild.

I knew that there was a redo log problem and I wanted to solve it first. The situation on that side is really better now, and I am starting investigations with AWR reports. Tuning (or at least improving poor performance) is not really easy; I had to start with something 'simple' on this BI instance.

About the wait events, I already checked them during the loads and the index drop/rebuild; the problem is that this takes all night, so it's quite difficult to trace index-specific problems.

For example, during a very long drop index, I can find that the main wait event is data block.

21.09.2010 23:10:05 SQL: 21.09.2010 23:10:05 GSK_BATCH
21.09.2010 23:10:05 DROP INDEX "/BIC/FZPM_C04~090"
22.09.2010 01:27:29 SQL-END: 22.09.2010 01:27:29 02:17:24

Buffer Wait Statistics

    * ordered by wait time desc, waits desc 

Class               Waits     Total Wait Time (s)   Avg Time (ms)
data block          74,668    5,223                 70
undo header         41,695    1,818                 44
file header block   500       482                   963
1st level bmb       67,071    285                   4
segment header      4,705     86                    18
undo block          980       24                    25
2nd level bmb       1,032     11                    11

lbreddemann
Active Contributor
0 Kudos

> I knew that there was a redo log problem and I wanted to solve it first. The situation on that side is really better now, and I am starting investigations with AWR reports. Tuning (or at least improving poor performance) is not really easy; I had to start with something 'simple' on this BI instance.

That's what most people try to do.

Of course every solved issue is a good one, but it distracts from your original problem.

> About the wait events, I already checked them during the loads and the index drop/rebuild; the problem is that this takes all night, so it's quite difficult to trace index-specific problems.

AWR is your friend with that.

> For example, during a very long drop index, I can find that the main wait event is data block.

>

>

> 21.09.2010 23:10:05 SQL: 21.09.2010 23:10:05 GSK_BATCH
> 21.09.2010 23:10:05 DROP INDEX "/BIC/FZPM_C04~090"
> 22.09.2010 01:27:29 SQL-END: 22.09.2010 01:27:29 02:17:24

>

>

> Buffer Wait Statistics
> 
>     * ordered by wait time desc, waits desc 
> 
> Class	Waits	Total Wait Time (s)	Avg Time (ms)
> data block 	74,668 	5,223 	70
> undo header 	41,695 	1,818 	44
> file header block 	500 	482 	963
> 1st level bmb 	67,071 	285 	4
> segment header 	4,705 	86 	18
> undo block 	980 	24 	25
> 2nd level bmb 	1,032 	11 	11

But there you already do have very good hints:

-> avg wait time for a data block 70 ms ??? This is ridiculous.

Should be something around 10ms on avg.

So I'd focus on I/O performance of this system if this was my tuning task.

regards,

Lars

Answers (3)

Former Member
0 Kudos

Now I am trying to tune the system using the (very useful) AWR reports. I generate a report for each day covering 00:00 to 08:00 to monitor the system load during the BI loads.

SQL> @$ORACLE_HOME/rdbms/admin/awrsqrpt.sql
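As a side note, awrsqrpt.sql is the report for a single SQL statement; for an instance-wide report over a snapshot interval such as 00:00-08:00, the usual script is awrrpt.sql:

SQL> @?/rdbms/admin/awrrpt.sql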

Following the first reports, I have already increased pga_aggregate_target to 7 GB and shared_pool_size to 7 GB. Those changes will be active next week. In the meantime I'll continue to check the AWR and redo statistics.

Next week this instance will be migrated to a new HSV8000 with a complete 8 Gb fibre infrastructure, and we plan to add an SSD group to increase I/O performance for the 4 future LUNs dedicated to the 4 redo log directories. This should help.

In the meantime, I found in the AWR report that the buffer waits are mainly on access to pure BI data blocks. The SGA/PGA size increase will help, but since this is an I/O bottleneck, I don't think we'll be able to make really significant changes...

Buffer Wait Statistics

    * ordered by wait time desc, waits desc 

Class			Waits		Total Wait Time (s)		Avg Time (ms)
data block 		199,779 	9,510 					48
undo header 		57,695 		3,409 					59
file header block 	3,151 		3,013 					956
1st level bmb 		80,227 		322 					4
segment header 		6,609 		147 					22
undo block 		3,475 		36 					10
2nd level bmb 		1,796 		15 					8
bitmap index block 	5 		0 					2

Segments by Buffer Busy Waits

    * % of Capture shows % of Buffer Busy Waits for each top segment compared
    * with total Buffer Busy Waits for all segments captured by the Snapshot 

Owner	Tablespace Name	Object Name		Subobject Name	Obj. Type		Buffer Busy Waits	% of Capture
SAPSR3 	PSAPSR3 	/BIC/B0002149000 	0000000147 	TABLE PARTITION 	22,629 			15.86
SAPSR3 	PSAPSR3 	/BIC/AZPM_O01140 	  		TABLE 			19,882 			13.93
SAPSR3 	PSAPSR3 	/BIC/AZPM_O02040 	  		TABLE 			10,809 			7.57
SAPSR3 	PSAPSR3 	NRIV 	  				TABLE 			10,586 			7.42
SAPSR3 	PSAPSR3 	/BIC/B0001757000 	0000000002 	TABLE PARTITION 	4,570 			3.20

I'll continue to keep this thread updated

Former Member
0 Kudos

Hello,

So I performed some changes on the system last weekend.

As there was a leveling operation pending on the EVA, the storage team was not able to create the new LUNs for me to split the redo log volume group and to add a new volume for the archived redo logs.

So, as a first step, I removed the sga_max_size parameter, increased open_cursors to the maximum (2000) and increased db_cache_size to 15 GB.
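For the record, these changes amount to spfile updates of roughly the following form (a sketch only; they take effect at the next restart):

ALTER SYSTEM RESET sga_max_size SCOPE=SPFILE SID='*';
ALTER SYSTEM SET open_cursors = 2000 SCOPE=SPFILE;
ALTER SYSTEM SET db_cache_size = 15G SCOPE=SPFILE;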

Then I completely rebuilt the redo log architecture. Each log is now 2 GB and there are 12 groups of 2 members each:

SQL> SELECT * FROM V$LOG;

    GROUP#    THREAD#  SEQUENCE#      BYTES    MEMBERS ARCHIVED   STATUS              FIRST_CHANGE# FIRST_TIME
---------- ---------- ---------- ---------- ---------- ---------- ---------------- ---------------- ---------------
         1          1     137349 2147483648          2 YES        INACTIVE              17152070527 21-SEP-10
         2          1     137350 2147483648          2 YES        INACTIVE              17152209517 21-SEP-10
         3          1     137351 2147483648          2 YES        INACTIVE              17152423138 21-SEP-10
         4          1     137352 2147483648          2 YES        INACTIVE              17152633692 21-SEP-10
         5          1     137353 2147483648          2 YES        INACTIVE              17152784204 21-SEP-10
         6          1     137354 2147483648          2 YES        INACTIVE              17152934116 21-SEP-10
         7          1     137355 2147483648          2 YES        INACTIVE              17153072076 21-SEP-10
         8          1     137356 2147483648          2 YES        INACTIVE              17153173517 21-SEP-10
         9          1     137357 2147483648          2 YES        INACTIVE              17153295749 21-SEP-10
        10          1     137358 2147483648          2 NO         CURRENT               17153473803 21-SEP-10
        11          1     137347 2147483648          2 YES        INACTIVE              17151772359 21-SEP-10
        12          1     137348 2147483648          2 YES        INACTIVE              17151933507 21-SEP-10

12 rows selected.
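For reference, the rebuild itself boils down to statements of the following form, repeated per group (just a sketch; the member paths for group 1 are the ones visible in the listing below, and an old group can only be dropped once it is INACTIVE and archived, after which its files are removed at OS level):

ALTER DATABASE ADD LOGFILE GROUP 1
  ('/oracle/BIP/origlogA/log_g1m1.dbf',
   '/oracle/BIP/mirrlogA/log_g1m2.dbf') SIZE 2048M;

ALTER DATABASE DROP LOGFILE GROUP <old group number>;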

SQL> SELECT V$LOG.STATUS, V$LOG.GROUP#, V$LOG.SEQUENCE#, V$LOGFILE.MEMBER FROM V$LOG, V$LOGFILE WHERE V$LOG.GROUP# = V$LOGFILE.GROUP# ORDER BY V$LOG.SEQUENCE# ASC;

STATUS               GROUP#  SEQUENCE# MEMBER
---------------- ---------- ---------- --------------------------------------------------
INACTIVE                 11     137347 /oracle/BIP/origlogA/log_g11m1.dbf
INACTIVE                 11     137347 /oracle/BIP/mirrlogA/log_g11m2.dbf
INACTIVE                 12     137348 /oracle/BIP/mirrlogB/log_g12m2.dbf
INACTIVE                 12     137348 /oracle/BIP/origlogB/log_g12m1.dbf
INACTIVE                  1     137349 /oracle/BIP/origlogA/log_g1m1.dbf
INACTIVE                  1     137349 /oracle/BIP/mirrlogA/log_g1m2.dbf
INACTIVE                  2     137350 /oracle/BIP/origlogB/log_g2m1.dbf
INACTIVE                  2     137350 /oracle/BIP/mirrlogB/log_g2m2.dbf
INACTIVE                  3     137351 /oracle/BIP/origlogA/log_g3m1.dbf
INACTIVE                  3     137351 /oracle/BIP/mirrlogA/log_g3m2.dbf
INACTIVE                  4     137352 /oracle/BIP/origlogB/log_g4m1.dbf
INACTIVE                  4     137352 /oracle/BIP/mirrlogB/log_g4m2.dbf
INACTIVE                  5     137353 /oracle/BIP/origlogA/log_g5m1.dbf
INACTIVE                  5     137353 /oracle/BIP/mirrlogA/log_g5m2.dbf
INACTIVE                  6     137354 /oracle/BIP/origlogB/log_g6m1.dbf
INACTIVE                  6     137354 /oracle/BIP/mirrlogB/log_g6m2.dbf
INACTIVE                  7     137355 /oracle/BIP/mirrlogA/log_g7m2.dbf
INACTIVE                  7     137355 /oracle/BIP/origlogA/log_g7m1.dbf
INACTIVE                  8     137356 /oracle/BIP/origlogB/log_g8m1.dbf
INACTIVE                  8     137356 /oracle/BIP/mirrlogB/log_g8m2.dbf
INACTIVE                  9     137357 /oracle/BIP/origlogA/log_g9m1.dbf
INACTIVE                  9     137357 /oracle/BIP/mirrlogA/log_g9m2.dbf
CURRENT                  10     137358 /oracle/BIP/origlogB/log_g10m1.dbf
CURRENT                  10     137358 /oracle/BIP/mirrlogB/log_g10m2.dbf

24 rows selected.

As you can see, the redo log order is now correct ({orig|mirr}logA alternates with {orig|mirr}logB).

I checked the alert log for 'Checkpoint not complete' messages and found 10 of them last week. I hope that the increase in the group count and redo log size will help.
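A simple way to keep counting these (assuming the default alert_BIP.log name, with the location taken from background_dump_dest above):

hostname # grep -c "Checkpoint not complete" /oracle/BIP/saptrace/background/alert_BIP.log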

volker_borowski2
Active Contributor
0 Kudos

Yippee!!

Someone else with such a big beast!

I just recently posted about mine in another thread:

You did not mention the number of DB-WR and archiver procs you have configured.

Do you have any waits in the alert log ?

Like

... checkpoint not complete ?

--> More groups, more filesystems, striping, more db_wr

... private strand flush not complete

--> more db_wr

... all logs need archiving

--> more archiver procs ?

I do not know if it might help in your case, but the log_buffer with 14M looks somewhat standard.

It might help to double it, although the standard recommendation is to delete the parameter and let Oracle sort it out.
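If you do try doubling it, log_buffer is a static parameter, so it would be something along these lines followed by a restart (the value is simply twice the current 14473216):

ALTER SYSTEM SET log_buffer = 28946432 SCOPE=SPFILE;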

Volker

Oh, just seen you are on HP-UX!

You might need to cross-check whether multiple DB writers are allowed in this case; I don't know for HP-UX.

Multiple archivers should be no problem, but I recommend cross-checking that as well.

Edited by: Volker Borowski on Sep 17, 2010 3:26 PM

Former Member
0 Kudos

Hello Volker,

About your questions:

For DBWn, LGWR and ARCn, we use the default:

SQL> show parameter db_writer_processes;

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
db_writer_processes                  integer     2
SQL> show parameter '_lgwr_io_slaves';
SQL> show parameter db_writer_processes;

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
db_writer_processes                  integer     2

We had 13 'Checkpoint not complete' messages in August and no 'Private strand flush not complete'. I think we won't see those errors anymore since we increased the group count and the size of the redo logs.

An initial log_buffer size is set in the spfile, but Oracle already adjusts it upwards on its own:

hostname:orabip 27> strings spfileBIP.ora | egrep -i log_buffer
*.log_buffer=14473216

SQL> show parameter log_buffer;

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
log_buffer                           integer     30976000

Multiple DBWn processes are allowed on HP-UX; we follow the SAP-recommended parameter notes (automated parameter checks using a script, etc.).