
checkpoint not completed

Former Member

Hi All

In the production system I frequently see the following warning in DB13:

BR0976W Database message alert - level: WARNING, line: 5447966, time: 2008-11-11 06.01.03, message:

Checkpoint not complete

BR0976W Database message alert - level: WARNING, line: 5447985, time: 2008-08-18 06.03.14, message:

Checkpoint not complete.

We have 8 redo log groups of 250 MB each:

GROUP#      BYTES
------  ---------
    21  262144000
    22  262144000
    23  262144000
    24  262144000
    25  262144000
    26  262144000
    27  262144000
    28  262144000

db_writer_processes= 3

fast_start_mttr_target= 0
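For reference, a quick sketch of how values like these can be checked from SQL*Plus (GROUP# and BYTES come from the v$log view, the parameters from SHOW PARAMETER):

-- list the online redo log groups and their sizes
select group#, bytes from v$log order by group#;
show parameter db_writer_processes
show parameter fast_start_mttr_target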

System information:

SAP NetWeaver BI 7.0

Oracle 10.2.0.2.0

HP-UX

I have gone through a couple of notes but still couldn't find a solution. Please help me resolve this issue.

Thanks in advance.

Accepted Solutions (1)


Former Member

The message 'Checkpoint not complete' indicates that the database had to wait for the ongoing checkpoint to be completed before reusing the next online redo log file. This situation causes the database to freeze up until the checkpoint completes.
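A quick way to see this is to look at the status of the online redo log groups in v$log (a standard Oracle view): if most groups are still ACTIVE when LGWR needs to switch into the next one, the checkpoint has not kept up. A minimal sketch:

-- ACTIVE means the group is still needed for crash recovery (checkpoint pending)
select group#, bytes/1024/1024 as mb, members, status
from v$log
order by group#;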

Follow SAP Note 79341 - Checkpoint not complete.

cheers,

-Sunil

Answers (1)


former_member204746

1. Create additional redo log groups (I would go to 12).

2. Increase the size of each group to 400 MB (see the sketch below).
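A rough sketch of the Oracle commands for this (the file paths and group numbers below are only placeholders; on an SAP system, follow the procedure from the relevant SAP notes). Note that existing groups cannot be resized in place, so you add larger groups and drop the old ones once they become INACTIVE:

-- add a new, larger group (paths and group number are placeholders)
alter database add logfile group 29
  ('/oracle/SID/origlogA/log_g29_m1.dbf',
   '/oracle/SID/mirrlogA/log_g29_m2.dbf') size 400M;

-- an old group can be dropped once its status is INACTIVE
alter database drop logfile group 21;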

Former Member

Hi Eric

Thanks for the quick reply

How can I find out how fast the logs are switching?

former_member204746

Check the alert_<SID>.log file in the saptrace/background directory; every log switch is recorded there.

fidel_vales

Hi,

I use the following to take a look at how many log switches happen per hour.

With it you can see your peak times, and you can figure out whether you need to increase the log size and by how much:

-- counts redo log switches per day and hour; substr(first_time,10,2) picks
-- the hour out of the implicit date-to-string conversion, so the result
-- depends on the session's NLS_DATE_FORMAT
set pagesize 1000
set linesize 160

select
  to_char(first_time,'YYYY-MM-DD') day,
  to_char(first_time,'DY') weekday,
  to_char(sum(decode(substr(first_time,10,2),'00',1,0)),'99') "00",
  to_char(sum(decode(substr(first_time,10,2),'01',1,0)),'99') "01",
  to_char(sum(decode(substr(first_time,10,2),'02',1,0)),'99') "02",
  to_char(sum(decode(substr(first_time,10,2),'03',1,0)),'99') "03",
  to_char(sum(decode(substr(first_time,10,2),'04',1,0)),'99') "04",
  to_char(sum(decode(substr(first_time,10,2),'05',1,0)),'99') "05",
  to_char(sum(decode(substr(first_time,10,2),'06',1,0)),'99') "06",
  to_char(sum(decode(substr(first_time,10,2),'07',1,0)),'99') "07",
  to_char(sum(decode(substr(first_time,10,2),'08',1,0)),'99') "08",
  to_char(sum(decode(substr(first_time,10,2),'09',1,0)),'99') "09",
  to_char(sum(decode(substr(first_time,10,2),'10',1,0)),'99') "10",
  to_char(sum(decode(substr(first_time,10,2),'11',1,0)),'99') "11",
  to_char(sum(decode(substr(first_time,10,2),'12',1,0)),'99') "12",
  to_char(sum(decode(substr(first_time,10,2),'13',1,0)),'99') "13",
  to_char(sum(decode(substr(first_time,10,2),'14',1,0)),'99') "14",
  to_char(sum(decode(substr(first_time,10,2),'15',1,0)),'99') "15",
  to_char(sum(decode(substr(first_time,10,2),'16',1,0)),'99') "16",
  to_char(sum(decode(substr(first_time,10,2),'17',1,0)),'99') "17",
  to_char(sum(decode(substr(first_time,10,2),'18',1,0)),'99') "18",
  to_char(sum(decode(substr(first_time,10,2),'19',1,0)),'99') "19",
  to_char(sum(decode(substr(first_time,10,2),'20',1,0)),'99') "20",
  to_char(sum(decode(substr(first_time,10,2),'21',1,0)),'99') "21",
  to_char(sum(decode(substr(first_time,10,2),'22',1,0)),'99') "22",
  to_char(sum(decode(substr(first_time,10,2),'23',1,0)),'99') "23"
from v$log_history
group by
  to_char(first_time,'YYYY-MM-DD'),
  to_char(first_time,'DY')
order by
  to_char(first_time,'YYYY-MM-DD') desc;

You can play around with it a little.
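For example, a simpler non-pivoted variant of the same idea (a sketch against the same v$log_history view, grouping directly on the hour) would be:

-- log switches per day and hour, one row per hour instead of a pivoted table
select to_char(first_time,'YYYY-MM-DD') day,
       to_char(first_time,'HH24') hr,
       count(*) switches
from v$log_history
group by to_char(first_time,'YYYY-MM-DD'), to_char(first_time,'HH24')
order by 1 desc, 2;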