on 04-29-2009 11:09 AM
Dear All
I am facing a problem where Oracle archive logs (oraarch) are generated very frequently, roughly one 50 MB file in less than a minute, so the file system holding oraarch fills up quickly.
I have monitored the system and there is no batch job, upload job, or data-load job running. Even at idle times late at night the archival happens at the same frequency, so over a period of 6-7 hours I get about 50 GB of archive logs in total.
I have Oracle 10g patch set 10.2.0.4.0, with all 39 patches and 2 CPU patches applied via OPatch and MOPatch, and all Oracle 10g parameters are set as per SAP Note 830576.
Can you please let me know what can be the issue?
Thanks & Best Regards
Rahul
Maybe your DB is in backup mode, which generates more redo logs than normal. You can end it with:
sqlplus / as sysdba
alter database end backup;
exit;
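If you want to verify first whether backup mode is actually the cause, a query against v$backup shows which datafiles are still in active backup mode. A minimal sketch, to be run as SYSDBA:

```sql
-- List datafiles whose tablespaces are currently in backup mode.
-- If this returns rows, "ALTER DATABASE END BACKUP" will clear them.
SELECT b.file#, d.name, b.status, b.time
  FROM v$backup b
  JOIN v$datafile d ON d.file# = b.file#
 WHERE b.status = 'ACTIVE';
```

If the query returns no rows, backup mode is not the explanation and you can look at the other suggestions in this thread.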
You probably have statistics being populated and AWR running. Oracle 10g has system jobs that are scheduled to run frequently (e.g. every hour, or every night).
column repeat_interval format a70
select job_name, repeat_interval, enabled from dba_scheduler_jobs;
column what format a50 word_wrapped
select job, last_date, next_date, broken, what from dba_jobs;
50 MB is probably too small for your environment. You'll probably want to at least double that.
And if your oraarch file system is filling up so frequently, you should consider increasing its size. It sounds like your system is much busier than you expected.
Run this command to get an idea of what Oracle recommends for your redo log file sizes...
SELECT TARGET_MTTR, ESTIMATED_MTTR, WRITES_MTTR, WRITES_LOGFILE_SIZE, OPTIMAL_LOGFILE_SIZE FROM V$INSTANCE_RECOVERY;
If that doesn't work, it's probably because you're not using the initialization parameter fast_start_mttr_target, which I gather SAP advises against setting. I'm an Oracle DBA but an SAP newbie, so I'm still learning which parameters SAP wants me to set and why.
Hope that helps..
Rich
Hello Rahul,
there is an easy way to find out what causes the massive redo log volume. The "tool" you can use is called Oracle LogMiner; with it you can look inside the online redo log and offline (= archived) redo log files.
SAP Note 1128990 explains how to handle this PL/SQL package, and you can also check the presentation by Martin Frauendorfer:
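As a rough sketch of how the DBMS_LOGMNR package is driven (the archive log path below is only a placeholder; see the SAP note for the exact SAP-specific procedure):

```sql
-- Register one archived redo log and start mining it.
-- The file path is a placeholder, not a real file name.
BEGIN
  DBMS_LOGMNR.ADD_LOGFILE(
    logfilename => '/oracle/SID/oraarch/SIDarch1_12345_123456789.dbf',
    options     => DBMS_LOGMNR.NEW);
  DBMS_LOGMNR.START_LOGMNR(
    options => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG);
END;
/

-- Which segments generate the most redo entries in this archive file?
SELECT seg_owner, seg_name, COUNT(*) AS redo_entries
  FROM v$logmnr_contents
 GROUP BY seg_owner, seg_name
 ORDER BY redo_entries DESC;

-- Release LogMiner resources when done.
EXEC DBMS_LOGMNR.END_LOGMNR;
```

The GROUP BY on seg_owner/seg_name usually points straight at the table (or index) responsible for the unexpected redo volume.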
Regards
Stefan
Hi Rahul,
Can you check in DB12 how frequently a log switch occurs? You can increase the log switch interval by increasing the size of the redo log files, and you don't need any downtime to do so. Normally in 10g the redo log size is 90 MB by default. Have a look at the SAP notes below.
309526 Enlarging redo log files
584548 Unusually high number of redo logs
863417 FAQ: Database Archive modes and redo logs
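If DB12 is not at hand, the same log switch frequency can be read directly from v$log_history; a small sketch:

```sql
-- Number of log switches per hour over the last 24 hours.
-- Many switches per hour suggest the redo logs are undersized.
SELECT TO_CHAR(first_time, 'YYYY-MM-DD HH24') AS hour,
       COUNT(*)                               AS log_switches
  FROM v$log_history
 WHERE first_time > SYSDATE - 1
 GROUP BY TO_CHAR(first_time, 'YYYY-MM-DD HH24')
 ORDER BY hour;
```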
Hope this helps you.
Thanks
Sushil
Hi,
Please let us know which system this is (R/3, BI, SRM). If it is a BI system, check whether any data loads are running.
If it is an R/3 system, check whether any user is updating data in a table.
Regards,
Phani.