
SAP BACKUP issue

Former Member
0 Kudos

Dear Experts ,

We are using Data Protector for SAP backups. We are facing an issue: whenever the backup is scheduled through DB13 (online complete DB backup + redo logs) with the SD option, the backup completes successfully, but the archive logs go to multiple tapes. Please find the attached screenshot.

Please suggest a solution.

Accepted Solutions (0)

Answers (3)

0 Kudos

Dear Vijay,

You should do an online consistent backup.

That is the way to get the archive logs backed up on the same tape as the database.

Your command should include the "online_cons" parameter somewhere.

Example: brbackup -u / -p init<SID>.sap -r init<SID>.utl -t online_cons -m all
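When the backup is driven from the profile (as DB13 does) rather than from the command line, the same switch can be set there. A minimal init<SID>.sap fragment, for illustration only (parameter names follow the sample profile pasted later in this thread):

```
# illustrative init<SID>.sap fragment -- not a complete profile
# online_cons also backs up the redo logs written during the backup,
# keeping them with the database files on the same volume
backup_type = online_cons
backup_mode = all
```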

Hope it helps.

Stéphan

former_member188883
Active Contributor
0 Kudos

Hi Vijay,

Please check your init<SID>.sap profile for the parameter

exec_parallel = <no of parallel tapes>


Example:

backup_dev_type = tape

exec_parallel = 4

tape_address = (tape1, tape2, tape3, tape4)

tape_address_rew = (tape1, tape2, tape3, tape4)

Change the value of the parameter exec_parallel to 0 or 1 and test the results.


Hope this helps.


Regards,

Deepak Kori

Former Member
0 Kudos

Hi Deepak ,

Thanks for your response.

The parameter's current value is 0 (exec_parallel = 0).

So the parameter is already set as suggested.

Regards,

Vijay.K

former_member188883
Active Contributor
0 Kudos

Hi Vijay,

What is the database size of your system?

What value is set for the parameter tape_size?

Hope this helps.

Regards,

Deepak Kori

Former Member
0 Kudos

Dear Deepak ,

Thanks for your support.

The current total database size is 1088288.320 MB.

The current parameter value is tape_size = 100G.

Regards,

Vijay.K

former_member188883
Active Contributor
0 Kudos

Hi Vijay,

Based on your information, the existing DB size is about 1 TB, and you have defined tape_size as 100G. With this configuration the backup is automatically distributed across multiple tapes.

Can you test it using tape_size = 1000G?

Also, what is the capacity of the tape media used?

Hope this helps.

Regards,

Deepak Kori
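As a quick arithmetic check of the figures in this thread (a sketch of the volume count only; real usage depends on compression and on how BR*Tools fills each volume):

```python
import math

def tapes_needed(db_size_mb: float, tape_size_gb: float) -> int:
    """Minimum number of volumes for a backup of db_size_mb,
    given a tape_size limit of tape_size_gb per volume."""
    db_size_gb = db_size_mb / 1024.0
    return math.ceil(db_size_gb / tape_size_gb)

# Figures from this thread: ~1,088,288 MB database, tape_size = 100G
print(tapes_needed(1088288.320, 100))   # 11 volumes with tape_size = 100G
print(tapes_needed(1088288.320, 1000))  # 2 volumes even with tape_size = 1000G
```

Note that 1,088,288 MB is a little over 1,000 GB, so even tape_size = 1000G leaves room for a second volume; the physical capacity of the tape media (the follow-up question above) is the other half of the picture.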

Former Member
0 Kudos

Dear Deepak ,

Thanks for the quick response.

The database backup goes to one tape, but the archive logs are written to multiple tapes.

This happens only when we use (online complete DB backup + redo logs).

If we trigger the archive log backup alone, it goes to a single tape.

Note: the remaining systems have the same parameters that we implemented for this system, and they are working fine.
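Since the database part stays on one tape while the archive part splits, the brarchive-specific overrides may be worth a look. A hedged suggestion only: these parameters appear commented out in the profile pasted later in this thread, so brarchive currently inherits the brbackup values.

```
# illustrative init<SID>.sap fragment -- brarchive-specific overrides;
# both default to the corresponding brbackup parameter when unset
tape_size_arch    = 100G        # volume size used by brarchive
tape_address_arch = /dev/nst0   # device used by brarchive
```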

Thanks & Regards,

Vijay.K

former_member188883
Active Contributor
0 Kudos

Hi Vijay,

Please check the HP Data Protector template configuration for the issue with the full DB + log backup.

Regards,

Deepak Kori

Former Member
0 Kudos

Dear Deepak ,

If we trigger (online complete DB backup + redo logs) from DP, it uses only one tape.

The issue occurs whenever we trigger it from DB13: the archive logs are stored on different tapes.

Regards,

VIjay.K

former_member188883
Active Contributor
0 Kudos

Hi Vijay,

Please attach the init<SID>.sap file as well as the util file used for the DP configuration.

Regards,

Deepak Kori

Former Member
0 Kudos

Dear Deepak,

initSID.sap

# @(#) $Id: //bas/720_REL/src/ccm/rsbr/initLIN.sap#11 $ SAP


########################################################################


#                                                                      #


# SAP BR*Tools sample profile.                                         #


# The parameter syntax is the same as for init.ora parameters.         #


# Enclose parameter values which consist of more than one symbol in    #


# double quotes.                                                       #


# After any symbol, parameter definition can be continued on the next  #


# line.                                                                #


# A parameter value list should be enclosed in parentheses, the list   #


# items should be delimited by commas.                                 #


# There can be any number of white spaces (blanks, tabs and new lines) #


# between symbols in parameter definition.                             #


# Comment lines must start with a hash character.                      #


#                                                                      #


########################################################################


# backup mode [all | all_data | full | incr | sap_dir | ora_dir


# | all_dir | <tablespace_name> | <file_id> | <file_id1>-<file_id2>


# | <generic_path> | (<object_list>)]


# default: all


backup_mode = all


# restore mode [all | all_data | full | incr | incr_only | incr_full


# | incr_all | <tablespace_name> | <file_id> | <file_id1>-<file_id2>


# | <generic_path> | (<object_list>) | partial | non_db


# redirection with '=' is not supported here - use option '-m' instead


# default: all


restore_mode = all


# backup type [offline | offline_force | offline_standby | offline_split


# | offline_mirror | offline_stop | online | online_cons | online_split


# | online_mirror | online_standby | offstby_split | offstby_mirror


# default: offline


backup_type = online


# backup device type


# [tape | tape_auto | tape_box | pipe | pipe_auto | pipe_box | disk


# | disk_copy | disk_standby | stage | stage_copy | stage_standby


# | util_file | util_file_online | util_vol | util_vol_online


# | rman_util | rman_disk | rman_stage | rman_prep]


# default: tape


backup_dev_type = util_file


# backup root directory [<path_name> | (<path_name_list>)]


# default: $SAPDATA_HOME/sapbackup


backup_root_dir = /backup1/sapbackup


# stage root directory [<path_name> | (<path_name_list>)]


# default: value of the backup_root_dir parameter


stage_root_dir = /backup1/sapbackup


# compression flag [no | yes | hardware | only | brtools]


# default: no


compress = no


# compress command


# first $-character is replaced by the source file name


# second $-character is replaced by the target file name


# <target_file_name> = <source_file_name>.Z


# for compress command the -c option must be set


# recommended setting for brbackup -k only run:


# "compress -b 12 -c $ > $"


# no default


compress_cmd = "gzip -c $ > $"


# uncompress command


# first $-character is replaced by the source file name


# second $-character is replaced by the target file name


# <source_file_name> = <target_file_name>.Z


# for uncompress command the -c option must be set


# no default


uncompress_cmd = "gunzip -c $ > $"


# directory for compression [<path_name> | (<path_name_list>)]


# default: value of the backup_root_dir parameter


compress_dir = /oracle/HPR/sapreorg


# brarchive function [save | second_copy | double_save | save_delete


# | second_copy_delete | double_save_delete | copy_save


# | copy_delete_save | delete_saved | delete_copied]


# default: save


archive_function = save


# directory for archive log copies to disk


# default: first value of the backup_root_dir parameter


archive_copy_dir = /backup1/sapbackup


# directory for archive log copies to stage


# default: first value of the stage_root_dir parameter


archive_stage_dir = /backup1/sapbackup


# delete archive logs from duplex destination [only | no | yes | check]


# default: only


# archive_dupl_del = only


# new sapdata home directory for disk_copy | disk_standby


# no default


# new_db_home = /oracle/C11


# stage sapdata home directory for stage_copy | stage_standby


# default: value of the new_db_home parameter


# stage_db_home = /oracle/C11


# original sapdata home directory for split mirror disk backup


# no default


# orig_db_home = /oracle/C11


# remote host name


# no default


# remote_host = <host_name>


# remote user name


# default: current operating system user


# remote_user = <user_name>


# tape copy command [cpio | cpio_gnu | dd | dd_gnu | rman | rman_gnu


# | rman_dd | rman_dd_gnu | brtools | rman_brt]


# default: cpio


tape_copy_cmd = cpio


# disk copy command [copy | copy_gnu | dd | dd_gnu | rman | rman_gnu


# | rman_set | rman_set_gnu | ocopy]


# ocopy - only on Windows


# default: copy


disk_copy_cmd = copy


# stage copy command [rcp | scp | ftp | wcp]


# wcp - only on Windows


# default: rcp


stage_copy_cmd = rcp


# pipe copy command [rsh | ssh]


# default: rsh


pipe_copy_cmd = rsh


# flags for cpio output command


# default: -ovB


cpio_flags = -ovcB


# flags for cpio input command


# default: -iuvB


cpio_in_flags = -iuvcB


# flags for cpio command for copy of directories to disk


# default: -pdcu


# use flags -pdu for gnu tools


cpio_disk_flags = -pdcu


# flags for dd output command


# default: "obs=16k"


# recommended setting:


# Unix:    "obs=nk bs=nk", example: "obs=64k bs=64k"


# Windows: "bs=nk",        example: "bs=64k"


dd_flags = "obs=64k bs=64k"


# flags for dd input command


# default: "ibs=16k"


# recommended setting:


# Unix:    "ibs=nk bs=nk", example: "ibs=64k bs=64k"


# Windows: "bs=nk",        example: "bs=64k"


dd_in_flags = "ibs=64k bs=64k"


# number of members in RMAN save sets [ 1 | 2 | 3 | 4 | tsp | all ]


# default: 1


saveset_members = 1


# additional parameters for RMAN


# following parameters are relevant only for rman_util, rman_disk or


# rman_stage: rman_channels, rman_filesperset, rman_maxsetsize,


# rman_pool, rman_copies, rman_proxy, rman_parms, rman_send


# rman_maxpiecesize can be used to split an incremental backup saveset


# into multiple pieces


# rman_channels defines the number of parallel sbt channel allocations


# rman_filesperset = 0 means:


# one file per save set - for non-incremental backups


# up to 64 files in one save set - for incremental backups


# the others have the same meaning as for native RMAN


# rman_channels = 1


# rman_filesperset = 0


# rman_maxopenfiles = 0


# rman_maxsetsize = 0      # n[K|M|G] in KB (default), in MB or in GB


# rman_maxpiecesize = 0    # n[K|M|G] in KB (default), in MB or in GB


# rman_sectionsize = 0     # n[K|M|G] in KB (default), in MB or in GB


# rman_rate = 0            # n[K|M|G] in KB (default), in MB or in GB


# rman_diskratio = 0


# rman_duration = 0        # <min> - for minimizing disk load


# rman_keep = 0            # <days> - retention time


# rman_pool = 0


# rman_copies = 0 | 1 | 2 | 3 | 4


# rman_proxy = no | yes | only


# rman_parms = "BLKSIZE=65536 ENV=(BACKUP_SERVER=HOSTNAME)"


# rman_send = "'<command>'"


# rman_send = ("channel sbt_1 '<command1>' parms='<parameters1>'",


#              "channel sbt_2 '<command2>' parms='<parameters2>'")


# rman_compress = no | yes


# rman_maxcorrupt = (<dbf_name>|<dbf_id>:<corr_cnt>, ...)


# rman_cross_check = none | archive | arch_force


# remote copy-out command (backup_dev_type = pipe)


# $-character is replaced by current device address


# no default


copy_out_cmd = "dd ibs=8k obs=64k of=$"


# remote copy-in command (backup_dev_type = pipe)


# $-character is replaced by current device address


# no default


copy_in_cmd = "dd ibs=64k obs=8k if=$"


# rewind command


# $-character is replaced by current device address


# no default


# operating system dependent, examples:


# HP-UX:   "mt -f $ rew"


# TRU64:   "mt -f $ rewind"


# AIX:     "tctl -f $ rewind"


# Solaris: "mt -f $ rewind"


# Windows: "mt -f $ rewind"


# Linux:   "mt -f $ rewind"


rewind = "mt -f $ rewind"


# rewind and set offline command


# $-character is replaced by current device address


# default: value of the rewind parameter


# operating system dependent, examples:


# HP-UX:   "mt -f $ offl"


# TRU64:   "mt -f $ offline"


# AIX:     "tctl -f $ offline"


# Solaris: "mt -f $ offline"


# Windows: "mt -f $ offline"


# Linux:   "mt -f $ offline"


rewind_offline = "mt -f $ offline"


# tape positioning command


# first $-character is replaced by current device address


# second $-character is replaced by number of files to be skipped


# no default


# operating system dependent, examples:


# HP-UX:   "mt -f $ fsf $"


# TRU64:   "mt -f $ fsf $"


# AIX:     "tctl -f $ fsf $"


# Solaris: "mt -f $ fsf $"


# Windows: "mt -f $ fsf $"


# Linux:   "mt -f $ fsf $"


tape_pos_cmd = "mt -f $ fsf $"


# mount backup volume command in auto loader / juke box


# used if backup_dev_type = tape_box | pipe_box


# no default


# mount_cmd = "<mount_cmd> $ $ $ [$]"


# dismount backup volume command in auto loader / juke box


# used if backup_dev_type = tape_box | pipe_box


# no default


# dismount_cmd = "<dismount_cmd> $ $ [$]"


# split mirror disks command


# used if backup_type = offline_split | online_split | offline_mirror


# | online_mirror


# no default


# split_cmd = "<split_cmd> [$]"


# resynchronize mirror disks command


# used if backup_type = offline_split | online_split | offline_mirror


# | online_mirror


# no default


# resync_cmd = "<resync_cmd> [$]"


# additional options for SPLITINT interface program


# no default


# split_options = "<split_options>"


# resynchronize after backup flag [no | yes]


# default: no


# split_resync = no


# pre-split command


# no default


# pre_split_cmd = "<pre_split_cmd>"


# post-split command


# no default


# post_split_cmd = "<post_split_cmd>"


# pre-shut command


# no default


# pre_shut_cmd = "<pre_shut_cmd>"


# post-shut command


# no default


# post_shut_cmd = "<post_shut_cmd>"


# pre-archive command


# no default


# pre_arch_cmd = "<pre_arch_cmd> [$]"


# post-archive command


# no default


# post_arch_cmd = "<post_arch_cmd> [$]"


# pre-backup command


# no default


# pre_back_cmd = "<pre_back_cmd> [$]"


# post-backup command


# no default


# post_back_cmd = "<post_back_cmd> [$]"


# volume size in KB = K, MB = M or GB = G (backup device dependent)


# default: 1200M


# recommended values for tape devices without hardware compression:


# 60 m   4 mm  DAT DDS-1 tape:    1200M


# 90 m   4 mm  DAT DDS-1 tape:    1800M


# 120 m  4 mm  DAT DDS-2 tape:    3800M


# 125 m  4 mm  DAT DDS-3 tape:   11000M


# 112 m  8 mm  Video tape:        2000M


# 112 m  8 mm  high density:      4500M


# DLT 2000     10/20 GB:         10000M


# DLT 2000XT   15/30 GB:         15000M


# DLT 4000     20/40 GB:         20000M


# DLT 7000     35/70 GB:         35000M


# recommended values for tape devices with hardware compression:


# 60 m   4 mm  DAT DDS-1 tape:    1000M


# 90 m   4 mm  DAT DDS-1 tape:    1600M


# 120 m  4 mm  DAT DDS-2 tape:    3600M


# 125 m  4 mm  DAT DDS-3 tape:   10000M


# 112 m  8 mm  Video tape:        1800M


# 112 m  8 mm  high density:      4300M


# DLT 2000     10/20 GB:          9000M


# DLT 2000XT   15/30 GB:         14000M


# DLT 4000     20/40 GB:         18000M


# DLT 7000     35/70 GB:         30000M


tape_size = 100G


# volume size in KB = K, MB = M or GB = G used by brarchive


# default: value of the tape_size parameter


# tape_size_arch = 100G


# tape block size in KB for brtools as tape copy command on Windows


# default: 64


# tape_block_size = 64


# rewind and set offline for brtools as tape copy command on Windows


# yes | no


# default: yes


# tape_set_offline = yes


# level of parallel execution


# default: 0 - set to number of backup devices


exec_parallel = 0


# address of backup device without rewind


# [<dev_address> | (<dev_address_list>)]


# no default


# operating system dependent, examples:


# HP-UX:   /dev/rmt/0mn


# TRU64:   /dev/nrmt0h


# AIX:     /dev/rmt0.1


# Solaris: /dev/rmt/0mn


# Windows: /dev/nmt0


# Linux:   /dev/nst0


tape_address = /dev/nst0


# address of backup device without rewind used by brarchive


# default: value of the tape_address parameter


# operating system dependent


# tape_address_arch = /dev/nst0


# address of backup device with rewind


# [<dev_address> | (<dev_address_list>)]


# no default


# operating system dependent, examples:


# HP-UX:   /dev/rmt/0m


# TRU64:   /dev/rmt0h


# AIX:     /dev/rmt0


# Solaris: /dev/rmt/0m


# Windows: /dev/mt0


# Linux:   /dev/st0


tape_address_rew = /dev/st0


# address of backup device with rewind used by brarchive


# default: value of the tape_address_rew parameter


# operating system dependent


# tape_address_rew_arch = /dev/st0


# address of backup device with control for mount/dismount command


# [<dev_address> | (<dev_address_list>)]


# default: value of the tape_address_rew parameter


# operating system dependent


# tape_address_ctl = /dev/...


# address of backup device with control for mount/dismount command


# used by brarchive


# default: value of the tape_address_rew_arch parameter


# operating system dependent


# tape_address_ctl_arch = /dev/...


# volumes for brarchive


# [<volume_name> | (<volume_name_list>) | SCRATCH]


# no default


volume_archive = (HPRA01, HPRA02, HPRA03, HPRA04, HPRA05,


                  HPRA06, HPRA07, HPRA08, HPRA09, HPRA10,


                  HPRA11, HPRA12, HPRA13, HPRA14, HPRA15,


                  HPRA16, HPRA17, HPRA18, HPRA19, HPRA20,


                  HPRA21, HPRA22, HPRA23, HPRA24, HPRA25,


                  HPRA26, HPRA27, HPRA28, HPRA29, HPRA30)


# volumes for brbackup


# [<volume_name> | (<volume_name_list>) | SCRATCH]


# no default


volume_backup = (HPRB01, HPRB02, HPRB03, HPRB04, HPRB05,


                 HPRB06, HPRB07, HPRB08, HPRB09, HPRB10,


                 HPRB11, HPRB12, HPRB13, HPRB14, HPRB15,


                 HPRB16, HPRB17, HPRB18, HPRB19, HPRB20,


                 HPRB21, HPRB22, HPRB23, HPRB24, HPRB25,


                 HPRB26, HPRB27, HPRB28, HPRB29, HPRB30)


# expiration period in days for backup volumes


# default: 30


expir_period = 30


# recommended usages of backup volumes


# default: 100


tape_use_count = 100


# backup utility parameter file


# default: no parameter file


# null - no parameter file


util_par_file = initHPR.utl


# backup utility parameter file for volume backup


# default: no parameter file


# null - no parameter file


# util_vol_par_file = initHPR.vol


# additional options for BACKINT interface program


# no default


# "" - no additional options


# util_options = "<backint_options>"


# additional options for BACKINT volume backup type


# no default


# "" - no additional options


# util_vol_options = "<backint_options>"


# path to directory BACKINT executable will be called from


# default: sap-exe directory


# null - call BACKINT without path


# util_path = <dir>|null


# path to directory BACKINT will be called from for volume backup


# default: sap-exe directory


# null - call BACKINT without path


# util_vol_path = <dir>|null


# disk volume unit for BACKINT volume backup type


# [disk_vol | sap_data | all_data | all_dbf]


# default: sap_data


# util_vol_unit = <unit>


# additional access to files saved by BACKINT volume backup type


# [none | copy | mount | both]


# default: none


# util_vol_access = <access>


# negative file/directory list for BACKINT volume backup type


# [<file_dir_name> | (<file_dir_list>) | no_check]


# default: none


# util_vol_nlist = <nlist>


# mount/dismount command parameter file


# default: no parameter file


# mount_par_file = initHPR.mnt


# Oracle connection name to the primary database


# [primary_db = <conn_name> | LOCAL]


# no default


# primary_db = <conn_name>


# Oracle connection name to the standby database


# [standby_db = <conn_name> | LOCAL]


# no default


# standby_db = <conn_name>


# description of parallel instances for Oracle RAC


# parallel_instances = <inst_desc> | (<inst_desc_list>)


# <inst_desc_list>   - <inst_desc>[,<inst_desc>...]


# <inst_desc>        - <Oracle_sid>:<Oracle_home>@<conn_name>


# <Oracle_sid>       - Oracle system id for parallel instance


# <Oracle_home>      - Oracle home for parallel instance


# <conn_name>        - Oracle connection name to parallel instance


# Please include the local instance in the parameter definition!


# default: no parallel instances


# example for initRAC001.sap:


# parallel_instances = (RAC001:/oracle/RAC/920_64@RAC001,


# RAC002:/oracle/RAC/920_64@RAC002, RAC003:/oracle/RAC/920_64@RAC003)


# local Oracle RAC database homes [no | yes]


# default: no - shared database homes


# loc_ora_homes = yes


# handling of Oracle RAC database services [no | yes]


# default: no


# db_services = yes




# database owner of objects to be checked


# <owner> | (<owner_list>)


# default: all SAP owners


# check_owner = SAPSR3


# database objects to be excluded from checks


# all_part | non_sap | [<owner>.]<table> | [<owner>.]<index>


# | [<owner>.][<prefix>]*[<suffix>] | <tablespace> | (<object_list>)


# default: no exclusion, example:


# check_exclude = (SDBAH, SAPSR3.SDBAD)


# special database check conditions


# ("<type>:<cond>:<active>:<sever>:[<chkop>]:[<chkval>]:[<unit>]", ...)


# check_cond = (<cond_list>)


# database owner of SDBAH, SDBAD and XDB tables for cleanup


# <owner> | (<owner_list>)


# default: all SAP owners


# cleanup_owner = SAPSR3


# retention period in days for brarchive log files


# default: 30


# cleanup_brarchive_log = 30


# retention period in days for brbackup log files


# default: 30


# cleanup_brbackup_log = 30


# retention period in days for brconnect log files


# default: 30


# cleanup_brconnect_log = 30


# retention period in days for brrestore log files


# default: 30


# cleanup_brrestore_log = 30


# retention period in days for brrecover log files


# default: 30


# cleanup_brrecover_log = 30


# retention period in days for brspace log files


# default: 30


# cleanup_brspace_log = 30


# retention period in days for archive log files saved on disk


# default: 30


# cleanup_disk_archive = 30


# retention period in days for database files backed up on disk


# default: 30


# cleanup_disk_backup = 30


# retention period in days for brspace export dumps and scripts


# default: 30


# cleanup_exp_dump = 30


# retention period in days for Oracle trace and audit files


# default: 30


# cleanup_ora_trace = 30


# retention period in days for records in SDBAH and SDBAD tables


# default: 100


# cleanup_db_log = 100


# retention period in days for records in XDB tables


# default: 100


# cleanup_xdb_log = 100


# retention period in days for database check messages


# default: 100


# cleanup_check_msg = 100


# database owner of objects to adapt next extents


# <owner> | (<owner_list>)


# default: all SAP owners


# next_owner = SAPSR3


# database objects to adapt next extents


# all | all_ind | special | [<owner>.]<table> | [<owner>.]<index>


# | [<owner>.][<prefix>]*[<suffix>] | <tablespace> | (<object_list>)


# default: all objects of selected owners, example:


# next_table = (SDBAH, SAPSR3.SDBAD)


# database objects to be excluded from adapting next extents


# all_part | [<owner>.]<table> | [<owner>.]<index>


# | [<owner>.][<prefix>]*[<suffix>] | <tablespace> | (<object_list>)


# default: no exclusion, example:


# next_exclude = (SDBAH, SAPSR3.SDBAD)


# database objects to get special next extent size


# allsel:<size>[/<limit>] | [<owner>.]<table>:<size>[/<limit>]


# | [<owner>.]<index>:<size>[/<limit>]


# | [<owner>.][<prefix>]*[<suffix>]:<size>[/<limit>]


# | (<object_size_list>)


# default: according to table category, example:


# next_special = (SDBAH:100K, SAPSR3.SDBAD:1M/200)


# maximum next extent size


# default: 2 GB - 5 * <database_block_size>


# next_max_size = 1G


# maximum number of next extents


# default: 0 - unlimited


# next_limit_count = 300


# database owner of objects to update statistics


# <owner> | (<owner_list>)


# default: all SAP owners


# stats_owner = SAPSR3


# database objects to update statistics


# all | all_ind | all_part | missing | info_cubes | dbstatc_tab


# | dbstatc_mon | dbstatc_mona | [<owner>.]<table> | [<owner>.]<index>


# | [<owner>.][<prefix>]*[<suffix>] | <tablespace> | (<object_list>)


# | harmful | locked | system_stats | oradict_stats | oradict_tab


# default: all objects of selected owners, example:


# stats_table = (SDBAH, SAPSR3.SDBAD)


# database objects to be excluded from updating statistics


# all_part | info_cubes | [<owner>.]<table> | [<owner>.]<index>


# | [<owner>.][<prefix>]*[<suffix>] | <tablespace> | (<object_list>)


# default: no exclusion, example:


# stats_exclude = (SDBAH, SAPSR3.SDBAD)


# method for updating statistics for tables not in DBSTATC


# E | EH | EI | EX | C | CH | CI | CX | A | AH | AI | AX | E= | C= | =H


# | =I | =X | +H | +I


# default: according to internal rules


# stats_method = E


# sample size for updating statistics for tables not in DBSTATC


# P<percentage_of_rows> | R<thousands_of_rows>


# default: according to internal rules


# stats_sample_size = P10


# number of buckets for updating statistics with histograms


# default: 75


# stats_bucket_count = 75


# threshold for collecting statistics after checking


# <threshold> | (<threshold> [, all_part:<threshold>


# | info_cubes:<threshold> | [<owner>.]<table>:<threshold>


# | [<owner>.][<prefix>]*[<suffix>]:<threshold>


# | <tablespace>:<threshold> | <object_list>])


# default: 50%


# stats_change_threshold = 50


# number of parallel threads for updating statistics


# default: 1


# stats_parallel_degree = 1


# processing time limit in minutes for updating statistics


# default: 0 - no limit


# stats_limit_time = 0


# parameters for calling DBMS_STATS supplied package


# all:R|B|H|G[<buckets>|A|S|R|D[A|I|P|X|D]]:0|<degree>|A|D


# | all_part:R|B[<buckets>|A|S|R|D[A|I|P|X|D]]:0|<degree>|A|D


# | info_cubes:R|B[<buckets>|A|S|R|D[A|I|P|X|D]]:0|<degree>|A|D


# | [<owner>.]<table>:R|B[<buckets>|A|S|R|D[A|I|P|X|D]]:0|<degree>|A|D


# | [<owner>.][<prefix>]*[<suffix>]:R|B[<buckets>|A|S|R|D[A|I|P|X|D]]:0


# |<degree>|A|D | (<object_list>) | NO


# R|B - sampling method:


# 'R' - row sampling, 'B' - block sampling,


# 'H' - histograms by row sampling, 'G' - histograms by block sampling


# [<buckets>|A|S|R|D] - buckets count:


# <buckets> - histogram buckets count, 'A' - auto buckets count,


# 'S' - skew-only, 'R' - repeat, 'D' - default buckets count (75)


# [A|I|P|X|D] - columns with histograms:


# 'A' - all columns, 'I' - indexed columns, 'P' - partition columns,


# 'X' - indexed and partition columns, 'D' - default columns


# 0|<degree>|A|D - parallel degree:


# '0' - default table degree, <degree> - dbms_stats parallel degree,


# 'A' - dbms_stats auto degree, 'D' - default Oracle degree


# default: ALL:R:0


# stats_dbms_stats = ([ALL:R:1,][<owner>.]<table>:R:<degree>,...)


# definition of info cube tables


# default | rsnspace_tab | [<owner>.]<table>


# | [<owner>.][<prefix>]*[<suffix>] | (<object_list>) | null


# default: rsnspace_tab


# stats_info_cubes = (/BIC/D*, /BI0/D*, ...)


# special statistics settings


# (<table>:[<owner>]:<active>:[<method>]:[<sample>], ...)


# stats_special = (<special_list>)


# update cycle in days for dictionary statistics within standard runs


# default: 0 - no update


# stats_dict_cycle = 100


# method for updating Oracle dictionary statistics


# C - compute | E - estimate | A - auto sample size


# default: C


# stats_dict_method = C


# sample size for updating dictionary statistics (stats_dict_method = E)


# <percent> (1-100)


# default: auto sample size


# stats_dict_sample = 10


# parallel degree for updating dictionary statistics


# auto | default | null | <degree> (1-256)


# default: Oracle default


# stats_dict_degree = 4


# update cycle in days for system statistics within standard runs


# default: 0 - no update


# stats_system_cycle = 100


# interval for updating Oracle system statistics


# 0 - NOWORKLOAD, >0 - interval in minutes


# default: 0


# stats_system_interval = 0


# database objects to be excluded from validating structure


# null | all_part | info_cubes | [<owner>.]<table> | [<owner>.]<index>


# | [<owner>.][<prefix>]*[<suffix>] | <tablespace> | (<object_list>)


# default: value of the stats_exclude parameter, example:


# valid_exclude = (SDBAH, SAPSR3.SDBAD)


# recovery type [complete | dbpit | tspit | reset | restore | apply


# | disaster]


# default: complete


# recov_type = complete


# directory for brrecover file copies


# default: $SAPDATA_HOME/sapbackup


# recov_copy_dir = /oracle/HPR/sapbackup


# time period in days for searching for backups


# 0 - all available backups, >0 - backups from n last days


# default: 30


# recov_interval = 30


# degree of parallelism for applying archive log files


# 0 - use Oracle default parallelism, 1 - serial, >1 - parallel


# default: Oracle default


# recov_degree = 0


# number of lines for scrolling in list menus


# 0 - no scrolling, >0 - scroll n lines


# default: 20


# scroll_lines = 20


# time period in days for displaying profiles and logs


# 0 - all available logs, >0 - logs from n last days


# default: 30


# show_period = 30


# directory for brspace file copies


# default: $SAPDATA_HOME/sapreorg


# space_copy_dir = /oracle/HPR/sapreorg


# directory for table export dump files


# default: $SAPDATA_HOME/sapreorg


# exp_dump_dir = /oracle/HPR/sapreorg


# database tables for reorganization


# [<owner>.]<table> | [<owner>.][<prefix>]*[<suffix>]


# | [<owner>.][<prefix>]%[<suffix>] | (<table_list>)


# no default


# reorg_table = (SDBAH, SAPSR3.SDBAD)


# table partitions for reorganization


# [[<owner>.]<table>.]<partition>


# | [[<owner>.]<table>.][<prefix>]%[<suffix>]


# | [[<owner>.]<table>.][<prefix>]*[<suffix>] | (<tabpart_list>)


# no default


# reorg_tabpart = (PART1, PARTTAB1.PART2, SAPSR3.PARTTAB2.PART3)


# database indexes for rebuild


# [<owner>.]<index> | [<owner>.][<prefix>]*[<suffix>]


# | [<owner>.][<prefix>]%[<suffix>] | (<index_list>)


# no default


# rebuild_index = (SDBAH~0, SAPSR3.SDBAD~0)


# index partitions for rebuild


# [[<owner>.]<index>.]<partition>


# | [[<owner>.]<index>.][<prefix>]%[<suffix>]


# | [[<owner>.]<index>.][<prefix>]*[<suffix>] | (<indpart_list>)


# no default


# rebuild_indpart = (PART1, PARTIND1.PART2, SAPSR3.PARTIND2.PART3)


# database tables for export


# [<owner>.]<table> | [<owner>.][<prefix>]*[<suffix>]


# | [<owner>.][<prefix>]%[<suffix>] | (<table_list>)


# no default


# exp_table = (SDBAH, SAPSR3.SDBAD)


# database tables for import


# <table> | (<table_list>)


# no default


# do not specify table owner in the list - use -o|-owner option for this


# imp_table = (SDBAH, SDBAD)


# Oracle system id of ASM instance


# default: +ASM


# asm_ora_sid = <asm_inst> | (<db_inst1>:<asm_inst1>,


# <db_inst2>:<asm_inst2>, <db_inst3>:<asm_inst3>, ...)


# asm_ora_sid = (RAC001:+ASM1, RAC002:+ASM2, RAC003:+ASM3, RAC004:+ASM4)


# asm_ora_sid = +ASM


# Oracle home of ASM instance


# no default


# asm_ora_home = <asm_home> | (<db_inst1>:<asm_home1>,


# <db_inst2>:<asm_home2>, <db_inst3>:<asm_home3>, ...)


# asm_ora_home = (RAC001:/oracle/GRID/11202, RAC002:/oracle/GRID/11202,


# RAC003:/oracle/GRID/11202, RAC004:/oracle/GRID/11202)


# asm_ora_home = /oracle/GRID/11202


# Oracle ASM root directory name


# default: ASM


# asm_root_dir = <asm_root>


# asm_root_dir = ASM

initSID.utl

OB2BARTYPE = SAP


OB2BARLIST = DB13_HSILPDRDB_SVC;


OB2APPNAME = HPR;


OB2BARHOSTNAME = hsilpdrdb_svc;


compress = yes


Regards,

Vijay.K

former_member188883
Active Contributor
0 Kudos

Hi Vijay,

Please attach the initHPR.utl file.


Hope this helps.


Regards

Deepak Kori

Former Member
0 Kudos

Dear Deepak ,

The UTL file contains only the parameters below:

initSID.utl

OB2BARTYPE = SAP


OB2BARLIST = DB13_HSILPDRDB_SVC;


OB2APPNAME = HPR;


OB2BARHOSTNAME = hsilpdrdb_svc;


compress = yes

Thanks & Regards,

Vijay.K

former_member182657
Active Contributor
0 Kudos

Hi Vijay,

Could you please share init<SID>.sap? I also suggest involving the storage specialist, since the issue relates to the archive log storage configuration on the Data Protector side.

One more thing to check is your database size against the tape storage capacity: as you are performing a complete DB backup together with the redo log backup, a lack of space on a tape can sometimes make it search for another available drive. Try to investigate that first.

Thanks,

Gaurav