
SAP Backup DB13 Fail

Former Member
0 Kudos

Dear All;

I have an SAP ERP 6.0 system (ABAP stack) installed on Windows Server 2008 with an Oracle 11g database, and it is working fine.

I successfully scheduled the required online and offline backups via transaction DB13.

The server that I want to use as a backup server also runs Windows Server 2008 and is located on the same network as my SAP ERP server.

After that I went to the file initSID.sap, and changed the following:

1- backup_dev_type = disk

2- backup_root_dir = IP of the backup server/E/Amcan_APR_Backups

3- compress = yes

The issue I am facing is that when I go to the backup directory under D:\oracle\SID\sapbackup, I find a file for the backup I started in DB13, but the backup terminates with the following errors:

BR0208I Volume with name APRB01 required in device /dev/nmt0

BR0280I BRBACKUP time stamp: 2013-04-01 16.55.03

BR0226I Rewinding tape volume in device /dev/mt0 ...

BR0278E Command output of 'D:\usr\sap\APR\SYS\exe\uc\NTAMD64\brtools.exe -f mtrew /dev/mt0 && D:\usr\sap\APR\SYS\exe\uc\NTAMD64\brtools.exe -f mtstp /dev/nmt0 64':

BR0252E Function CreateFile() failed for '/dev/mt0' at location BrTapeControl-1

BR0253E errno 2: The system cannot find the file specified.

BR0280I BRBACKUP time stamp: 2013-04-01 16.55.04

BR0279E Return code from 'D:\usr\sap\APR\SYS\exe\uc\NTAMD64\brtools.exe -f mtrew /dev/mt0 && D:\usr\sap\APR\SYS\exe\uc\NTAMD64\brtools.exe -f mtstp /dev/nmt0 64': 5

BR0213E Winding tape volume in device /dev/nmt0 failed

BR0056I End of database backup: bekwqnvh.ant 2013-04-01 16.55.04

BR0280I BRBACKUP time stamp: 2013-04-01 16.55.04

BR0054I BRBACKUP terminated with errors

How can I specify the parameter backup_root_dir correctly, given that I need the backup to be stored on the other Windows server?

Best Regards

~Amal


Accepted Solutions (1)

former_member188883
Active Contributor
0 Kudos

Hi Amal,

BR0278E Command output of 'D:\usr\sap\APR\SYS\exe\uc\NTAMD64\brtools.exe -f mtrew /dev/mt0 && D:\usr\sap\APR\SYS\exe\uc\NTAMD64\brtools.exe -f mtstp /dev/nmt0 64':

BR0252E Function CreateFile() failed for '/dev/mt0' at location BrTapeControl-1

BR0253E errno 2: The system cannot find the file specified.

BR0280I BRBACKUP time stamp: 2013-04-01 16.55.04

BR0279E Return code from 'D:\usr\sap\APR\SYS\exe\uc\NTAMD64\brtools.exe -f mtrew /dev/mt0 && D:\usr\sap\APR\SYS\exe\uc\NTAMD64\brtools.exe -f mtstp /dev/nmt0 64': 5

BR0213E Winding tape volume in device /dev/nmt0 failed

As per the logs, BRTOOLS still looks for a tape device.

Please ensure that no tape-related information is defined in the profile. You can comment out (hash out) the entries wherever a tape device type is mentioned.

Then retry the backup with DB13 and post the results.
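
For example, the tape-related entries that would typically be hashed out for a disk-only backup look like this (parameter names as they appear in the standard init<SID>.sap sample profile; your values may differ):

# tape_copy_cmd = brtools
# tape_address = /dev/nmt0
# tape_address_rew = /dev/mt0
# rewind = "mt -f $ rewind"
# rewind_offline = "mt -f $ offline"
# tape_pos_cmd = "mt -f $ fsf $"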

Regards,

Deepak Kori

Former Member
0 Kudos

Dear Deepak;

I commented out the parameters that refer to tape and re-executed the online backup, but it still ended with errors.

The content of initSID.sap is the following:

-------------------------------------------------------------------------------------------------------------------------------------------------

# @(#) $Id: //bas/720_REL/src/ccm/rsbr/initNT.sap#4 $ SAP

########################################################################

#                                                                      #

# SAP BR*Tools sample profile.                                         #

# The parameter syntax is the same as for init.ora parameters.         #

# Enclose parameter values which consist of more than one symbol in    #

# double quotes.                                                       #

# After any symbol, parameter definition can be continued on the next  #

# line.                                                                #

# A parameter value list should be enclosed in parentheses, the list   #

# items should be delimited by commas.                                 #

# There can be any number of white spaces (blanks, tabs and new lines) #

# between symbols in parameter definition.                             #

# Comment lines must start with a hash character.                      #

#                                                                      #

########################################################################

# backup mode [all | all_data | full | incr | sap_dir | ora_dir

# | all_dir | <tablespace_name> | <file_id> | <file_id1>-<file_id2>

# | <generic_path> | (<object_list>)]

# default: all

backup_mode = all

# restore mode [all | all_data | full | incr | incr_only | incr_full

# | incr_all | <tablespace_name> | <file_id> | <file_id1>-<file_id2>

# | <generic_path> | (<object_list>) | partial | non_db

# redirection with '=' is not supported here - use option '-m' instead

# default: all

restore_mode = all

# backup type [offline | offline_force | offline_standby | offline_split

# | offline_mirror | offline_stop | online | online_cons | online_split

# | online_mirror | online_standby | offstby_split | offstby_mirror

# default: offline

backup_type = online

#backup_type = offline

# backup device type

# [tape | tape_auto | tape_box | pipe | pipe_auto | pipe_box | disk

# | disk_copy | disk_standby | stage | stage_copy | stage_standby

# | util_file | util_file_online | util_vol | util_vol_online

# | rman_util | rman_disk | rman_stage | rman_prep]

# default: tape

backup_dev_type = disk

# backup root directory [<path_name> | (<path_name_list>)]

# default: %SAPDATA_HOME%\sapbackup

backup_root_dir = E:\Amcan_APR_Backups

#backup_root_dir = D:\oracle\APR\sapbackup

# stage root directory [<path_name> | (<path_name_list>)]

# default: value of the backup_root_dir parameter

stage_root_dir = D:\oracle\APR\sapbackup

# compression flag [no | yes | hardware | only | brtools]

# default: no

compress = yes

# compress command

# first $-character is replaced by the source file name

# second $-character is replaced by the target file name

# <target_file_name> = <source_file_name>.Z

# for compress command the -c option must be set

# recommended setting for brbackup -k only run:

# "%SAPEXE%\mkszip -l 0 -c $ > $"

# no default

compress_cmd = "D:\usr\sap\APR\SYS\exe\uc\NTAMD64\mkszip -c $ > $"

# uncompress command

# first $-character is replaced by the source file name

# second $-character is replaced by the target file name

# <source_file_name> = <target_file_name>.Z

# for uncompress command the -c option must be set

# no default

# uncompress_cmd = "D:\usr\sap\APR\SYS\exe\uc\NTAMD64\uncompress -c $ > $"

# directory for compression [<path_name> | (<path_name_list>)]

# default: value of the backup_root_dir parameter

compress_dir = D:\oracle\APR\sapreorg

# brarchive function [save | second_copy | double_save | save_delete

# | second_copy_delete | double_save_delete | copy_save

# | copy_delete_save | delete_saved | delete_copied]

# default: save

archive_function = save

# directory for archive log copies to disk

# default: first value of the backup_root_dir parameter

archive_copy_dir = D:\oracle\APR\sapbackup

# directory for archive log copies to stage

# default: first value of the stage_root_dir parameter

archive_stage_dir = D:\oracle\APR\sapbackup

# delete archive logs from duplex destination [only | no | yes | check]

# default: only

# archive_dupl_del = only

# new sapdata home directory for disk_copy | disk_standby

# no default

# new_db_home = X:\oracle\C11

# stage sapdata home directory for stage_copy | stage_standby

# default: value of the new_db_home parameter

# stage_db_home = /oracle/C11

# original sapdata home directory for split mirror disk backup

# no default

# orig_db_home = /oracle/C11

# remote host name

# no default

# remote_host = <host_name>

# remote user name

# default: current operating system user

# remote_user = <user_name>

# tape copy command [cpio | cpio_gnu | dd | dd_gnu | rman | rman_gnu

# | rman_dd | rman_dd_gnu | brtools | rman_brt]

# default: cpio

tape_copy_cmd = brtools

# disk copy command [copy | copy_gnu | dd | dd_gnu | rman | rman_gnu

# | rman_set | rman_set_gnu | ocopy]

# default: copy

disk_copy_cmd = copy

# stage copy command [rcp | scp | ftp]

# default: rcp

stage_copy_cmd = rcp

# pipe copy command [rsh | ssh]

# default: rsh

pipe_copy_cmd = rsh

# flags for cpio output command

# default: -ovB

cpio_flags = -ovB

# flags for cpio input command

# default: -iuvB

cpio_in_flags = -iuvB

# flags for cpio command for copy of directories to disk

# default: -pdcu

# use flags -pdu for gnu tools

cpio_disk_flags = -pdcu

# flags for dd output command

# default: "obs=16k"

# caution: option "obs=" not supported for Windows

# recommended setting:

# Unix:    "obs=nk bs=nk", example: "obs=64k bs=64k"

# Windows: "bs=nk",        example: "bs=64k"

dd_flags = "bs=64k"

# flags for dd input command

# default: "ibs=16k"

# caution: option "ibs=" not supported for Windows

# recommended setting:

# Unix:    "ibs=nk bs=nk", example: "ibs=64k bs=64k"

# Windows: "bs=nk",        example: "bs=64k"

dd_in_flags = "bs=64k"

# number of members in RMAN save sets [ 1 | 2 | 3 | 4 | tsp | all ]

# default: 1

saveset_members = 1

# additional parameters for RMAN

# following parameters are relevant only for rman_util, rman_disk or

# rman_stage: rman_channels, rman_filesperset, rman_maxsetsize,

# rman_pool, rman_copies, rman_proxy, rman_parms, rman_send

# rman_maxpiecesize can be used to split an incremental backup saveset

# into multiple pieces

# rman_channels defines the number of parallel sbt channel allocations

# rman_filesperset = 0 means:

# one file per save set - for non-incremental backups

# up to 64 files in one save set - for incremental backups

# the others have the same meaning as for native RMAN

# rman_channels = 1

# rman_filesperset = 0

# rman_maxopenfiles = 0

# rman_maxsetsize = 0      # n[K|M] in KB (default) or in MB

# rman_maxpiecesize = 0    # n[K|M] in KB (default) or in MB

# rman_rate = 0            # n[K|M] in KB (default) or in MB

# rman_diskratio = 0

# rman_pool = 0

# rman_copies = 0 | 1 | 2 | 3 | 4

# rman_proxy = no | yes | only

# rman_parms = "BLKSIZE=65536 ENV=(BACKUP_SERVER=HOSTNAME)"

# rman_send = "'<command>'"

# rman_send = ("channel sbt_1 '<command1>' parms='<parameters1>'",

#              "channel sbt_2 '<command2>' parms='<parameters2>'")

# rman_compress = no | yes

# rman_maxcorrupt = (<dbf_name>|<dbf_id>:<corr_cnt>, ...)

# rman_cross_check = none | archive | arch_force

# remote copy-out command (backup_dev_type = pipe)

# $-character is replaced by current device address

# no default

copy_out_cmd = "dd ibs=8k obs=64k of=$"

# remote copy-in command (backup_dev_type = pipe)

# $-character is replaced by current device address

# no default

copy_in_cmd = "dd ibs=64k obs=8k if=$"

# rewind command

# $-character is replaced by current device address

# no default

# operating system dependent, examples:

# HP-UX:   "mt -f $ rew"

# TRU64:   "mt -f $ rewind"

# AIX:     "tctl -f $ rewind"

# Solaris: "mt -f $ rewind"

# Windows: "mt -f $ rewind"

# Linux:   "mt -f $ rewind"

rewind = "mt -f $ rewind"

# rewind and set offline command

# $-character is replaced by current device address

# default: value of the rewind parameter

# operating system dependent, examples:

# HP-UX:   "mt -f $ offl"

# TRU64:   "mt -f $ offline"

# AIX:     "tctl -f $ offline"

# Solaris: "mt -f $ offline"

# Windows: "mt -f $ offline"

# Linux:   "mt -f $ offline"

rewind_offline = "mt -f $ offline"

# tape positioning command

# first $-character is replaced by current device address

# second $-character is replaced by number of files to be skipped

# no default

# operating system dependent, examples:

# HP-UX:   "mt -f $ fsf $"

# TRU64:   "mt -f $ fsf $"

# AIX:     "tctl -f $ fsf $"

# Solaris: "mt -f $ fsf $"

# Windows: "mt -f $ fsf $"

# Linux:   "mt -f $ fsf $"

tape_pos_cmd = "mt -f $ fsf $"

# mount backup volume command in auto loader / juke box

# used if backup_dev_type = tape_box | pipe_box

# no default

# mount_cmd = "<mount_cmd> $ $ $ [$]"

# dismount backup volume command in auto loader / juke box

# used if backup_dev_type = tape_box | pipe_box

# no default

# dismount_cmd = "<dismount_cmd> $ $ [$]"

# split mirror disks command

# used if backup_type = offline_split | online_split | offline_mirror

# | online_mirror

# no default

# split_cmd = "<split_cmd> [$]"

# resynchronize mirror disks command

# used if backup_type = offline_split | online_split | offline_mirror

# | online_mirror

# no default

# resync_cmd = "<resync_cmd> [$]"

# additional options for SPLITINT interface program

# no default

# split_options = "<split_options>"

# resynchronize after backup flag [no | yes]

# default: no

# split_resync = no

# pre-split command

# no default

# pre_split_cmd = "<pre_split_cmd>"

# post-split command

# no default

# post_split_cmd = "<post_split_cmd>"

# pre-shut command

# no default

# pre_shut_cmd = "<pre_shut_cmd>"

# post-shut command

# no default

# post_shut_cmd = "<post_shut_cmd>"

# volume size in KB = K, MB = M or GB = G (backup device dependent)

# default: 1200M

# recommended values for tape devices without hardware compression:

# 60 m   4 mm  DAT DDS-1 tape:    1200M

# 90 m   4 mm  DAT DDS-1 tape:    1800M

# 120 m  4 mm  DAT DDS-2 tape:    3800M

# 125 m  4 mm  DAT DDS-3 tape:   11000M

# 112 m  8 mm  Video tape:        2000M

# 112 m  8 mm  high density:      4500M

# DLT 2000     10/20 GB:         10000M

# DLT 2000XT   15/30 GB:         15000M

# DLT 4000     20/40 GB:         20000M

# DLT 7000     35/70 GB:         35000M

# recommended values for tape devices with hardware compression:

# 60 m   4 mm  DAT DDS-1 tape:    1000M

# 90 m   4 mm  DAT DDS-1 tape:    1600M

# 120 m  4 mm  DAT DDS-2 tape:    3600M

# 125 m  4 mm  DAT DDS-3 tape:   10000M

# 112 m  8 mm  Video tape:        1800M

# 112 m  8 mm  high density:      4300M

# DLT 2000     10/20 GB:          9000M

# DLT 2000XT   15/30 GB:         14000M

# DLT 4000     20/40 GB:         18000M

# DLT 7000     35/70 GB:         30000M

tape_size = 100G

# volume size in KB = K, MB = M or GB = G used by brarchive

# default: value of the tape_size parameter

# tape_size_arch = 100G

# tape block size in KB for brtools as tape copy command on Windows

# default: 64

tape_block_size = 64

# rewind and set offline for brtools as tape copy command on Windows

# yes | no

# default: yes

tape_set_offline = yes

# level of parallel execution

# default: 0 - set to number of backup devices

exec_parallel = 0

# address of backup device without rewind

# [<dev_address> | (<dev_address_list>)]

# no default

# operating system dependent, examples:

# HP-UX:   /dev/rmt/0mn

# TRU64:   /dev/nrmt0h

# AIX:     /dev/rmt0.1

# Solaris: /dev/rmt/0mn

# Windows: /dev/nmt0 | /dev/nst0

# Linux:   /dev/nst0

tape_address = /dev/nmt0

# address of backup device without rewind used by brarchive

# default: value of the tape_address parameter

# operating system dependent

# tape_address_arch = /dev/nmt0

# address of backup device with rewind

# [<dev_address> | (<dev_address_list>)]

# no default

# operating system dependent, examples:

# HP-UX:   /dev/rmt/0m

# TRU64:   /dev/rmt0h

# AIX:     /dev/rmt0

# Solaris: /dev/rmt/0m

# Windows: /dev/mt0 | /dev/st0

# Linux:   /dev/st0

tape_address_rew = /dev/mt0

# address of backup device with rewind used by brarchive

# default: value of the tape_address_rew parameter

# operating system dependent

# tape_address_rew_arch = /dev/mt0

# address of backup device with control for mount/dismount command

# [<dev_address> | (<dev_address_list>)]

# default: value of the tape_address_rew parameter

# operating system dependent

# tape_address_ctl = /dev/...

# address of backup device with control for mount/dismount command

# used by brarchive

# default: value of the tape_address_rew_arch parameter

# operating system dependent

# tape_address_ctl_arch = /dev/...

# volumes for brarchive

# [<volume_name> | (<volume_name_list>) | SCRATCH]

# no default

volume_archive = (APRA01, APRA02, APRA03, APRA04, APRA05,

                  APRA06, APRA07, APRA08, APRA09, APRA10,

                  APRA11, APRA12, APRA13, APRA14, APRA15,

                  APRA16, APRA17, APRA18, APRA19, APRA20,

                  APRA21, APRA22, APRA23, APRA24, APRA25,

                  APRA26, APRA27, APRA28, APRA29, APRA30)

# volumes for brbackup

# [<volume_name> | (<volume_name_list>) | SCRATCH]

# no default

volume_backup = (APRB01, APRB02, APRB03, APRB04, APRB05,

                 APRB06, APRB07, APRB08, APRB09, APRB10,

                 APRB11, APRB12, APRB13, APRB14, APRB15,

                 APRB16, APRB17, APRB18, APRB19, APRB20,

                 APRB21, APRB22, APRB23, APRB24, APRB25,

                 APRB26, APRB27, APRB28, APRB29, APRB30)

# expiration period in days for backup volumes

# default: 30

expir_period = 30

# recommended usages of backup volumes

# default: 100

#tape_use_count = 100

# backup utility parameter file

# default: no parameter file

# util_par_file = initAPR.utl

# additional options for BACKINT interface program

# no default

# util_options = "<backint_options>"

# path to directory BACKINT executable will be called from

# default: sap-exe directory

# util_path = <dir>

# disk volume unit for BACKINT volume backup type

# [disk_vol | sap_data | all_data | all_dbf]

# default: sap_data

# util_vol_unit = <unit>

# additional access to files saved by BACKINT volume backup type

# [none | copy | mount | both]

# default: none

# util_vol_access = <access>

# negative file/directory list for BACKINT volume backup type

# [<file_dir_name> | (<file_dir_list>) | no_check]

# default: none

# util_vol_nlist = <nlist>

# mount/dismount command parameter file

# default: no parameter file

# mount_par_file = initAPR.mnt

# Oracle connection name to the primary database

# [primary_db = <conn_name> | LOCAL]

# no default

# primary_db = <conn_name>

# Oracle connection name to the standby database

# [standby_db = <conn_name> | LOCAL]

# no default

# standby_db = <conn_name>

# description of parallel instances for Oracle RAC

# parallel_instances = <inst_desc> | (<inst_desc_list>)

# <inst_desc_list>   - <inst_desc>[,<inst_desc>...]

# <inst_desc>        - <Oracle_sid>:<Oracle_home>@<conn_name>

# <Oracle_sid>       - Oracle system id for parallel instance

# <Oracle_home>      - Oracle home for parallel instance

# <conn_name>        - Oracle connection name to parallel instance

# Please include the local instance in the parameter definition!

# default: no parallel instances

# example for initRAC001.sap:

# parallel_instances = (RAC001:/oracle/RAC/920_64@RAC001,

# RAC002:/oracle/RAC/920_64@RAC002, RAC003:/oracle/RAC/920_64@RAC003)

# handling of Oracle RAC database services [no | yes]

# default: no

# db_services = yes

# database owner of objects to be checked

# <owner> | (<owner_list>)

# default: all SAP owners

# check_owner = sapr3

# database objects to be excluded from checks

# all_part | non_sap | [<owner>.]<table> | [<owner>.]<index>

# | [<owner>.][<prefix>]*[<suffix>] | <tablespace> | (<object_list>)

# default: no exclusion, example:

# check_exclude = (SDBAH, SAPR3.SDBAD)

# special database check conditions

# ("<type>:<cond>:<active>:<sever>:[<chkop>]:[<chkval>]:[<unit>]", ...)

# check_cond = (<cond_list>)

# database owner of SDBAH, SDBAD and XDB tables for cleanup

# <owner> | (<owner_list>)

# default: all SAP owners

# cleanup_owner = sapr3

# retention period in days for brarchive log files

# default: 30

# cleanup_brarchive_log = 30

# retention period in days for brbackup log files

# default: 30

# cleanup_brbackup_log = 30

# retention period in days for brconnect log files

# default: 30

# cleanup_brconnect_log = 30

# retention period in days for brrestore log files

# default: 30

# cleanup_brrestore_log = 30

# retention period in days for brrecover log files

# default: 30

# cleanup_brrecover_log = 30

# retention period in days for brspace log files

# default: 30

# cleanup_brspace_log = 30

# retention period in days for archive log files saved on disk

# default: 30

# cleanup_disk_archive = 30

# retention period in days for database files backed up on disk

# default: 30

# cleanup_disk_backup = 30

# retention period in days for brspace export dumps and scripts

# default: 30

# cleanup_exp_dump = 30

# retention period in days for Oracle trace and audit files

# default: 30

# cleanup_ora_trace = 30

# retention period in days for records in SDBAH and SDBAD tables

# default: 100

# cleanup_db_log = 100

# retention period in days for records in XDB tables

# default: 100

# cleanup_xdb_log = 100

# retention period in days for database check messages

# default: 100

# cleanup_check_msg = 100

# database owner of objects to adapt next extents

# <owner> | (<owner_list>)

# default: all SAP owners

# next_owner = sapr3

# database objects to adapt next extents

# all | all_ind | special | [<owner>.]<table> | [<owner>.]<index>

# | [<owner>.][<prefix>]*[<suffix>] | <tablespace> | (<object_list>)

# default: all abjects of selected owners, example:

# next_table = (SDBAH, SAPR3.SDBAD)

# database objects to be excluded from adapting next extents

# all_part | [<owner>.]<table> | [<owner>.]<index>

# | [<owner>.][<prefix>]*[<suffix>] | <tablespace> | (<object_list>)

# default: no exclusion, example:

# next_exclude = (SDBAH, SAPR3.SDBAD)

# database objects to get special next extent size

# all_sel:<size>[/<limit>] | [<owner>.]<table>:<size>[/<limit>]

# | [<owner>.]<index>:<size>[/<limit>]

# | [<owner>.][<prefix>]*[<suffix>]:<size>[/<limit>]

# | (<object_size_list>)

# default: according to table category, example:

# next_special = (SDBAH:100K, SAPR3.SDBAD:1M/200)

# maximum next extent size

# default: 2 GB - 5 * <database_block_size>

# next_max_size = 1G

# maximum number of next extents

# default: 0 - unlimited

# next_limit_count = 300

# database owner of objects to update statistics

# <owner> | (<owner_list>)

# default: all SAP owners

# stats_owner = sapr3

# database objects to update statistics

# all | all_ind | all_part | missing | info_cubes | dbstatc_tab

# | dbstatc_mon | dbstatc_mona | [<owner>.]<table> | [<owner>.]<index>

# | [<owner>.][<prefix>]*[<suffix>] | <tablespace> | (<object_list>)

# | harmful | locked | system_stats | oradict_stats | oradict_tab

# default: all abjects of selected owners, example:

# stats_table = (SDBAH, SAPR3.SDBAD)

# database objects to be excluded from updating statistics

# all_part | info_cubes | [<owner>.]<table> | [<owner>.]<index>

# | [<owner>.][<prefix>]*[<suffix>] | <tablespace> | (<object_list>)

# default: no exclusion, example:

# stats_exclude = (SDBAH, SAPR3.SDBAD)

# method for updating statistics for tables not in DBSTATC

# E | EH | EI | EX | C | CH | CI | CX | A | AH | AI | AX | E= | C= | =H

# | =I | =X | +H | +I

# default: according to internal rules

# stats_method = E

# sample size for updating statistics for tables not in DBSTATC

# P<percentage_of_rows> | R<thousands_of_rows>

# default: according to internal rules

# stats_sample_size = P10

# number of buckets for updating statistics with histograms

# default: 75

# stats_bucket_count = 75

# threshold for collecting statistics after checking

# <threshold> | (<threshold> [, all_part:<threshold>

# | info_cubes:<threshold> | [<owner>.]<table>:<threshold>

# | [<owner>.][<prefix>]*[<suffix>]:<threshold>

# | <tablespace>:<threshold> | <object_list>])

# default: 50%

# stats_change_threshold = 50

# number of parallel threads for updating statistics

# default: 1

# stats_parallel_degree = 1

# processing time limit in minutes for updating statistics

# default: 0 - no limit

# stats_limit_time = 0

# parameters for calling DBMS_STATS supplied package

# all:R|B[<buckets>|A|S|R|D[A|I|P|X|D]]:0|<degree>|A|D

# | all_part:R|B[<buckets>|A|S|R|D[A|I|P|X|D]]:0|<degree>|A|D

# | info_cubes:R|B[<buckets>|A|S|R|D[A|I|P|X|D]]:0|<degree>|A|D

# | [<owner>.]<table>:R|B[<buckets>|A|S|R|D[A|I|P|X|D]]:0|<degree>|A|D

# | [<owner>.][<prefix>]*[<suffix>]:R|B[<buckets>|A|S|R|D[A|I|P|X|D]]:0

# |<degree>|A|D | (<object_list>) | NO

# R|B - sampling method:

# 'R' - row sampling, 'B' - block sampling

# [<buckets>|A|S|R|D] - buckets count:

# <buckets> - histogram buckets count, 'A' - auto buckets count,

# 'S' - skew-only, 'R' - repeat, 'D' - default buckets count (75)

# [A|I|P|X|D] - columns with histograms:

# 'A' - all columns, 'I' - indexed columns, 'P' - partition columns,

# 'X' - indexed and partition columns, 'D' - default columns

# 0|<degree>|A|D - parallel degree:

# '0' - default table degree, <degree> - dbms_stats parallel degree,

# 'A' - dbms_stats auto degree, 'D' - default Oracle degree

# default: ALL:R:0

# stats_dbms_stats = ([ALL:R:1,][<owner>.]<table>:R:<degree>,...)

# definition of info cube tables

# default | rsnspace_tab | [<owner>.]<table>

# | [<owner>.][<prefix>]*[<suffix>] | (<object_list>) | null

# default: rsnspace_tab

# stats_info_cubes = (/BIC/D*, /BI0/D*, ...)

# special statistics settings

# (<table>:[<owner>]:<active>:[<method>]:[<sample>], ...)

# stats_special = (<special_list>)

# update cycle in days for dictionary statistics within standard runs

# default: 0 - no update

# stats_dict_cycle = 100

# method for updating Oracle dictionary statistics

# C - compute | E - estimate | A - auto sample size

# default: C

# stats_dict_method = C

# sample size for updating dictionary statistics (stats_dict_method = E)

# <percent> (1-100)

# default: auto sample size

# stats_dict_sample = 10

# parallel degree for updating dictionary statistics

# auto | default | null | <degree> (1-256)

# default: Oracle default

# stats_dict_degree = 4

# update cycle in days for system statistics within standard runs

# default: 0 - no update

# stats_system_cycle = 100

# interval for updating Oracle system statistics

# 0 - NOWORKLOAD, >0 - interval in minutes

# default: 0

# stats_system_interval = 0

# database objects to be excluded from validating structure

# null | all_part | info_cubes | [<owner>.]<table> | [<owner>.]<index>

# | [<owner>.][<prefix>]*[<suffix>] | <tablespace> | (<object_list>)

# default: value of the stats_exclude parameter, example:

# valid_exclude = (SDBAH, SAPR3.SDBAD)

# recovery type [complete | dbpit | tspit | reset | restore | apply

# | disaster]

# default: complete

# recov_type = complete

# directory for brrecover file copies

# default: $SAPDATA_HOME/sapbackup

# recov_copy_dir = D:\oracle\APR\sapbackup

# time period in days for searching for backups

# 0 - all available backups, >0 - backups from n last days

# default: 30

# recov_interval = 30

# degree of paralelism for applying archive log files

# 0 - use Oracle default parallelism, 1 - serial, >1 - parallel

# default: Oracle default

# recov_degree = 0

# number of lines for scrolling in list menus

# 0 - no scrolling, >0 - scroll n lines

# default: 20

# scroll_lines = 20

# time period in days for displaying profiles and logs

# 0 - all available logs, >0 - logs from n last days

# default: 30

# show_period = 30

# directory for brspace file copies

# default: $SAPDATA_HOME/sapreorg

# space_copy_dir = D:\oracle\APR\sapreorg

# directory for table export dump files

# default: $SAPDATA_HOME/sapreorg

# exp_dump_dir = D:\oracle\APR\sapreorg

# database tables for reorganization

# [<owner>.]<table> | [<owner>.][<prefix>]*[<suffix>]

# | [<owner>.][<prefix>]%[<suffix>] | (<table_list>)

# no default

# reorg_table = (SDBAH, SAPR3.SDBAD)

# database indexes for rebuild

# [<owner>.]<index> | [<owner>.][<prefix>]*[<suffix>]

# | [<owner>.][<prefix>]%[<suffix>] | (<index_list>)

# no default

# rebuild_index = (SDBAH~0, SAPR3.SDBAD~0)

# database tables for export

# [<owner>.]<table> | [<owner>.][<prefix>]*[<suffix>]

# | [<owner>.][<prefix>]%[<suffix>] | (<table_list>)

# no default

# exp_table = (SDBAH, SAPR3.SDBAD)

# database tables for import

# <table> | (<table_list>)

# no default

# do not specify table owner in the list - use -o|-owner option for this

# imp_table = (SDBAH, SDBAD)

-------------------------------------------------------------------------------------------------------------------------------------------------

I added drive E of my backup server as a mapped network drive on my SAP server, but the backup still cannot detect it.

Best Regards

~Amal

former_member188883
Active Contributor
0 Kudos

Hi Amal,

Change the following in initSID.sap

OLD VALUE

backup_root_dir = E:\Amcan_APR_Backups

NEW VALUE

backup_root_dir = \\<IP address of backup server>\Amcan_APR_Backups

Ensure the folder Amcan_APR_Backups is shared and grants full permissions to Everyone.
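
For example, the share on the backup server could be created and opened up from an elevated command prompt roughly like this (share name and path taken from this thread; adjust as needed):

net share Amcan_APR_Backups=E:\Amcan_APR_Backups /GRANT:Everyone,FULL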

Hope this helps.

Regards,

Deepak Kori

former_member206552
Active Contributor
0 Kudos

Hi Amal,

Give FULL control rights to SAPService<SID> on your "E:\Amcan_APR_Backups" folder.

If the sapbackup location is on another server, then you have to create the user SAPService<SID> on that server and make sure that its password is the same as on the SAP server.

When you execute the backup through DB13, the user SAPService<SID> is used to store the backup.
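
As a rough sketch, the user and folder rights on the backup server could be set up from an elevated command prompt like this (the user name SAPServiceAPR, the password placeholder and the path are assumptions based on this thread):

net user SAPServiceAPR <password> /add
net localgroup Administrators SAPServiceAPR /add
icacls "E:\Amcan_APR_Backups" /grant "SAPServiceAPR:(OI)(CI)F"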

http://scn.sap.com/message/6320532

best regards

marius

former_member188883
Active Contributor
0 Kudos

Hi Amal,

Is this problem resolved?

Regards,

Deepak Kori

Former Member
0 Kudos

Dear Deepak;

No not yet.

Best Regards

~Amal

former_member188883
Active Contributor
0 Kudos

Hi Amal,

Have you implemented the proposed solution? If yes, please provide the current status of the error.

Regards,

Deepak Kori

Former Member
0 Kudos

Dear Deepak;

I am now able to finish a full backup successfully. When I check in DB12, it confirms that my last successful backup was on 05.04.2013 15:00:19, which is good, but only when the backup is written to the same server where my SAP system is installed.

I want the backup location to be on another backup server on the same network as my SAP server, but the backup_root_dir location on the other server still cannot be detected, even though I mapped the backup server's drive on the SAP server.

Best Regards

~Amal

former_member188883
Active Contributor
0 Kudos

Hi Amal,

As described in my response,

Change the following in initSID.sap

OLD VALUE

backup_root_dir = E:\Amcan_APR_Backups

NEW VALUE

backup_root_dir = \\<IP address of backup server>\Amcan_APR_Backups

Ensure the folder Amcan_APR_Backups is shared and grants full permissions to Everyone.

Please check this and post the results.

In a Windows environment, we need to specify the UNC name instead of the mapped drive name to take a backup on a remote server.
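
A quick sanity check is to verify that the UNC path is reachable from the SAP server before running the backup, for example (IP and share name are placeholders):

dir \\<IP of backup server>\Amcan_APR_Backups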

Regards,

Deepak Kori

former_member188883
Active Contributor
0 Kudos

Hi Amal,

Could you check and update on this?

Regards,

Deepak Kori

Former Member
0 Kudos

Dear Deepak;

Thank you for all the help you have provided so far.

I solved the issue with my backup by doing the following steps:

1) On my ERP server, I ran a ping test to the backup server to make sure that both servers can see each other.

2) I turned off the firewall on both my SAP ERP server and the backup server.

3) On the backup server, I created a new OS-level user called SAPService<SID>, made it an administrator, and gave it the same password that SAPService<SID> uses on the ERP server.

4) I mapped a drive of my backup server on my ERP server via Computer --> Map Network Drive, pointing it at the backup drive with $ appended (the administrative share) so that it can see the entire content of that drive (see the command sketch after this list).

5) I went to the file initSID.sap, which is located under Drive:\app\Administrator\product\11.2.0\dbhome_1\database, and changed the following parameters:

a) backup_dev_type = disk

b) backup_root_dir = \\IP of the backup server\Drive\Folder where the backup will be

c) compress = brtools

d) compress = yes
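
For reference, a rough command-line equivalent of step 4 and the key profile entries from step 5 look like this (the drive letter Z:, the user name SAPServiceAPR and the folder names are placeholders or assumptions):

net use Z: \\<IP of backup server>\E$ /user:SAPServiceAPR *

backup_dev_type = disk
backup_root_dir = \\<IP of backup server>\<backup folder>
compress = yes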

After these steps I was able to take backups (both online and offline) on my backup server.

Thanks again for the effort you all gave me.

Best Regards

~Amal Aloun

Answers (1)

Former Member
0 Kudos

Hi Amal,

In your case, you can try the options below.

1. Map the network drive as a local drive on the server itself, so that it appears as a drive letter like E: or G:.

2. Then use that drive letter in backup_root_dir, for example backup_root_dir = E:\Amcan_APR_Backups.

Or share that network folder and set backup_root_dir to the UNC path \\<server name>\<shared folder name>.

Either approach should work.

If neither works, take the backup on a local disk first and then move it to the desired location.
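
If you go with the local backup plus copy approach, the move can be scripted afterwards, for example with robocopy (the paths below are assumptions based on this thread):

robocopy D:\oracle\APR\sapbackup \\<IP of backup server>\Amcan_APR_Backups /E /Z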

Thanks and Regards,

Vimal