
Double copy of archive redo logs with TSM backint

benoit-schmid
Contributor
0 Kudos

Hello,

We are going to migrate to TSM backup.

I would like to see an example TSM configuration for brarchive double copies (on two different tapes) of the redo logs.

1. Could you provide a sample configuration for brarchive and TSM?

2. Do you run brarchive with -s and let TSM handle the double copy, or do you run brarchive -ss?

Thanks in advance for your answer.

Accepted Solutions (1)

volker_borowski2
Active Contributor
0 Kudos

> We are going to migrate to TSM backup.
>
> I would like to see an example TSM configuration for brarchive double copies (on two different tapes) of the redo logs.
>
> 1. Could you provide a sample configuration for brarchive and TSM?
> 2. Do you run brarchive with -s and let TSM handle the double copy, or do you run brarchive -ss?

Hi,

TSM shops I know usually use brarchive with "-sd" to do a "single" backup from the SAP side.

The mirror is done in TSM, in one of two ways:

Either by providing two management classes for the redo logs, which means each archive log is effectively transferred twice over the net, in parallel sessions if you like.

Or by setting up a copy of the saved objects inside TSM after they are saved. This is a little less secure, because if the log is destroyed before the copy is done, you have a gap.

Volker

benoit-schmid
Contributor
0 Kudos

> Either by providing two management classes for the redo logs, which means each archive log is effectively transferred twice over the net, in parallel sessions if you like.

Could you please provide me an example of the backint utl file with two classes?

From what I understand, if two classes are defined in the utl file, backint backs up the redo log twice.

Am I right?

> Or by setting up a copy of the saved objects inside TSM after they are saved. This is a little less secure, because if the log is destroyed before the copy is done, you have a gap.

Could you send me a copy of the TSM server config for doing that?

I could use it to understand better and to discuss with my TSM admin.

Thanks in advance for your answer.

volker_borowski2
Active Contributor
0 Kudos

Hi,

an example util file for tdpr3 is here

(the link is in German, but it should give you enough parameter names to find the corresponding English documentation):

http://publib.boulder.ibm.com/infocenter/tsminfo/v6/index.jsp?topic=/com.ibm.itsm.erp.doc/r_dperp_c_...

The management classes are defined like this:


  BRBACKUPMGTCLASS    MDB                # management classes for database backup
  BRARCHIVEMGTCLASS   MLOG1 MLOG2        # management classes for redo log backup

If you only have a single archive session, both copies will be written serially.

If you have two, the first session writes to MLOG1 and the second to MLOG2.
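Putting those pieces together, a minimal backint .utl fragment for parallel double copies could look like this. This is only a sketch based on the parameters mentioned in this thread; server and node settings are omitted, and the class names MLOG1/MLOG2 are examples:

  MAX_ARCH_SESSIONS   2
  REDOLOG_COPIES      2
  BRARCHIVEMGTCLASS   MLOG1 MLOG2

With two archive sessions, each copy should go to its own management class in parallel rather than serialized.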

I only know the client side of TSM.

I do not know what is needed to copy an already backed-up object inside TSM from one class to another, but I am sure it is possible.

Sorry if this is only half the stuff you need.
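For the in-TSM copy discussed above, one common server-side approach is a copy storage pool that is refreshed after each backup run. A rough sketch of the administrative commands follows; the pool and device class names are hypothetical, so verify the exact syntax against your TSM server release:

  /* dsmadmc (TSM administrative client), hypothetical names        */
  /* define a copy pool, then duplicate the primary pool's objects  */
  define stgpool REDOCOPY LTODEV pooltype=copy maxscratch=50
  backup stgpool REDOPRIMARY REDOCOPY

As noted above, until the backup stgpool run completes, a destroyed log has no second copy, which is the gap Volker mentions.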

Volker

benoit-schmid
Contributor
0 Kudos

Hello Volker,

Thanks for the link.

Things are clearer with this sample utl configuration file.

As you use TSM backint, I have a second question.

Have you ever tried to increase MAX_BACK_SESSIONS

when you wanted to speed up an offline backup?

If yes, did it significantly decrease the backup time?

Thanks in advance for your answer.

volker_borowski2
Active Contributor
0 Kudos

> Have you ever tried to increase MAX_BACK_SESSIONS
> when you wanted to speed up an offline backup?

Hi,

the whole session topic depends completely on your environment and hardware.

If you have only a 1 Gbit network connection and speedy backup devices, you might be able to saturate the entire network bandwidth with a single session.

If you have slow tapes, two sessions might be better, not so much for the backup but especially for the restore.

Our big DB is backed up with 6 sessions and multiplexing set to 6 as well, which means 36 datafiles are read at the same time. This requires some disk read performance.

This system has a 10 Gbit connection, and we utilize roughly half of it.

But the biggest gain comes from tuning BUFFSIZE (carefully: you get core dumps when you overdo it).

It is an overall buffer for all sessions, and its valid range depends on the tdpr3 client version.

Our big beast is configured this way:


  MAX_BACK_SESSIONS    6
  MAX_RESTORE_SESSIONS 6
  MAX_ARCH_SESSIONS    2
  REDOLOG_COPIES       1
  RL_COMPRESSION       NO
  MULTIPLEXING         6
  BUFFSIZE             4193792

Hope this helps

Volker

benoit-schmid
Contributor
0 Kudos

Hello Volker,

> But the biggest gain comes from tuning BUFFSIZE (carefully: you get core dumps when you overdo it).
> It is an overall buffer for all sessions, and its valid range depends on the tdpr3 client version.

Could you tell me which method you used to tune BUFFSIZE on your servers?

Thanks in advance for your answer.

volker_borowski2
Active Contributor
0 Kudos

Sorry, I do not remember exactly...

But I'll try to reconstruct it.

There is an overall buffer which must feed all the sessions that can possibly be active concurrently.

I think in the version we had then, it had a size of 32 MB.

The BUFFSIZE parameter is the amount of space dedicated to a single session.

As we can have 6 backup and 2 archive sessions open, the maximum that a single session could get was 32M/8 = 4M (a little less).

If we utilized more sessions, we would need to reduce this buffer, otherwise we would get memory faults.

The 128K default in the example file is way too low. I think it is still related to very old versions of the client.

There was an IBM document about how this value should be calculated (which I currently cannot find again), and you should also crosscheck the release notes of your current version. Maybe they have increased the overall buffer to 64 MB meanwhile.

And there was something related to compression that I do not remember exactly. I think if you use compression (RL_COMPRESSION?), the number of buffers is doubled, so you can only make BUFFSIZE half as big in that case. When we did so, the data was not fed fast enough to keep the tapes streaming, so we decided to switch off compression and use the bigger buffer.
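The buffer arithmetic described above can be sketched as follows. The 32 MB total and the session counts are the values from this thread; the doubling under compression is as remembered here, so crosscheck your client's release notes:

```python
# Per-session BUFFSIZE from a fixed overall buffer (values from this thread).
TOTAL_BUFFER = 32 * 1024 * 1024   # overall buffer, assumed 32 MB for that client version

def max_buffsize(back_sessions: int, arch_sessions: int, compression: bool = False) -> int:
    """Largest BUFFSIZE a single session may use without memory faults."""
    buffers = back_sessions + arch_sessions
    if compression:
        # RL compression is said to double the number of buffers,
        # halving the space available per session.
        buffers *= 2
    return TOTAL_BUFFER // buffers

# 6 backup + 2 archive sessions -> a little under 4 MB per session
print(max_buffsize(6, 2))                      # 4194304
print(max_buffsize(6, 2, compression=True))    # 2097152
```

This matches the BUFFSIZE 4193792 in the config above, which stays just below the 32M/8 = 4M ceiling.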

Oh, and having jumbo frames and a dedicated 10G LAN interface for backup helps as well.

If you do not have 10G adapters, maybe you can team up two or three 1G adapters.

Volker

benoit-schmid
Contributor
0 Kudos

Thanks, Volker, for your detailed information.

I am closing the thread.
