
parameter "_RESTART_TIME"

Former Member

Hello,

Our monitoring of the DB server shows I/O peaks in 10-minute steps. Our first thought was that this had to be caused by the dbshadow tool, which is set up with a delay of 10 minutes. But a closer look showed us that the timestamps of the peaks and those of dbshadow don't match.

This led us to investigate some parameters more deeply, and we found the parameter "_RESTART_TIME" in the support parameter section of the DBM GUI.

This value defines the time between two savepoints and is set to 600 seconds (10 minutes) by default. We assume that this setting is responsible for the load peaks on our database server.

Since this parameter is in the support section, we don't know if it's a good idea to change the value. And if we do, what else do we have to keep in mind when we decrease or increase the time between two savepoints?

a) Decreasing it leads to more savepoints, but with less data to be written per savepoint.

b) Increasing it leads to fewer savepoints, but more data has to be written per savepoint. Do we also need to increase the amount of memory? (More data has to be kept in memory until it is flushed to disk...?)
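The trade-off in a) and b) can be put into back-of-the-envelope arithmetic. The following Python sketch uses made-up numbers (the page rate is purely illustrative, not a measured MaxDB value); it shows that the savepoint interval changes the burst size, not the total write volume:

```python
# Simple model: the workload dirties pages at a roughly constant rate,
# so the savepoint interval only changes how the writes are bunched up.
# All numbers are hypothetical, for illustration only.

def savepoint_burst(dirty_pages_per_sec: float, interval_sec: float) -> float:
    """Pages written in one savepoint burst, assuming a constant dirty rate."""
    return dirty_pages_per_sec * interval_sec

def total_io_per_hour(dirty_pages_per_sec: float, interval_sec: float) -> float:
    """Total pages written per hour -- independent of the interval."""
    bursts_per_hour = 3600 / interval_sec
    return bursts_per_hour * savepoint_burst(dirty_pages_per_sec, interval_sec)

rate = 50.0  # hypothetical: 50 pages dirtied per second

# a) shorter interval: smaller, more frequent bursts
print(savepoint_burst(rate, 300))   # 15000.0 pages per savepoint
# b) longer interval: larger, rarer bursts
print(savepoint_burst(rate, 1200))  # 60000.0 pages per savepoint
# total write volume per hour is the same either way
print(total_io_per_hour(rate, 300) == total_io_per_hour(rate, 1200))  # True
```

So the choice is mostly about how spiky the I/O is allowed to be, not about how much I/O happens overall.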

Any hints highly appreciated ... GERD

Accepted Solutions (1)


lbreddemann
Active Contributor

> Our monitoring of the DB server shows I/O peaks in 10-minute steps. Our first thought was that this had to be caused by the dbshadow tool, which is set up with a delay of 10 minutes. But a closer look showed us that the timestamps of the peaks and those of dbshadow don't match.

> This led us to investigate some parameters more deeply, and we found the parameter "_RESTART_TIME" in the support parameter section of the DBM GUI.

> This value defines the time between two savepoints and is set to 600 seconds (10 minutes) by default. We assume that this setting is responsible for the load peaks on our database server.

> Since this parameter is in the support section, we don't know if it's a good idea to change the value. And if we do, what else do we have to keep in mind when we decrease or increase the time between two savepoints?

> a) Decreasing it leads to more savepoints, but with less data to be written per savepoint.

> b) Increasing it leads to fewer savepoints, but more data has to be written per savepoint. Do we also need to increase the amount of memory? (More data has to be kept in memory until it is flushed to disk...?)

Hi Gerd,

By decreasing the parameter value you may distribute the write I/O traffic a bit more evenly over time.

So, if your I/O system bottlenecks due to the high write I/O during the savepoint, this could at least shorten the bottleneck periods.

On the other hand, you really should check why there is a bottleneck at all.

Use the DBAnalyzer detail files and check how many pages are written with each savepoint.

Check whether it's reasonable for your storage to handle this I/O load.

Also make sure that time measurement is activated, so you get I/O timing information, and set the DBAnalyzer interval to, say, half of your _RESTART_TIME value, so that you get two samples per savepoint period.

Concerning question b): no, you don't need more memory. What for?

If the cache is completely used and dirty pages need to be flushed to the data area to make room for new pages, a savepoint is triggered anyway.
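That cache behavior can be illustrated with a toy simulation. The cache size, page rate, and trigger logic below are simplified assumptions for illustration, not MaxDB internals; the point is that a savepoint fires either when the timer expires or when the cache fills with dirty pages, so the peak amount of dirty data is capped by the cache size no matter how large _RESTART_TIME is:

```python
# Toy simulation: a savepoint is triggered by whichever comes first --
# the timed interval expiring, or the cache filling up with dirty pages.
# Cache size and dirty rate are made-up numbers, not MaxDB internals.

def simulate(cache_pages: int, dirty_per_sec: int,
             restart_time_sec: int, duration_sec: int):
    """Return (peak dirty pages, number of savepoints) over the run."""
    dirty = 0
    peak = 0
    savepoints = 0
    next_timed = restart_time_sec
    for t in range(1, duration_sec + 1):
        dirty += dirty_per_sec
        if dirty >= cache_pages or t >= next_timed:
            savepoints += 1              # flush dirty pages to the data area
            peak = max(peak, dirty)
            dirty = 0
            next_timed = t + restart_time_sec
    return peak, savepoints

# Same cache, same workload, interval tripled:
peak_short, sp_short = simulate(cache_pages=10_000, dirty_per_sec=50,
                                restart_time_sec=600, duration_sec=3600)
peak_long, sp_long = simulate(cache_pages=10_000, dirty_per_sec=50,
                              restart_time_sec=1800, duration_sec=3600)

# Both runs report the same peak: the cache size caps it, so a longer
# interval does not require more memory.
print(peak_short, sp_short)
print(peak_long, sp_long)
```

In this busy-workload scenario the cache-full trigger dominates, so the interval setting makes no difference to memory pressure at all.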

regards,

Lars

Former Member

Hello Lars,

Thanks for your explanation ... sounds like we have to activate DBAnalyzer anyway.

Regards ... GERD

Answers (0)