
Performance question: RAID 10 disks

fabricebourdel
Participant

Hi,

On a dedicated MaxDB server with RAID 10 over 6 disks (3 stripes of 2 mirrored disks) and 256 MB of controller cache, storing data on raw devices, what is the best stripe size? The common values are 64 or 128 KB, compared to the 8 KB data pages.

Maybe it is better to have an odd number of stripes? Then I could reorganise to have one system disk, one "spare disk" and four RAID 10 disks (2 stripes of 2 mirrored disks), because I read that for raw devices the stripe size should equal the primary I/O size (8 KB)...
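Just to put the question into numbers, here is a small sketch (the stripe sizes are the common values mentioned above, not a recommendation):

```python
# How many 8 KB MaxDB data pages fit into one RAID stripe,
# for the stripe sizes discussed in this thread.
PAGE_KB = 8  # MaxDB data page size

for stripe_kb in (8, 64, 128):
    pages_per_stripe = stripe_kb // PAGE_KB
    print(f"stripe {stripe_kb:3d} KB -> {pages_per_stripe:2d} pages per stripe")
```

With an 8 KB stripe every page I/O maps to exactly one stripe on one disk; with larger stripes several consecutive pages land on the same disk before striping moves to the next one.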

Regards!

Accepted Solutions (0)

Answers (2)


Former Member

Hi Fabrice,

what hardware do you use? Is it a SAN? What about the size of the internal cache? From my point of view, this is more important than the stripe size.

Regards

André

fabricebourdel
Participant

Hi André,

The hardware I will use will be "local", physically attached to the machine, not a SAN.

The SCSI card has 256 MB of cache.

So the data cache for MaxDB will be the most important thing...

lbreddemann
Active Contributor

> Maybe it is better to have an odd number of stripes? Then I could reorganise to have one system disk, one "spare disk" and four RAID 10 disks (2 stripes of 2 mirrored disks), because I read that for raw devices the stripe size should equal the primary I/O size (8 KB)...

Maybe it is - maybe not.

The only way to figure this out: test it, measure it.

Whether the different I/O order sizes really make a big difference depends entirely on the hardware you use.
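A crude, hardware-agnostic sketch of the kind of measurement meant here - note this goes through the file system, not a raw device, so real tests should run against the actual RAID volume (e.g. with a dedicated I/O benchmark tool):

```python
# Compare sequential read times for different I/O order sizes on a test file.
# This only illustrates the measurement idea; absolute numbers depend on
# OS caching and are not representative of raw-device performance.
import os
import tempfile
import time

FILE_MB = 16
chunk = os.urandom(1024 * 1024)  # 1 MB of random data

with tempfile.NamedTemporaryFile(delete=False) as f:
    for _ in range(FILE_MB):
        f.write(chunk)
    path = f.name

for block_kb in (8, 64, 128):  # candidate I/O order sizes
    block = block_kb * 1024
    start = time.perf_counter()
    with open(path, "rb") as f:
        while f.read(block):
            pass
    elapsed = time.perf_counter() - start
    print(f"{block_kb:3d} KB reads: {elapsed:.4f} s")

os.remove(path)
```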

regards,

Lars

fabricebourdel
Participant

Hi Lars

Yes, I will have to test different configurations... but that may be a long process... and it is hard to "simulate" the production environment...

Just one thing about the stripe size: I first thought I should use a multiple of the data page size (4 times 8 KB, 8 times 8 KB or more) because MaxDB does some caching. But if, as I read, it should equal the primary I/O size, then MaxDB reads and writes will issue many more I/O orders to the SCSI card...

I saw a MaxDB parameter that indicates the number of "blocks" that have to be written from the cache: 64 on the production machine, 80 on another. At number-of-blocks times 8 KB, that is a lot of data to write in one go... and with a small stripe, that means a lot of I/O orders...
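The arithmetic behind that worry, using the parameter values quoted above (the stripe sizes are again just the common examples from this thread):

```python
# One write burst from the data cache is block_count blocks of 8 KB each;
# a smaller stripe splits that burst into more stripe-sized I/O orders.
PAGE_KB = 8

for block_count in (64, 80):          # values seen on the two machines
    burst_kb = block_count * PAGE_KB  # size of one write burst
    print(f"{block_count} blocks -> {burst_kb} KB per write burst")
    for stripe_kb in (8, 64, 128):
        orders = burst_kb // stripe_kb  # stripe-sized pieces per burst
        print(f"  stripe {stripe_kb:3d} KB -> {orders:2d} I/O orders")
```

So a 64-block burst is 512 KB: with 8 KB stripes that is 64 stripe-sized pieces, with 128 KB stripes only 4 (though the controller may coalesce adjacent pieces anyway).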

It will not be simple to check the efficiency of the read/write operations for all the types of operation the database needs to do...

lbreddemann
Active Contributor

Hi Fabrice,

which parameter do you refer to?

For the response time of your database, what matters most is how quickly MaxDB can perform read I/O, since all writing to the disks is done asynchronously by the pager tasks.

In general, databases shouldn't rely on storage caching - at least not on write caching, since this may result in data loss when the system fails.

If writes are cached by the storage system, then the database will consider them committed - although they might get lost.

This would be especially critical if it applies to your log volumes.

And, btw, yes - performance testing is difficult.

So perhaps it would be a good idea to measure the I/O performance now and try to figure out whether it is actually a bottleneck for your response time.

If it is not, then you won't gain anything from improving it.

The first step should be: enable DBAnalyzer, enable time measurement to get some I/O timings, and gather some OS-based I/O stats.
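As a rough sketch of those three steps - the exact invocations vary between MaxDB versions, and the database name and DBM credentials below are placeholders:

```shell
# Hedged sketch - replace MAXDB and control,secret with your own
# database name and DBM operator credentials.

# 1. Start DBAnalyzer to collect bottleneck and I/O statistics:
dbanalyzer -d MAXDB -u control,secret

# 2. Enable time measurement for I/O timings via the console:
x_cons MAXDB time enable

# 3. Gather OS-based I/O stats, e.g. on Linux:
iostat -x 5
```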

best regards,

Lars

fabricebourdel
Participant

...I was referring to DATA_IO_BLOCK_COUNT; there is also a LOG_IO_BLOCK_COUNT. They control the number of blocks that MaxDB uses when writing from the data cache to disk...

And thanks for the advice about DBAnalyzer.

I think it will not be difficult to get better performance from the new system than from the "old" production system (4 GB RAM -> 16 GB RAM, 2 CPUs -> 4 CPUs, SATA disks -> SCSI RAID 10 disks, one volume (data and log) -> more volumes, MaxDB 32-bit -> MaxDB 64-bit, ...), but it will be more difficult to find the disk subsystem configuration on the new machine that gives the best performance...

Just a simple question: on a strictly identical configuration, does MaxDB 64-bit run quicker/better than MaxDB 32-bit?

lbreddemann
Active Contributor

> ...I was referring to DATA_IO_BLOCK_COUNT; there is also a LOG_IO_BLOCK_COUNT. They control the number of blocks that MaxDB uses when writing from the data cache to disk...

As I've written: since writing to disk happens asynchronously, I wouldn't change these settings.

> I think it will not be difficult to get better performance from the new system than from the "old" production system (4 GB RAM -> 16 GB RAM, 2 CPUs -> 4 CPUs, SATA disks -> SCSI RAID 10 disks, one volume (data and log) -> more volumes, MaxDB 32-bit -> MaxDB 64-bit, ...), but it will be more difficult to find the disk subsystem configuration on the new machine that gives the best performance...

The question would be: what do you do if the performance gain is not as large as the increased investment? "Killing it with iron" may work - or not. E.g. you may not see any improvement from doubling the processor count if your application is not designed to do things in parallel.

> Just a simple question: on a strictly identical configuration, does MaxDB 64-bit run quicker/better than MaxDB 32-bit?

Both versions run stable. Both do exactly the same things with your data.

But with the 64-bit version you don't have the process address space limitations - you can use more memory.

So whenever possible you should use the 64-bit version, no question about that!
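The address space difference in numbers (the practical per-process limit on 32-bit systems is even lower, typically 2-3 GB depending on the OS):

```python
# A 32-bit process can address at most 2**32 bytes of virtual memory,
# so it can never hold the RAM of a 16 GB machine in its data cache.
max_32bit_bytes = 2 ** 32
print(f"32-bit address space: {max_32bit_bytes / 2**30:.0f} GB")  # 4 GB

machine_ram_bytes = 16 * 2**30  # the new 16 GB machine
print(machine_ram_bytes > max_32bit_bytes)  # True
```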

regards,

Lars