Searching for Linux / SAN implementation experience

stefan_koehler
Active Contributor

Hello,

I have opened this thread because I am looking for contacts to share experiences with Linux and SAN implementations.

Most of our SAP and Oracle systems run on AIX 5L on different hardware platforms (POWER4, POWER5, or POWER6). The hardware and the OS itself are very stable, but not the cheapest.

For the "non-high-performance" systems we are evaluating SAP on Linux on x86 blades (IBM HSx). The blades themselves have no internal disks, so we boot from SAN, and the data disks for SAP and Oracle are SAN disks as well. The SAN disks are located on an IBM DS8000.

I have tried different distributions (SLES 10 SP2 and Oracle VM (RHEL5)), and the behavior is nearly the same with each of them. The SAN implementation works fine as long as we use only one path to the disks (no MPIO!), but as soon as we try to configure MPIO (the built-in kernel module), the whole thing goes down the crapper.

SLES 10 does not support configuring MPIO with YaST, or it crashes when some paths are lost and it tries to recover them.

RHEL5 does not support LVM during installation with MPIO (no LVM option is visible).

We have also contacted the various support teams, but the answer is always nearly the same: MPIO with the management tools will be supported in later versions - please use the command line. And if you then run into problems (for example, freezes when paths are lost), you are totally on your own.

I can't imagine anybody administering the disks on the command line when there are more than 100 paths (for example, with 4 paths per disk).
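For what it's worth, the built-in device-mapper module is driven by a small /etc/multipath.conf. A minimal sketch for a DS8000 might look like this (the vendor/product strings and the tuning values are assumptions, not a tested configuration):

```shell
# /etc/multipath.conf - minimal sketch, values are assumptions

defaults {
        # map LUNs to stable /dev/mapper/mpathN names instead of raw WWIDs
        user_friendly_names yes
}

devices {
        device {
                vendor                  "IBM"
                product                 "2107900"      # DS8000 SCSI product id
                path_grouping_policy    multibus       # spread I/O over all paths
                failback                immediate      # return to restored paths at once
                no_path_retry           5              # queue briefly before failing I/O
        }
}
```

After editing, `multipath -v2` rebuilds the maps and `multipath -ll` lists each LUN with its paths and their states, which is the quickest way to see whether a lost path was actually recovered.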

I also spoke to some other Oracle customers, and most of them do not have a directly attached SAN environment - they use something like a NetApp V-Filer and export the disks via NFS.

How do you run your Linux environment - with local disks, with SAN disks, or with a third-party product like NetApp between your servers and the SAN disk storage?

What is the experience of SAP with this?

Regards

Stefan

P.S.: Please also feel free to contact me via e-mail.

Accepted Solutions (0)

Answers (2)


nelis
Active Contributor

Hi Stefan,

We use HP blades connected to SAN, and SLES9/10 with local mirrored disks only for the OS. HP provides specific drivers for the internal QLogic fibre cards that support multipath/failover. I have tested the failover on many occasions and it works just fine.

I'm not familiar with IBM blades, but in my opinion they should provide specific Linux drivers for their blades that support MPIO.

One thing I can recommend: if you are ever updating SLES, unpresent all your SAN disks first! What happens is that during the update the drivers get overwritten, and all of a sudden the kernel panics when it sees multiple disks - that is, if you are lucky enough to get past the boot loader installation. Also, every time you do a kernel update you need to re-install the MPIO driver support.
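As a rough sanity check before rebooting after a kernel update (a sketch only - the module name assumes the QLogic qla2xxx driver and a SLES-style compressed-cpio initrd):

```shell
# confirm the HBA driver module is installed for the kernel you will boot
modinfo qla2xxx | grep -i '^version'

# check that the driver made it into the initrd; a SAN-boot system
# will not find its root disk on the next reboot without it
zcat /boot/initrd-$(uname -r) | cpio -t 2>/dev/null | grep qla2xxx

# rebuild the initrd if the module is missing
mkinitrd
```

This does not replace re-installing the vendor MPIO support, but it catches the "kernel can't see its boot disk" case before the reboot instead of after.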

Regards,

Nelis

stefan_koehler
Active Contributor

Hello Nelis,

Thanks for sharing your experience.

> I'm not familiar with IBM Blades but in my opinion they should provide specific Linux drivers for their Blades that support MPIO.

Unfortunately not.

IBM supports different HBA drivers for the different OS versions (SLES9/SLES10), but the MPIO module is always the built-in kernel one. The IBM blades use QLogic QMC2462s cards. Here is an overview of the supported HBA drivers: http://www-01.ibm.com/support/docview.wss?rs=591&uid=ssg1S1003277#HS_LS_Table

I also have experience with EMC - EMC likewise delivers specific software to handle the I/O (PowerPath).

How many SAN disks do you have to handle on one host? Does HP provide a disk management tool?

What is your SAN storage device?

Regards

Stefan

nelis
Active Contributor

Hi Stefan,

> How many SAN disks do you have to handle on one host?

On our development system we currently have 5 SAN disks...


srvslssap001:/opt/hp/hp_fibreutils # pvdisplay |grep "PV Name"
  PV Name               /dev/sde1
  PV Name               /dev/sdd1
  PV Name               /dev/sdc1
  PV Name               /dev/sdb1
  PV Name               /dev/cciss/c0d0p2
  PV Name               /dev/sda1

...the cciss device is the internal mirrored disk.

> Does HP provide a disk management tool?

HP provides various tools for the QLogic fibre cards, so you can add SAN disks on the fly without needing a reboot. The actual MPIO is handled by the driver and a kernel parameter, so setup of the disks is done as normal via the command line, or YaST if you prefer. We are using LVM(2), as you can see above.
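As a sketch of what "on the fly" looks like in practice (the device, volume group, and logical volume names here are made up for illustration, and the rescan path assumes a 2.6 kernel with sysfs):

```shell
# rescan all SCSI/FC hosts for newly presented LUNs - no reboot needed
for h in /sys/class/scsi_host/host*; do
        echo "- - -" > "$h/scan"
done

# suppose the new LUN appears as /dev/sdf and gets one partition via fdisk;
# it can then be folded into an existing LVM volume group online
pvcreate /dev/sdf1
vgextend vg_sapdata /dev/sdf1
lvextend -L +50G /dev/vg_sapdata/lv_sapdata
```

The vendor tools mostly wrap the rescan step; the LVM part is the same on any distribution.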

> What is your SAN storage device?

We have 2 x HP StorageWorks EVA 8000s, and our production disks are replicated between the two SANs, which are located in separate data centres.

The QLogic card models we use are...


srvslssap003:/opt/hp/hp_fibreutils # ./adapter_info -m
Adapter Models...
/proc/scsi/qla2xxx/1 - QLA2312
/proc/scsi/qla2xxx/0 - QLA2312

Regards,

Nelis

Former Member

Hi

We have SLES9 and SLES10, with local boot disks and the data on SAN. I do not have the technical details, but as far as I know, quite a bit of manual tweaking was needed to get the SAN disks working over two paths. I can remember times in the early days when the disks were gone after a reboot, or the system was stuck when a path went down. So overall I can confirm your experiences; if I run into one of the sysadmins in the next few days, I will try to get some detailed information.

Regards, Michael