Data Backup and Recovery

DB2 on VMware: VMDK vs. RDM

ANDREAS_JANKOWIAK

Hello,

We would like to set up a new VMware landscape running virtual machines, each with a DB2 database.

The NetApp storage will be attached via FC for the datastore(s).

I'm thinking of two ways:

1. Using datastores attached via FC and setting up the VMs / OS with the DB2 databases on VMDK files in the datastore.

2. Using the datastore only for storing the virtual machines (operating system) but connecting the LUNs needed for DB2 via RDM.

Is there a prerequisite for the SC DB2 plugin to use RDM-attached LUNs to snapshot the DB2 database? Or is it also possible when the DB2 database lives in (several) datastores?

Or should I use the VI plugin for snapshotting my VMs in combination with the DB2 plugin for application consistency?

Thanks!

Best Regards,

Andreas

17 REPLIES

oommen

Andreas,

We have tested both ways with SC. In the first case you would need the VIBE + DB2 plugins.

The use of RDM is easy to manage, and you will be able to make the recovery process more granular at the database level. So far I haven't had a chance to compare the performance difference between RDMs and the database being on VMDK files along with the OS. Please do let us know if you have any concerns/questions.

I would use 2 LUNs, one from each volume, for the database: one for the data and the second for the transaction logs. The recovery is done at the data LUN, and we let DB2 do the roll-forward (RF) operation to the desired point in time (PIT).
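
For illustration, that last step is DB2's ROLLFORWARD command; a minimal sketch, using a hypothetical database alias TST and <timestamp> left as a placeholder:

db2 "rollforward db TST to <timestamp> using local time and complete"
# or, to replay everything: db2 "rollforward db TST to end of logs and complete"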

Regards,

Bobby.

ANDREAS_JANKOWIAK

Hi Bobby,

thank you!

I also think that RDMs are the better choice. Another option would be to use several datastores (VMFS) for the different databases (data files and log files), separated from the OS, but I think it's easier to handle RDMs, right? As I understand it, though, it would also be OK to use several VMFS datastores; the SC DB2 plugin would still work.

And maybe I get better security when using NPIV and RDMs (virtual machines only have access to specific LUNs).

We have a database with 3 TB of data. I'm thinking of using one or two volumes, one on each controller, and maybe 2-3 LUNs in each volume with 500 GB per LUN. Do you think performance will be better when I use more LUNs instead of one big LUN per volume? With more LUNs I could use more virtual SCSI controllers. Is there a NetApp best practice? Maybe a how-to document for DB2 with FC? I could only find one for NFS.

Thank you!

Best Regards,

Andreas

steiner

Bobby might have more to add on this in a bit.

I'd recommend going with around 4 LUNs for the datafiles and maybe 2 LUNs for the logs and other stuff. If you're going with Windows, divide up the datafiles among drive letters. If you're going with Linux, use a volume group. The important thing is to isolate the datafiles into a single volume group. That allows you to restore those LUNs, and thus the datafiles, independent of the other data. You'll have to balance that with the limited number of RDMs you get. If you go with fewer than 4 LUNs you can run into some performance bottlenecks related to SCSI operations. It's not that the 'one big LUN' approach is bad, but you will lower the performance ceiling. As long as IO is low, no problem.

I don't usually bother with striping unless you know you're going to have a ton of sequential IO, in which case you probably wouldn't be virtualizing anyway. Just make a regular volume group with the logical volumes sitting on extents distributed across all of the LUNs, as in the sketch below.
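
For illustration, a minimal sketch of such a layout on Linux, assuming the four data LUNs appear as multipath devices /dev/mapper/data_lun1 through data_lun4 (hypothetical names):

# Initialize the four data LUNs as LVM physical volumes
pvcreate /dev/mapper/data_lun1 /dev/mapper/data_lun2 /dev/mapper/data_lun3 /dev/mapper/data_lun4

# A volume group holding ONLY the datafiles, so it can be restored independently
vgcreate db2data /dev/mapper/data_lun1 /dev/mapper/data_lun2 /dev/mapper/data_lun3 /dev/mapper/data_lun4

# Regular (non-striped) logical volume using all free extents;
# add "-i 4" only if you really do want striping for sequential IO
lvcreate -n sapdata -l 100%FREE db2data
mkfs.ext3 /dev/db2data/sapdata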

I've worked with a number of projects testing databases on VMDK/VMFS, and it shows a marked decrease in potential performance compared to RDMs or iSCSI/NFS mounted to the guest itself. Again, it's not bad performance, but it will lower the ceiling, especially if you have a lot of writes.

oommen

Andreas,

I have tested with multiple VMDKs (one for the OS, one for data and another for logs) in SC using the DB2/VIBE plugins. For sure I would use RDM LUNs in this case, given the performance challenge we could have with VMDKs.

Yep, like Jeff stated, distribute the tablespace containers across multiple LUNs.

A couple of questions:

1. Do you know the workload characteristic?

2. Also, is this database DPF-enabled? If that is the case we might have to do a different layout.

Regards,

Bobby.

ANDREAS_JANKOWIAK

Hi Steiner, Hi Bobby,

thank you very much, very helpful!

A couple of questions:

1. Do you know the workload characteristic? --> Not yet, but I will soon.

2. Also, is this database DPF-enabled? If that is the case we might have to do a different layout. --> DPF is not used.

Another question:

Do we need SnapDrive for Unix in order to work with the DB2 plugin?

Do you recommend always using SD in FC / iSCSI environments?

Thanks!

Best Regards,

Andreas

oommen

Andreas,

Do we need SnapDrive for Unix in order to work with the DB2 plugin? ==> You don't need SDU; DB2 tablespaces don't use FS caching, and we don't have to worry about flushing the FS buffers. If you have plans to use Windows, make sure you use SDW.

Do you recommend always using SD in FC / iSCSI environments? ===> No need to flush the FS buffers, but you will have to use OS commands to mount/remount the FS during a recovery.
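
For illustration, a minimal sketch of those OS-level steps around a restore, using example names (instance owner db2tst, database TST, VG db2_TST, mount point /db2/TST/sapdata1); your layout may differ:

# Before the storage-side restore: stop DB2 and take the data FS offline
su - db2tst -c "db2 deactivate db TST && db2stop"
umount /db2/TST/sapdata1
vgchange -a n db2_TST

# ... snapshot restore of the data volume happens here ...

# Afterwards: reactivate the VG, remount, restart DB2
vgchange -a y db2_TST
mount /db2/TST/sapdata1
su - db2tst -c "db2start"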

Regards,

Bobby.

ANDREAS_JANKOWIAK

Hi Bobby, hello to all,

thanks again!

I've now set up the following filesystems for testing:

/dev/sap_TST/db2TST   /db2/TST           ext3   acl,user_xattr,nofail  1 2
/dev/sap_TST/sapmnt   /sapmnt/TST        ext3   acl,user_xattr,nofail  1 2
/dev/sap_TST/usrsap   /usr/sap/TST       ext3   acl,user_xattr,nofail  1 2
/dev/db2_TST/instanz  /db2/TST/db2tst    ext3   acl,user_xattr,nofail  1 2
/dev/db2_TST/sapdata  /db2/TST/sapdata1  ext3   acl,user_xattr,nofail  1 2
/dev/log_TST/logdir   /db2/TST/log_dir   ext3   acl,user_xattr,nofail  1 2
/dev/sap_TST/arch     /db2/TST/archive   ext3   acl,user_xattr,nofail  1 2
/dev/sap_TST/db2adm   /db2/db2tst        ext3   acl,user_xattr,nofail  1 2

As you can see, I work with three volumes (db2_TST consists of 4 LUNs, containing sapdata and the DB2 instance directory) and several logical volumes. I think that should be possible. I set it up with native Linux commands.

My first DB restore was successful.

One more question regarding SnapDrive; maybe you can help:

With SnapDrive it's possible to set up a whole Linux LVM stack, so with "one" command I set up the VG, the logical volume, the LUNs, the igroup mapping and the filesystem mount point, e.g.:

snapdrive storage create -fs /mnt/test -fstype ext3 -hostvol sap_IVV/sapdata -lun <fas>:/vol/TST_SOFT/TST_ARCH2/lunE qpna01:/vol/TST_SOFT/TST_ARCH2/lunF -lunsize 100M

My problem: the command creates the VG with one LV named sapdata and mounts it to /mnt/test. But I want to have several LVs mounted to several mount points, like in the fstab example above. Is this possible? I can't find any option....

Same problem with the storage connect command, e.g.:

snapdrive storage connect -hostvol sap_IVV/sapdata -fs /mnt/TST -lun <fas>:/vol/TST_SOFT/TST_ARCH2/lunE qpna01:/vol/TST_SOFT/TST_ARCH2/lunF

I can only connect the VG sap_IVV with the LV sapdata to /mnt/TST. How can I mount several LVs to different mount points?

Am I misunderstanding something?

Thank you!!

Best Regards,

Andreas

ANDREAS_JANKOWIAK

Anyone?

Best Regards,

Andreas

ktenzer

Hi Andreas,

Sorry about the delay here. We don't specialize in SnapDrive itself, but I have asked some folks who know it if they can comment and offer a solution.

Keith

oommen

Andreas,

Sorry for the delay. I haven't used SDU with SC for Unix so far. I use the PRE/POST steps from SC to do the mounts/re-mounts. I am asking around and will update you soon.
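
For illustration, in a Snap Creator config those steps can be wired in as numbered PRE/POST restore commands; a minimal sketch, assuming your SC version supports these parameters (check the SC Framework docs for your release) and using hypothetical script paths:

# run before the snapshot restore: stop DB2, umount the data FS
PRE_RESTORE_CMD01=/usr/local/bin/db2_stop_and_umount.sh
# run after the restore: remount the FS, db2inidb/restart
POST_RESTORE_CMD01=/usr/local/bin/mount_and_recover_db2.sh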

Thank you for your patience,

Bobby.

ANDREAS_JANKOWIAK

Hi Bobby, Hi Keith,

thanks for your help! Doing the (un)mount via SC is a good idea.

Does the file system structure above look sensible to you?

Best Regards,

Andreas

oommen

Andreas,

Sure. One concern on my part is the DB2 instance libraries being part of the DATA volumes. I am sending an FS layout which I usually discuss with customers during pre-/post-sales engagements.

In the above case, all the DB2 and SAP binaries are separated from the physical database files. You do the snap restore only for sid_sapdata and run db2inidb with the mirror option, which puts the DB into a roll-forward pending state. You could have multiple data volumes, or a TEMP vol if you wanted to keep the DB2 temp tables separate. A rough sketch of that flow is below.
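
A minimal sketch of that recovery flow, assuming SID TST, instance owner db2tst, and sid_sapdata as the data volume (7-Mode snap restore syntax shown only as an example):

# Storage side: restore ONLY the data volume from a snapshot
snap restore -t vol -s <snapshot_name> sid_sapdata

# DB side: initialize the restored image as a mirror -> roll-forward pending state
su - db2tst
db2inidb TST as mirror

# Roll forward through the (un-restored) logs to the desired point in time
db2 "rollforward db TST to end of logs and complete"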

Regards,

Bobby.

ANDREAS_JANKOWIAK

Thank you!

/dev/db2_TST/instanz  /db2/TST/db2tst    ext3   acl,user_xattr,nofail
/dev/db2_TST/sapdata  /db2/TST/sapdata1  ext3   acl,user_xattr,nofail

The DB2 instance libraries are already part of the data volume db2_TST.

BR,

Andreas

oommen

Andreas,

In order to preserve the DB2 instance libraries, I would keep them out of the DB2 data volumes.

Regards,

Bobby

ANDREAS_JANKOWIAK

Hi Bobby,

hmm, with NFS and DB2 you recommended locating the instance folder /db2/<SID>/db2<sid> on the data vol; why not with FCP?

Also in your recommended FS layout (screenshot) it's located in the data vol.

BR,

Andreas

oommen

Andreas,

I see what the confusion is.

/db2/<SID>/db2<sid> is not the instance directory - it is part of the database, where you have the pointers to the system catalog tables. If you do a 'db2 list db directory' you will see the path for it.

/db2/db2<sid> is the location of the instance files, where you will have the sqllib directory as well as the DB2 software files, if you haven't changed the default SAP install location for DB2.
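
For example, one way to see both paths on the test system above (SID TST, instance owner db2tst):

su - db2tst
db2 list db directory     # "Local database directory" shows e.g. /db2/TST
echo $HOME                # instance home, e.g. /db2/db2tst
ls $HOME/sqllib           # the instance's sqllib directory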

Please let me know if you have any concerns.

Regards,

Bobby

ANDREAS_JANKOWIAK

Hi Bobby,

yes, this was a misunderstanding ;o)

We don't want too many volumes per system, so we would locate /db2/db2<sid> in the volume TST_SOFT, which will hold the binaries for DB2 and SAP.

Best Regards,

Andreas
