2015-01-30 05:20 AM - edited 2015-12-18 12:20 AM
I would like to ask for advice on how to prepare a disk layout for a customer running Baan/Infor ERP on Oracle DB 11gR2 (single instance).
The customer has 2 controllers (7-Mode) and wants to run Oracle DB with ASM on Oracle Linux 6.5+ on an OVM 3.3 host in an FC SAN environment; the database is about 4 TB.
All LUNs should be mapped to the virtual server as RDM (1 LUN for OL6, the other LUNs for ASM). Each FAS controller has only one aggregate.
We want to use SnapCreator together with the Oracle plug-in on the client to take consistent snapshots of the Oracle database, and we have to prepare an appropriate disk layout for this.
I have several questions about ASM/OVM and SC:
1. What type of Xen virtualization should the guest use, PVM or HVM? PVM should give better performance. Are there any potential problems with PVM?
2. What LUN/volume layout should be used? Is the layout listed below usable?
a. 1 LUN in a dedicated volume for the OS partition (OL6) - we need an RDM LUN for the OS partitions
b. let's say 8 LUNs of 500 GB on one controller for an ASM disk group holding only datafiles. How many volumes should be used for these LUNs? Is it better to divide them across more volumes (say 2-4 volumes with 2-4 LUNs each) and use the ConsistencyGroup setting in SnapCreator, or just use one big volume with all the LUNs inside? The database is going to grow...
c. let's say 4 LUNs of 500 GB for the FRA ASM group on the other controller. What would the layout be if we did not use an FRA and instead separated archive logs and redo logs? 2 ASM groups - 1 for archive logs, 1 for redo logs - each in its own volume
3. What should the ASM disk group redundancy be? EXTERNAL is probably fine for all ASM groups because of RAID-DP on the FAS controller... Installation guides mostly set NORMAL redundancy for datafiles, but that seems like a waste of capacity when there is already RAID-DP on the SAN controller.
4. Should ASMlib be used, or just raw mapped devices, when creating the ASM disk groups? How do we ensure proper LUN assignment? The NetApp plug-in for Oracle VM is installed on the OVM hosts.
I have tested ASMlib - partitions on RDM devices created with parted and presented to ASM - and I checked the LUN alignment on the controller; it seemed to be aligned properly...
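As a sanity check on the alignment question above: a partition is aligned for WAFL when its starting byte offset is a multiple of 4 KiB. A minimal sketch of that arithmetic (the start sector would come from `parted unit s print` or `/sys/block/xvdb/xvdb1/start`; the helper name is my own):

```python
# Check whether a partition's start offset is aligned to WAFL's 4 KiB
# block size.  Start sectors are the 512-byte units reported by parted
# or by /sys/block/<dev>/<part>/start.
SECTOR_SIZE = 512   # bytes per LBA sector
WAFL_BLOCK = 4096   # WAFL block size in bytes

def is_aligned(start_sector: int) -> bool:
    """True if the partition's byte offset is a multiple of 4 KiB."""
    return (start_sector * SECTOR_SIZE) % WAFL_BLOCK == 0

# Typical values: parted's default start of sector 2048 (1 MiB offset)
# is aligned; the old fdisk default of sector 63 is not.
print(is_aligned(2048))  # True
print(is_aligned(63))    # False -> partial writes on the filer
```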
5. Taking snapshots with SnapCreator: the OL6 virtual server running the Oracle DB (SnapCreator client) on the OVM host is going to use RDMs for the ASM disk groups and the OS disk. As there is no file-system cache on DM-MPIO devices on OVM, it should be enough to use the SC Oracle plug-in to take a consistent snapshot of the database (setting the archive-log or FRA volumes to META_DATA_VOLUME in the SC configuration). The customer wants to use snapshots just as a quick recovery option; they are not the only backup solution, and no SnapMirror or SnapVault is used.
6. Restore - what should be the procedure for restore using SnapCreator?
a. shut down the database and the OL6 guest
b. restore the snapshots of the datafile volumes
c. restore the snapshots of the archive-log volumes (or FRA volumes, if used) in case the archive logs needed to recover the database to the datafile snapshot timestamp have already been deleted
d. start the OL6 guest
e. start and recover the Oracle database
f. rotate the logs
7. Clone - are there any limits or special considerations if I want to use a clone of the ASM snapshots on another virtual server to run a test instance of the ERP, or for a "single table" restore procedure?
Thank you in advance for any suggestions.
Solved!
2015-02-02 10:55 AM
1) I prefer the PVM approach. That always yields better performance. The only time I would use HVM is if there was no way to get PVM drivers installed on the system.
2) Your LUN/volume layout looks good. Unless you expect a huge amount of IO, there is no reason to separate LUNs among volumes. I usually put all 8 of the datafile LUNs in one volume and the 4 archive/control/redo LUNs in a second volume. That means 2 diskgroups in all. Also, make sure the spfile for ASM is not in the datafile diskgroup.
3) Use External redundancy.
4) Whether you use ASMlib or udev rules is up to you. If you're using OL6, you might as well go ahead with ASMlib. It's included and it's overall easier to use. The Plugin is mostly useful for cloning hosts, not management of databases.
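For the udev alternative, a rule per LUN keyed on the device WWID is the usual pattern on OL6/RHEL6. A sketch only - the WWID below is a placeholder, and on Xen PV xvd* devices scsi_id may not return anything, in which case ASMlib (or matching on another unique device attribute) is the fallback:

```
# /etc/udev/rules.d/99-oracle-asm.rules -- hypothetical WWID, one rule per LUN
KERNEL=="sd*", SUBSYSTEM=="block", ENV{DEVTYPE}=="partition", \
  PROGRAM=="/sbin/scsi_id --whitelisted --device=/dev/$name", \
  RESULT=="3600a09803830...", \
  SYMLINK+="oracleasm/data1", OWNER="grid", GROUP="asmadmin", MODE="0660"
```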
5) dm-multipath exists on OVM, and I'd recommend you use that. That's the only way you'll have resilience.
6) For backups, you're correct. The datafile diskgroup is in VOLUMES and the control/arch/redo diskgroup is in META_DATA_VOLUME.
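In SnapCreator config terms that split would look roughly like the following (controller and volume names here are made up for illustration; check the exact list syntax against your SC version):

```
# Snap Creator config excerpt -- hypothetical controller/volume names
VOLUMES=fas01:oradata_vol
META_DATA_VOLUME=fas02:oralog_vol
```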
7) Your restore procedure is correct. Odds are a recovery would only require the datafile volumes to be restored, but if someone destroys the archive/control/redo diskgroup you can recover that too.
8) Cloning is a pain. You'll have to manually make the clones of the volumes, discover them on the OVM server, and then map them to a different guest. You can't bring them back to the original host because there would be duplicate diskgroup names.
2015-02-04 06:16 AM
Thank you for your advice.
While testing this, everything seemed OK and working. I do have some doubts about file-system consistency on the ASM block devices, though...
My configuration (for test purposes only, on the same environment as mentioned above):
- 4 LUNs (1 volume) connected through OVM (FC MPIO) to the PVM guest as RDM, devices /dev/xvdb-/dev/xvde, assigned to ASM disk group +DATA by ASMlib as /dev/oracleasm/DATA1-DATA4
- 2 LUNs (1 volume) connected through OVM (FC MPIO) to the PVM guest as RDM, devices /dev/xvdf-/dev/xvdg, assigned to ASM disk group +FRA by ASMlib as /dev/oracleasm/FRA1-FRA2
If I take a snapshot using SC + the Oracle plug-in while the database is in backup mode - at the moment the data-volume snapshot is taken (over the +DATA ASM disk group and block devices /dev/xvdb-e on the PVM guest) - is there any possibility that some data is still in the PVM OS cache for the /dev/xvd* block devices? I could in theory run the "sync" OS command to flush caches, but I would like to be sure there is no data cached anywhere on the "logical" journey from the Oracle datafile on the ASMlib disk - /dev/xvd* - OVM DM-MPIO device - NetApp LUN...
The same question applies at the moment the database is taken out of backup mode and the redo logs have been switched, when SC takes the snapshot of the META_DATA_VOLUME. The question is: have all buffers really been flushed to the LUN, and is the last written archive log consistently on disk (+FRA - /dev/xvdf-g - OVM - LUN), so that the snapshot of the FRA volume is consistent and usable for a restore if needed?
I just want to be sure that no data stays cached somewhere on its way to the controller's LUN, leaving the snapshot unusable for a restore...
2015-02-04 06:23 AM
ASM devices are opened with O_DIRECT, so there is no risk of having anything buffered when the snapshots are created.
One other thing to mention - if you have any ASM diskgroup spanning volumes, you need to ensure the following options are in the config file:
If you do NOT span volumes with a diskgroup, then set it to just:
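The concrete options appear to have been lost from this post. Based on the SnapCreator documentation these would presumably be the consistency-group parameters, roughly as below - treat the names and values as assumptions to verify against your SC version:

```
# diskgroup spans multiple volumes -> take a consistency-group snapshot
NTAP_CONSISTENCY_GROUP_SNAPSHOT=Y
NTAP_CONSISTENCY_GROUP_TIMEOUT=medium

# diskgroup confined to a single volume -> a plain snapshot is enough
NTAP_CONSISTENCY_GROUP_SNAPSHOT=N
```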
2015-03-05 05:02 AM - edited 2015-03-05 05:03 AM
Thank you once again for your advice; it works really nicely. I would like to ask another question - our client plans to upgrade from Oracle DB 11gR2 to 12c. The installation will be on a new virtual guest on the OVM hypervisor, so essentially an identical environment...
Are any changes necessary in the ASM group design because of using Oracle DB 12c instead of 11gR2? Or are there any other limitations?
Or can I use the same ASM group design (based on ASMlib) - i.e., separate groups for data, FRA/redo, archive logs, and control files?