Data Backup and Recovery
Hello all
I admin a number of Oracle SPARC T-series servers, all of which currently have Oracle E-Business Suite R12 application filesystems on ZFS on top of RAID LUNs provided by an EMC CX-4. So EMC provides the redundancy, and ZFS is there not for its checksumming but for its snapshot and caching facilities (these hosts have lots of RAM).
Going forward:
We are migrating to a NetApp FAS2552. I am currently investigating whether to stick with ZFS or switch back to UFS. I have already migrated a cloned development environment over to the NetApp; it is still on ZFS and the performance is great.
So really the question is: do I stick with ZFS and accept that most of the host-side utilities don't support it, or do I switch to UFS and try to leverage SnapDrive/SnapManager, etc., giving up host-side caching performance and ZFS snapshot capability (easy update/upgrade rollback testing; see the sketch below)?
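For context, this is the kind of pre-upgrade safety net we rely on today with ZFS (pool and filesystem names are just placeholders):

  # Snapshot the application filesystem before patching
  zfs snapshot apppool/ebsapps@pre_upgrade

  # If the patch goes badly, roll back to that point
  zfs rollback apppool/ebsapps@pre_upgrade

  # Once the upgrade is confirmed good, drop the snapshot
  zfs destroy apppool/ebsapps@pre_upgrade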
We don't have another cluster at this time, so I cannot use SnapVault/SnapMirror. Ultimately I would like to be able to mount FlexClones (preferably from another host) and perform backups while minimizing Oracle downtime for the once-a-week cold backup that we do, roughly along the lines sketched below.
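Roughly the flow I have in mind, to keep the Oracle outage down to the moment of the snapshot (the SVM, volume, LUN and igroup names are made up, and I haven't verified the exact clustered ONTAP syntax on the 2552 yet):

  # With the database shut down, snapshot the volume holding the LUNs
  volume snapshot create -vserver svm1 -volume oraprod_vol -snapshot coldbkp

  # Restart Oracle, then clone from that snapshot at leisure
  volume clone create -vserver svm1 -flexclone oraprod_clone -parent-volume oraprod_vol -parent-snapshot coldbkp

  # Map the cloned LUN to the backup host's igroup and run the backup from there
  lun mapping create -vserver svm1 -path /vol/oraprod_clone/lun0 -igroup backuphost_ig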
Anyway, if you have already crossed this bridge, I would like to hear your feedback.
One question - when you say T-series boxes, are you using the LDOM functionality, or are they running in full system mode? That's important, because if you're using LDOMs you can't really use things like SDU/SMO. The LUNs are re-virtualized by the IO domain, so all the child domains just see Sun LUNs; the source is hidden.
Assuming you're not using LDOMs, I think the question depends on how dynamic the environment is. If you just want basic backup/restore ability and it doesn't happen that often, then ZFS with SnapCreator would be fine. If you do have a more dynamic environment, then UFS would make things easier, as SMO/SDU would work. The problem with ZFS and cloning is the need to relabel the LUNs when cloning to the same system. You can do this with other LVMs because you can first clone the LUNs and then relabel the metadata before bringing it online. I don't know a good way to do that with ZFS. You can clone to a different server easily enough (see the sketch below), but cloning to the original server is hard.
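For example, bringing a cloned set of LUNs up on a different host is just an import once the clone is mapped (the pool name and alternate root here are only illustrative):

  # Scan for importable pools on the newly mapped clone LUNs
  zpool import

  # Import the cloned pool under an alternate root so its mountpoints don't collide
  zpool import -f -R /clone oraclepool

On the original host that same import fails, because ZFS sees a pool with the same name and GUID as one that is already active.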
My apologies for not clarifying that. No LDOMs in use. Well, technically I should say that there is the initial root/IO domain/global zone that is installed, and it has the full resource allocation.
I do have many zones, though, but I'm able to manage this by bringing the zones down when working on their underlying delegated ZFS filesystems (roughly as shown below).
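For example (the zone, pool and dataset names are placeholders):

  # Halt the zone that has the dataset delegated to it
  zoneadm -z ebszone1 halt

  # Work on the delegated dataset from the global zone, e.g. roll it back
  zfs rollback apppool/ebszone1/apps@pre_patch

  # Bring the zone back up
  zoneadm -z ebszone1 boot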
And I have already seen the problem when FlexCloning to the same system (manually). ZFS sees the cloned LUNs as already being part of an active zpool (the source).
I appreciate your prompt feedback. One of my main concerns has been whether ZFS is completely stable on WAFL.
No worries with ZFS on ONTAP. Check out TR-3633 for details. I'm the author. That's a database-specific TR, but the ZFS information applies to anything with ZFS on ONTAP.
ZFS was always stable, but there were some performance glitches up until about three years ago, when NetApp and Oracle engineering got together and figured out what to do with 4K block management. As long as you follow the procedures in TR-3633, it should perform fine.
I read the TR and wanted to let you know that the content and scope of it is great. Many questions that I had were covered in the doc. Very much appreciated.
Are you aware of the Oracle note "NETAPP Perfstat Utility Reports Block Misalignment On Zpools with ashift: 12" (Doc ID 1550954.1)?
I am not sure how to run the Perfstat check yet, but I do know that I chose solaris_efi as the LUN type, that zpool create started the partition at LBA 256, and that ashift reports 12. However, this doc leads me to believe I should have chosen solaris instead of solaris_efi (how I checked on the host is below).
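For reference, this is roughly how I checked on the host side (the pool name and device path are placeholders):

  # Dump the pool configuration and look for the ashift value of the vdevs
  zdb -C apppool | grep ashift

  # Show the partition table of the whole-disk device ZFS labeled;
  # the first sector of slice 0 shows where the partition starts
  prtvtoc /dev/rdsk/c0t0d0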
The cause information follows:
IO was properly aligned when leaving Solaris but became unaligned due to the "solaris_efi" lun-type on the NETAPP array. This detail is completely invisible to Solaris and requires information available only to NETAPP and the NETAPP array admin. The NETAPP lun-type "solaris_efi" is not intended nor required for all Solaris LUNs with EFI labels, but rather only for LUNs where Solaris slices start at an LBA of the form N*8+2 (e.g. the format(1m) default s0 starting at LBA 34 (34 == 4*8+2)). The lun-type "solaris_efi" causes the NETAPP array to add an offset of 6 to the LBA of each incoming SCSI command so IO which is 4KB aligned within such a slice will be 4KB aligned on the NETAPP WAFL filesystem underlying the LUN. While format(1m) continues to start s0 at LBA 34 by default, modern ZFS partitioning of a "whole disk" starts s0 at LBA 256 providing a 4KB alignment of the start of slice s0. Thus the "add 6 shift" caused by lun-type "solaris_efi" is now forcing the aligned IO to become UNALIGNED. The correct NETAPP lun-type is "solaris" for zpools created on "whole disks" with ashift:12.
That makes sense. The "solaris_efi" LUN type causes that hidden offset to be present, and it misaligns everything. The plain "solaris" LUN type doesn't have the hidden offset.
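The arithmetic is easy to sanity-check in a shell (512-byte sectors, 4 KB = 4096 bytes):

  # format's default s0 start (LBA 34) plus the solaris_efi +6 shift: 4 KB aligned
  echo $(( (34 + 6) * 512 % 4096 ))    # prints 0

  # ZFS whole-disk s0 start (LBA 256) plus the same +6 shift: no longer aligned
  echo $(( (256 + 6) * 512 % 4096 ))   # prints 3072

  # LBA 256 with no shift (plain "solaris" LUN type): aligned
  echo $(( 256 * 512 % 4096 ))         # prints 0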