Network and Storage Protocols
Hi all,
I am designing a storage system for a Hyper-V environment.
The environment is as follows:
- Multiple independent Microsoft clusters, each consisting of 4 physical nodes
- The customer wants to assign five 4TB LUNs to each cluster and create around 50 virtual machines (VHDs) per LUN
We are planning to use SATA disks with two 512GB PAM cards. Is there any best practice for using PAM with Hyper-V?
How do I decide on the fractional reserve and snap reserve for each LUN in this case?
Thanks in advance.
Regards,
Babar
Babar -
Two very interesting questions - and they are interrelated. It's a complicated set of subjects, and the answers pertain to all virtualized environments.
Fractional reserve and snap reserve: in most SAN environments, snap reserve is set to zero, since fractional reserve is a separate mechanism for doing the same thing.
The default fractional reserve of 100% is overkill. A 100% rate of change is rare in most day-to-day environments, and that is especially true of datastores holding an OS, which does not change very much once it has been set up.
When fractional reserve or snap reserve needs to be adjusted, I measure the rate of change with 'snap list' and 'snap delta' and size the reserves accordingly.
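A rough sketch of that check-and-adjust cycle on a 7-mode controller (the volume name hyperv1 here is just a placeholder):

    # See how much space each snapshot holds and the rate of change between them
    snap list hyperv1
    snap delta hyperv1

    # If the measured rate of change is low, reclaim the reserves
    # (0 here, or whatever the measured change rate justifies)
    snap reserve hyperv1 0
    vol options hyperv1 fractional_reserve 0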
There are many strategies for implementing thin provisioning of LUNs in a SAN environment. I'm fond of turning off all space reservations - set the space guarantee of the volume containing the LUNs to 'none'. I turn on auto grow and snap auto delete with try-first set for auto grow. (I prefer not to delete snapshots - they're taken for a reason!)
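A minimal sketch of that volume-level setup, again assuming 7-mode syntax with a placeholder volume name and sizes:

    # No up-front space guarantee for the volume holding the LUNs
    vol options hyperv1 guarantee none

    # Grow the volume first when space runs low, rather than deleting snapshots
    vol autosize hyperv1 -m 600g -i 20g on
    vol options hyperv1 try_first volume_grow
    snap autodelete hyperv1 on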
But there's another strategy you may wish to consider for optimum space utilization and cache (PAM) performance. Don't make 50 copies of these LUNs (VMs); make 50 clones. The space savings from cloning virtual machines is huge (50 x 20G = 1TB as full copies, versus roughly 40G for 50 clones of a 20G image). There's also an advantage in cache performance: when you're using clones, Data ONTAP only needs to cache the shared blocks once, instead of holding 50 copies of the same blocks in cache. Be sure to keep your data sets separate from your OS images to segregate your rate of change in snapshots.
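One way to do that in 7-mode is with snapshot-backed LUN clones (FlexClone volumes or file clones are alternatives, depending on your ONTAP version and licenses); the snapshot name, LUN paths and names below are made up for illustration:

    # Snapshot to back the clones
    snap create hyperv1 golden_base

    # Each clone shares its blocks with the golden LUN until it is written to
    lun clone create /vol/hyperv1/vm01.lun -o noreserve -b /vol/hyperv1/golden.lun golden_base
    lun clone create /vol/hyperv1/vm02.lun -o noreserve -b /vol/hyperv1/golden.lun golden_base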
I hope this response has been helpful to you.
At your service,
Eugene Kashpureff
(P.S. I appreciate points for helpful or correct answers.)
Hi Babar & Eugene,
"There are many strategies for implementing thin provisioning of LUNs in a SAN environment. I'm fond of turning off all space reservations - set the space guarantee of the volume containing the LUNs to 'none'. I turn on auto grow and snap auto delete with try-first set for auto grow. (I prefer not to delete snapshots - they're taken for a reason!)
But there's another strategy you may wish to consider for optimum space utilization and cache (PAM) performance. Don't make 50 copies of these LUNs (VMs); make 50 clones. The space savings from cloning virtual machines is huge (50 x 20G = 1TB as full copies, versus roughly 40G for 50 clones of a 20G image). There's also an advantage in cache performance: when you're using clones, Data ONTAP only needs to cache the shared blocks once, instead of holding 50 copies of the same blocks in cache. Be sure to keep your data sets separate from your OS images to segregate your rate of change in snapshots."
I like to do both of these, with a twist.
First, even if I am not planning on overcommitting the storage, I like to remove the reservations and set the guarantee to "none". I don't use autogrow (I rely on monitoring to decide if/when I need to grow), but I do use snapshot autodelete. I put all my LUNs in qtrees, then remove the LUN space reservation and set a threshold quota on the qtree. The threshold quota is a soft quota that also sends an SNMP trap to the monitoring tool of my choice.

I also try to think out the layout: I put the OS on a VHD, the majority of the page file on a separate VHD, and application data on one or more additional VHDs. I group related OS VHDs on a LUN, then I can clone that LUN within the volume to create a like layout, and I turn dedupe on for that "OS" volume. I put the page file VHDs grouped on LUNs that reside in a separate volume that is not deduped or overcommitted, but still has the same qtree/quota monitoring; I find that an easy way to figure out what the optimal page file sizes are. I create separate volumes for app data, with grouping and dedupe options depending on the nature of the data.

Here's an example I started the other day:
The volume Hyperv1 is 500GB and has no reserve or guarantee, and I have enabled dedupe on it. Inside I have two qtrees, each containing a LUN. One is a 100GB thin LUN that contains a single VHD with a Windows 7 guest at the moment. The other is a 200GB LUN that contains 3 VHDs: a Windows 2008 R2 domain controller, a Windows 2008 R2/Exchange 2010 Hub/CAS server, and a Windows 2008 R2/Exchange 2010 mailbox server. For the three Windows servers, I started with a sysprep image, then just cloned the LUN, did unattended/automated installs, and patched up all the hotfixes and service packs. I can look in with the Data ONTAP PowerShell Toolkit v1.2 and get great visibility into what's actually happening to my space.
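If you only have console access, here is a rough sketch of the same layout and space checks from the controller CLI (7-mode syntax; the aggregate, qtree, and LUN names are placeholders I've made up, and the commented lines show the sort of threshold-only entries that would go in /etc/quotas):

    # 500GB volume, no guarantee, dedupe enabled
    vol create hyperv1 aggr1 500g
    vol options hyperv1 guarantee none
    sis on /vol/hyperv1

    # Qtrees for the LUNs, with threshold-only tree quotas defined in /etc/quotas
    qtree create /vol/hyperv1/qt_clients
    qtree create /vol/hyperv1/qt_servers
    #   /vol/hyperv1/qt_clients   tree   -   -   90g    -   -
    #   /vol/hyperv1/qt_servers   tree   -   -   180g   -   -
    quota on hyperv1

    # Thin LUNs inside the qtrees (the ostype depends on how the LUN is presented)
    lun create -s 100g -t windows_2008 -o noreserve /vol/hyperv1/qt_clients/win7.lun
    lun create -s 200g -t windows_2008 -o noreserve /vol/hyperv1/qt_servers/exchange.lun

    # What the space is actually doing
    df -r hyperv1     # used and reserved space
    df -s hyperv1     # savings from dedupe
    lun show -v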
When you enable FlexScale for the Flash Cache, you cache metadata and normal data blocks by default. That's the mix you want in a Hyper-V environment. Caching of low-priority (lopri) blocks is disabled by default; you wouldn't enable it unless you had a need to cache long read chains (sequential reads). If you're running multiple VMs you won't see long read chains anyway; the read mix becomes more random.
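For reference, those defaults map to the flexscale options on the controller; a quick sketch of checking and setting them on a 7-mode system with the PAM/Flash Cache module installed:

    # Show the current Flash Cache settings
    options flexscale

    # Defaults: cache metadata and normal user data, skip low-priority (sequential) blocks
    options flexscale.enable on
    options flexscale.normal_data_blocks on
    options flexscale.lopri_blocks off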
JohnFul