ONTAP Discussions

Volumes LUN Count, Size and Volume Parameters for VMware DataStores

BryanM

I wanted to take a poll of the community on VMware datastore sizing using NetApp FAS with hybrid aggregates. Without going into too much detail, I wanted to see if I could get some information on the general sizes seen in the field. Along with size, I would like to understand what the community does as far as volume parameters are concerned, as well as things like LUN count per volume.

 

 

I have been working with some of the following parameters as a sample myself and was looking for some input.

 

Thin volume with space-reserve disabled for LUN

 

-security-style unix
-percent-snapshot-space
-snapshot-policy
-space-guarantee
-autosize
-autosize-mode
-autosize-increment

-max-autosize

-autosize-grow-threshold-percent
-space-nearly-full-threshold-percent

 

Vol efficiency and so forth..

 

1 REPLY

asulliva

Hello Bryan,

 

I know I may not be the "community" you're referring to, being an employee, but I can offer some thoughts and insights.  I'll also drag @EricNTAP in as well, since we are two of the VMware TMEs for NetApp.

 

I will note that what I have to say might not necessarily reflect "best practices" and may not be right for you, so please evaluate each setting and determine what you believe to be the correct value for your specific situation.  Monitoring, beforehand to set a baseline and afterward to ensure any changes have the intended effect, is critical.  Additionally, particularly for the capacity-centric settings, it is vitally important to set your alerting thresholds so that you/your organization has time to react *appropriately*.  That might mean 70% used is "full", or it might mean 95%; it's entirely up to your organization.

 

Datastore sizing comes down primarily to two things: how quickly the post-process jobs finish (e.g. dedupe, compression), and whether the RPO/RTO meets your needs.  Since FlexVols can be very large these days, their size is no longer a significant concern.  However, ensuring dedupe and compression tasks finish in a reasonable timeframe is important.  Likewise, if you can't replicate/back up the amount of data in a timespan that meets your needs, then the datastore is probably too large.

 

LUN count per volume has no bearing aside from what I noted above...make sure the jobs/tasks you need to accomplish at the volume level are happening.  Also, it should be obvious, but make sure the OS type for the LUN and igroup are set correctly.
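
For example, a minimal sketch with made-up SVM, volume, and igroup names (exact syntax can vary by ONTAP release):

    lun create -vserver svm1 -path /vol/ds_vol01/ds_lun01 -size 2TB -ostype vmware -space-reserve disabled
    igroup create -vserver svm1 -igroup esx_cluster01 -protocol fcp -ostype vmware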

 

Beyond that, looking at individual settings there are a number of different things that can affect them.

 

Security style.  Unix makes life a lot easier for NFS (make sure the usermap is configured correctly if using NTFS with NFS datastores), and shouldn't affect LUNs either way.
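
If you need to set it explicitly, it would look something like this (hypothetical names):

    volume modify -vserver svm1 -volume ds_vol01 -security-style unix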

 

% Snap Reserve.  There are two schools of thought here: 1) set it to 0 and manage snapshot space like any other type of space, or 2) set it to some amount of space that meets or exceeds the amount of changed data you expect in your volume.  I tend to go with 0% snap reserve and use monitoring to make sure that I won't run myself out of space (just deleting data doesn't release the space, the snapshots must be deleted).
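
Setting a 0% snap reserve looks like this (illustrative names again):

    volume modify -vserver svm1 -volume ds_vol01 -percent-snapshot-space 0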

 

SnapShot Policy.  I use "none" and rely on VSC for snapshots.
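
For example:

    volume modify -vserver svm1 -volume ds_vol01 -snapshot-policy none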

 

Space Guarantee.  For volumes, I recommend either none or volume.  Reiterating my thoughts on monitoring above: if you're comfortable with your organization's ability to react to potential aggregate out-of-space conditions, then thin provisioning volumes is fine.  If you're not confident in your monitoring and ability to react (vol move is awesome!), then use thick provisioned volumes to stay safe.  The same applies to LUN space guarantees: if you have monitoring and can react to move the LUN before a free space exhaustion condition occurs, then thin provision.  One thing to note: there is no performance difference between thin and thick volumes or LUNs.
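
As a sketch, thin provisioning both the volume and the LUN would look like this (illustrative names):

    volume modify -vserver svm1 -volume ds_vol01 -space-guarantee none
    lun modify -vserver svm1 -path /vol/ds_vol01/ds_lun01 -space-reserve disabled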

 

You will want to keep oversubscription ratios in mind, especially if you're thin provisioning VMDKs as well (thin on thin on thin).  A relatively small action (e.g. Windows updates) could ripple across the VMs and cause a massive storage utilization spike, which may be temporary (dedupe and compression will eventually catch up) but could lead to problems in the short term.  Again, monitoring is key.
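
As a rough, made-up example of the math: ten 2TB thin LUNs provisioned against 8TB of usable aggregate space is a 2.5:1 oversubscription ratio, and a simultaneous ~10% growth across those VMs (say, patch day) would need about 2TB of real space that may not be there.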

 

I use no volume space guarantee and manage capacity at the aggregate level, using vol move operations to keep aggr utilization in check.
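
A vol move is non-disruptive and looks something like this (illustrative names):

    volume move start -vserver svm1 -volume ds_vol01 -destination-aggregate aggr02
    volume move show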

 

AutoSize, mode, increment, max autosize.  This largely depends on whether you expect to encounter a situation where the volume might run out of space.  With NFS, I recommend always using autosize with the default increment and max size, and the mode set to grow & shrink...but there are some caveats.  You ALWAYS want to have monitoring so you know when an autosize event happens.  Autosize being triggered is, generally, a protection mechanism so things continue to operate as expected.  It is something you want to be aware of so that you can assess the situation and determine whether additional action needs to be taken.
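
A sketch of that configuration (illustrative names and sizes; the increment and max size default to values derived from the volume size, so often only the mode needs to be set):

    volume modify -vserver svm1 -volume ds_vol01 -autosize-mode grow_shrink -max-autosize 20TB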

 

For LUNs, it's a bit different.  If you're using fractional reserve, then autosize shouldn't be necessary, as fractional reserve will always ensure that the LUN can be written to.  You still want monitoring to check for the volume reaching 100% capacity (which will cause NetApp snapshots to stop working, among other things), but it should not affect the VMs in the LUN(s) (again, assuming fractional reserve is turned on).

 

AutoSize grow threshold.  Adjust this according to the size of the volume.  A 1TB volume with a 97% grow threshold means there is only about 30GB left when it grows.  On a 50TB volume that same 3% is 1.5TB of capacity still available, so a 97% threshold might not make sense there.
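
The math is just volume size times (100 minus the threshold).  For example, to leave roughly 1TB of headroom on a 50TB volume you could use a 98% threshold (hypothetical values):

    volume modify -vserver svm1 -volume ds_vol01 -autosize-grow-threshold-percent 98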

 

Fractional Reserve.  In recent versions of ONTAP this is either on or off.  I set it to off.

 

On means that when a snapshot is taken, an amount of capacity equal to the size of any LUNs in the volume is reserved.  This means that, if you're using NetApp volume snapshots, you can't use more than 50% of the volume capacity.  This is done so that even if the volume reaches 100% used there is still enough space for writes to continue to occur.  This keeps the LUN online and ensures that the application doesn't know that something (potentially) bad has happened; however, volume snapshots will be disabled until the used space drops back down.

 

Off simply means that if the volume fills up, the LUN will be taken offline until there is free space available for writes to occur.  Avoid this by managing snapshots and capacity just like you would for any other volume.
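
Turning it off looks like this (the clustered ONTAP parameter takes 0 or 100 rather than on/off; illustrative names):

    volume modify -vserver svm1 -volume ds_vol01 -fractional-reserve 0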

 

Space nearly full threshold.  This is simply an alert that gets triggered at the specified value.  Set at or above the autosize grow threshold (assuming you're using autogrow).
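
For example, with a 97-98% grow threshold you might use something like this (hypothetical value, pick one that gives your team time to react):

    volume modify -vserver svm1 -volume ds_vol01 -space-nearly-full-threshold-percent 98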

 

Snapshot auto-delete policy.  I always recommend setting it to delete the oldest snapshots first (deleting newer snapshots doesn't free as much space).  Evaluate whether you want it to skip user-created snapshots and whether you want to exclude SnapVault or SnapMirror snapshots.
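
A sketch of such a policy (illustrative names; -defer-delete user_created skips user-created snapshots, and -commitment try will not remove snapshots locked by SnapMirror/SnapVault):

    volume snapshot autodelete modify -vserver svm1 -volume ds_vol01 -enabled true -delete-order oldest_first -defer-delete user_created -commitment try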

 

Snap autodelete vs volume grow first.  Evaluate your capacity and oversubscription and determine which is best for your organization.  Snap delete first means you lose point-in-time recoverability to keep the volume online; autogrow with high oversubscription could mean you run the aggregate out of space and cause more bad things to happen.  I prefer volume grow first.
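
Volume grow first maps to the space-mgmt-try-first option (illustrative names):

    volume modify -vserver svm1 -volume ds_vol01 -space-mgmt-try-first volume_grow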

 

Space efficiency.  Always use dedupe if possible; it has a lot of benefits beyond just saving space on disk (the cache tiers are dedupe aware).  Be conscious of when you schedule the post-process job to run...scheduling all volumes on the same aggregate for the same time every night will most likely cause a latency spike and cause all of the jobs to take longer than expected.  Also consider when backups are running against the volume(s)...having the two compete may cause both to take longer than needed.
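
As a sketch, enabling dedupe with a staggered nightly schedule, plus post-process compression only if it fits your workload (illustrative names and schedule; see the next paragraph on compression):

    volume efficiency on -vserver svm1 -volume ds_vol01
    volume efficiency modify -vserver svm1 -volume ds_vol01 -schedule sun-sat@1
    volume efficiency modify -vserver svm1 -volume ds_vol01 -compression true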

 

If you're capacity constrained, but have plenty of IOPS and CPU, then compression might be a good fit for you.

 

Remember that storage efficiency is great for saving GB, but it also leads to higher IOPS density.  This is great for flash-accelerated storage (Flash Cache, Flash Pool), which will bring the more active blocks up to the cache, but it can compound performance issues if you aren't accommodating that increased IOPS density.

 

Access Time Updates.  a.k.a. no_atime_update.  Access time updates can safely be disabled (confusingly, by setting this option to true).  There's not much use for last access times on VMDKs.
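
In 7-Mode this is the no_atime_update volume option; clustered ONTAP has an equivalent -atime-update parameter on volume modify (it may require advanced privilege depending on version).  A sketch with illustrative names:

    vol options ds_vol01 no_atime_update on
    volume modify -vserver svm1 -volume ds_vol01 -atime-update false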

 

Min read ahead.  Set to false/disabled.

 

Read reallocation.  Leave disabled for volumes unless you know the VMDK(s) in the volume will benefit.  I've only used this once, a few years ago, and it was a situation where we had a single VMDK in the datastore and knew the workload well.
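
Off is the default; if you need to check or set it explicitly, I believe the parameter looks like this (illustrative names; values are off, on, or space_optimized):

    volume modify -vserver svm1 -volume ds_vol01 -read-realloc off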

 

QoS.  I always put QoS policy groups with no throughput limit on all of my volumes.  This makes it super easy to characterize the workload (read/write ratios, IO size, etc.) and aids troubleshooting (qos statistics volume ... show).  If using multiple LUNs in a single volume you can put the LUNs into policy groups instead (but not both at the same time...either the volume, or the LUNs in it).
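
A sketch of a no-limit policy group used purely for visibility (illustrative names; omitting -max-throughput means no cap is enforced):

    qos policy-group create -policy-group pg_ds_vol01 -vserver svm1
    volume modify -vserver svm1 -volume ds_vol01 -qos-policy-group pg_ds_vol01
    qos statistics volume characteristics show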

 

Aggregates.  Balance RAID group size for capacity vs risk.  Larger RG sizes mean less capacity penalty for RAID-DP, but also increase the risk posed by multiple disk failures.  Make sure free space reallocation (free-space-realloc) is turned on.
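
Free space reallocation is enabled per aggregate (illustrative name):

    storage aggregate modify -aggregate aggr01 -free-space-realloc on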

 

VMware.  Make sure the host best practices have been set, particularly for NFS datastores.  Also, if you're using SIOC, make sure that all datastores on the same aggregate have it enabled and, ideally, that there are no other workloads sharing the aggregate.  Having non-SIOC-monitored workloads on the aggregate can lessen the effectiveness of SIOC.  And, of course, make sure VAAI is enabled and working (install the VAAI VIB on the hosts if using NFS).
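
On the ONTAP side, VAAI for NFS is enabled with the vstorage option; on the ESXi side, the NFS VAAI plugin is installed as a VIB (the path below is just a placeholder for wherever you stage the NetApp NAS plugin bundle):

    vserver nfs modify -vserver svm1 -vstorage enabled
    esxcli software vib install -d /tmp/NetAppNasPlugin.zip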

 

I put down my thoughts on aggrs and volumes for VMware some time ago here and here.  

 

If you haven't already, be sure to check out the best practices doc, TR-4333.  If you have already seen it, I would very much value your thoughts and opinions on that document...what can be done to improve it, what's missing, etc.  I'm happy to pay for your time with NetApp stickers : )

 

Please let me know if there are other settings, etc. we can help with!

 

Andrew

 
