ONTAP Discussions

Spread IOPS / CPU load across aggregates

daVikes

Looking to see how people are spreading the load out between controllers and aggregates. Are you doing that per volume? How are you determining load? Number of IOPS per volume? How would you get average IOPS over the past couple of weeks or months? What software are you using to determine volume load on the system?
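The best I've come up with so far is a live, point-in-time view from the CLI, something along these lines (the -sort-key and -max options are from memory, so check them against your ONTAP version):

statistics volume show -sort-key total_ops -max 25

That only captures the moment you run it, though, which is why I'm after averages over weeks or months.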


Starting to wonder if there is a tool like vRealize Ops that could redistribute the load, taking into account the last six months of saved data, to get the best performance out of the system the way DRS does, but doing it via vol moves on the NetApp storage side to distribute load between controllers and aggregates.
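Under the hood I'd assume a tool like that would just be driving non-disruptive vol moves, e.g. (the SVM, volume, and aggregate names here are made up):

volume move start -vserver svm1 -volume vol_nfs_ds01 -destination-aggregate aggr_sas_node2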

 

Thanks!

4 REPLIES

colsen

Hello,

 

We're using a combination of OnCommand Performance Manager as well as the vSphere tools to get an idea of where our heavy hitters are.  When we migrated our ESXi 5.x/7-Mode environment over to ESXi 6.x/ONTAP (using Storage vMotions), we tried to distribute guest workloads evenly across our various tiers of datastores (i.e., SSD/SAS and SATA).  That said, we threw a Flash Pool controller setup into the mix, so now we're back to re-establishing those tiers based on IOPS, node utilization, etc.

 

Anyway, for us, an automated vol move utility would be sort of ugly since we've got 25 TB datastores (NFS) presented to our ESXi farm.  The moves take a couple of days in the best-case scenario, so it's not something we'd want to have going on all the time.  Our approach at present is to have multiple datastores spanning the different aggregates/nodes and then have our VMware folks try to balance guest workloads across them based on occupancy.  Then, when we identify a guest that is off the charts, we'll have them move that one up to SSD (or contact the admin and see if something is wrong with the system: too little memory, a runaway process, etc.).  Having those moves done in DRS would obviously kill us in snaps (if machines were constantly moving from one datastore to the next).
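When we do kick one off, we keep an eye on it with:

volume move show

which lists each active move with its state and percent complete, so at least we know how many more hours of background copy we're in for.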

 

That said, I somewhat regret that we weren't able to implement a "flash first" approach; then we wouldn't have to worry about guest workload contention so much, and the compression/compaction efficiencies are really impressive.  The other somewhat manual process we've been using lately is pulling the following:

 

set advanced

statistics top file show

 

and then you can see if a particular VMDK is going bonkers.
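In practice we usually give it an interval and cap the output, something like the following (option names from memory, so double-check them on your version):

statistics top file show -interval 5 -iterations 1 -max 10

Each entry comes back with the volume and file path, so a VMDK that's going sideways stands out quickly.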

 

Hope that helps!

 

Chris

daVikes

Thanks for the file commands; those will help a bit.  I was doing it more at a volume level, so that will give me a little more information on our NFS datastores.  Our datastores are still 5 TB, but you're right, larger datastores would definitely take longer to vol move around, and I suppose you still need available space to do that, so that could be a challenge.  I would like to think there would be a way to do that to make sure IOPS are even throughout our various aggregates.  Yes, we also have volumes on all of the aggregates to try and spread them out, but with various VM workloads it's difficult to go down that one more level from a storage perspective to keep things even.  Our VMware admins are doing SDRS based just off of space percentage before they move things around, not really taking IOPS into account.  We have Flash Pools, so hybrid aggregates currently, but no SSD pools by themselves.  Eventually we will be going to AFFs, but not for a while.  That will help some things, I suppose, and probably mask others.
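For checking the balance at the aggregate level, the closest I've found is the aggregate flavor of the same statistics command (again, the sort option is from memory):

statistics aggregate show -sort-key total_ops

which at least shows whether one aggregate is carrying a disproportionate share of the ops at that moment.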


colsen

Good luck - sounds like you've got most of the big rocks covered in your design.

 

If you're not already running it, definitely implement OCUM (and 7.2 is just around the corner with converged UM and PM).  The tool isn't 100% of what we had back in the 7-Mode/DFM days, but it's getting there.  I've gotten to the point where I block off some time each week just to look at "top volumes" and see if anything is making the top list that wasn't there previously.  A lot of the time it's just increased workload, but we have also sleuthed out mount issues with Oracle atime/noatime settings that were generating 10K+ "Other" IOPS needlessly.
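If you ever want to confirm that sort of thing from the CLI, a counter-manager sample will break the op types out per volume.  The counter names below are from memory, so verify them with "statistics catalog counter show -object volume" first:

statistics start -object volume -sample-id vol_ops
statistics show -sample-id vol_ops -counter read_ops|write_ops|other_ops
statistics stop -sample-id vol_ops

A volume whose "other" ops dwarf its reads and writes is usually a metadata storm like that atime one.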

 

We're pretty happy so far with our Flash Pools; our initial benchmarking showed performance on par with our SAS aggregates (and sometimes faster).  Rumor has it that NetApp is working on some more dynamic workload-reallocation technologies, so maybe a lot of this "looking at the matrix" searching for performance-balance Zen goes away...

daVikes

" Rumor has it that NetApp is working on some more dynamic workload reallocation technologies"


I think that would be great, as it's much more efficient to let the storage move the bits around vs. Storage DRS and storage pools with VMware, unless it's doing all of that with the storage APIs.  The SVM concept is good; I think it can still be much improved, though, to optimize what is already there.  So let's hope they have some great new features inbound.
