ONTAP Discussions
Has anyone used QOS in CDOT yet? How has it worked? Any issues?
Hi,
Thanks for asking this.
For those who have not used QoS, please refer to https://library.netapp.com/ecm/ecm_get_file/ECMP1636068 to learn how Storage QoS works.
Thanks
Have you used it in a production environment? How large of one? File, block, or both?
Hi All,
This is neto from Brazil
How are you?
QoS works on SVMs, volumes, LUNs, and files. I've done many POCs showing it, especially with VMDKs 🙂
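For anyone new to this, the basic workflow in cDOT is to create a policy group with a throughput ceiling and then assign a storage object to it. A minimal sketch, where the policy group, SVM, and volume names (pg-vmdk, svm1, vol_vmdk) and the 5000 IOPS limit are all hypothetical placeholders:

```
::> qos policy-group create -policy-group pg-vmdk -vserver svm1 -max-throughput 5000iops
::> volume modify -vserver svm1 -volume vol_vmdk -qos-policy-group pg-vmdk
::> qos statistics performance show
```

The last command lets you watch per-policy-group throughput and latency to verify the limit is behaving as expected.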
Please let me know how I can help.
All the best
neto
Hi Neto,
Is there any way to "guarantee" performance for VMs in cDOT when using VMFS datastores over FCP? I believe only if we set the QoS limits quite low... but that's not a very effective use of resources.
My current approach is to take the overall max aggregate performance and divide it by the aggregate size in TB. That gives me throughput per TB. Then I divide that by the overprovisioning ratio. Since we have 4 TB LUNs/datastores, I multiply the result by 4, which should give me "guaranteed" IOPS per LUN, so I can add all LUNs to the same QoS policy.
But the result is quite low...
Is there a better way to do it?
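To make the arithmetic above concrete, here is a small sketch of that calculation in Python; all the input numbers (aggregate ceiling, aggregate size, overprovisioning ratio) are hypothetical placeholders, not measurements:

```python
def guaranteed_iops_per_lun(aggr_max_iops: float, aggr_size_tb: float,
                            overprovision_ratio: float, lun_size_tb: float) -> float:
    """Per-TB share of the aggregate's performance ceiling, scaled down
    by the overprovisioning ratio, then scaled up to the LUN size."""
    iops_per_tb = aggr_max_iops / aggr_size_tb
    effective_iops_per_tb = iops_per_tb / overprovision_ratio
    return effective_iops_per_tb * lun_size_tb

# Hypothetical example: 100k IOPS aggregate, 50 TB, 2:1 overprovisioned, 4 TB LUNs
print(guaranteed_iops_per_lun(100_000, 50, 2, 4))  # -> 4000.0
```

As the poster notes, dividing a worst-case ceiling this way tends to produce a conservative (low) per-LUN number.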
Has anyone used QoS with VMware 6 and VVols? Then I believe I could set the QoS per LUN and really limit it per VM (where each VVol is a LUN on the NetApp side). But each VVol might be a different size, so it might be difficult to assign the proper policy (unless I set the same "per VM/VVol" limit for all of them).
If you have a few minutes, maybe we could have a small chat in Berlin 🙂 ?
I'm also worried about this chapter from the user guide (as this is likely to happen with VMware over FC):
How throttling a workload can affect non-throttled workload requests from the same client

In some situations, throttling a workload (I/O to a storage object) can affect the performance of non-throttled workloads if the I/O requests are sent from the same client. If a client sends I/O requests to multiple storage objects and some of those storage objects belong to Storage QoS policy groups, performance to the storage objects that do not belong to policy groups might be degraded. Performance is affected because resources on the client, such as buffers and outstanding requests, are shared. For example, this might affect a configuration that has multiple applications or virtual machines running on the same host.

This behavior is likely to occur if you set a low maximum throughput limit and there is a high number of I/O requests from the client. If this occurs, you can increase the maximum throughput limit or separate the applications so they do not contend for client resources.
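If you do hit that client-side contention, the remedy the guide suggests (raising the ceiling) maps to a one-line change on the policy group. A sketch with hypothetical names and a hypothetical new limit:

```
::> qos policy-group modify -policy-group pg-vmdk -max-throughput 10000iops
```

The alternative fix (separating the applications so they don't share client-side buffers and outstanding-request slots) is a host-side change rather than an ONTAP one.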
All,
As Richard pointed out, if the purpose of QoS is to cap "greedy" workloads, then since there are no such greedy workloads in our environment, and no performance issues caused by some volumes taking a lot of IOPS away from the other volumes, we don't need to apply QoS.
Am I correct here? Please help to confirm.
Heights -
Yes, you are correct.
If there's no contention for I/Os, then QoS may even become a performance bottleneck.
There have been a few posts here on the community, and instances I've seen in customer environments, where applying QoS induced a bit of latency.
I hope this response has been helpful to you.
At your service,
Eugene E. Kashpureff, Sr.
Independent NetApp Consultant http://www.linkedin.com/in/eugenekashpureff
Senior NetApp Instructor, Fast Lane US http://www.fastlaneus.com/
(P.S. I appreciate 'kudos' on any helpful posts.)