Do bear in mind that QoS policies can only cap "greedy" workloads (by limiting the volume of data transferred or the number of IOPS). What they cannot do is guarantee a "clear road" for traffic to or from particular volumes. They certainly do work, though: if you have a situation where one or two workloads are much "busier" than the others (perhaps because they come from newer, faster servers), the QoS mechanism can throttle them back to allow other workloads to get their share. Regards, Richard.
Hi Neto, is there any way to "guarantee" performance for VMs in cDOT when using VMFS datastores over FCP? I believe only if we set the QoS limits quite low... but that's not a very effective use of resources.
My current approach is to take the overall maximum aggregate performance and divide it by the aggregate size in TB. That gives me throughput per TB. Then I divide that by the overprovisioning ratio. Since we have 4 TB LUNs/datastores, I multiply the result by 4, which should give me the "guaranteed" IOPS per LUN, so I can add all the LUNs to the same QoS policy.
But the result is quite low...
Is there a better way to do it?
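The calculation described above can be sketched in a few lines. All the figures here are hypothetical placeholders, not real measurements; substitute your own aggregate's numbers:

```python
# Sketch of the per-LUN IOPS calculation described above.
# Every figure below is a hypothetical assumption.
aggr_max_iops = 100_000   # measured max IOPS of the whole aggregate (assumed)
aggr_size_tb = 100        # aggregate size in TB (assumed)
overprov_ratio = 2.0      # overprovisioning ratio (assumed)
lun_size_tb = 4           # each VMFS datastore LUN is 4 TB (from the post)

iops_per_tb = aggr_max_iops / aggr_size_tb        # 1000 IOPS per TB
guaranteed_per_tb = iops_per_tb / overprov_ratio  # 500 IOPS per TB after overcommit
per_lun_limit = guaranteed_per_tb * lun_size_tb   # 2000 IOPS per 4 TB LUN

print(per_lun_limit)
```

With these sample numbers you can see why the result comes out "quite low": the overprovisioning ratio cuts the guaranteed figure in half before it is ever scaled up to the LUN size.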
Has anyone used QoS with VMware 6 and vvols? I believe I should then be able to set QoS per LUN and really limit it per VM (where each vvol is a LUN on the NetApp side). But each vvol might be a different size, so it might be difficult to assign the proper policy (unless I set the same "per VM/vvol" limit for all of them).
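One way around the different-sizes problem might be to scale each vvol's cap with its size rather than using one flat policy. A minimal sketch, assuming a hypothetical guaranteed rate of 500 IOPS per TB (the function name and rate are my own, not anything from NetApp):

```python
def vvol_iops_limit(size_tb: float, iops_per_tb: float = 500) -> float:
    """Scale a vvol's QoS cap with its size.

    iops_per_tb is an assumed guaranteed rate, derived the same way
    as the per-aggregate calculation earlier in the thread.
    """
    return size_tb * iops_per_tb

# A 0.5 TB vvol and a 2 TB vvol get proportional caps:
small = vvol_iops_limit(0.5)   # 250 IOPS
large = vvol_iops_limit(2.0)   # 1000 IOPS
```

The idea is that policies stay proportional to capacity, so a small vvol doesn't receive the same ceiling as a large one; in practice you would likely round these into a handful of policy-group tiers rather than creating one policy per vvol.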
If you get a few minutes, maybe we could have a small chat in Berlin 🙂?
I'm also worried about this section from the user guide (as this is likely to happen with VMware over FC):
How throttling a workload can affect non-throttled workload requests from the same client
In some situations, throttling a workload (I/O to a storage object) can affect the performance of non-throttled workloads if the I/O requests are sent from the same client.

If a client sends I/O requests to multiple storage objects and some of those storage objects belong to Storage QoS policy groups, performance to the storage objects that do not belong to policy groups might be degraded. Performance is affected because resources on the client, such as buffers and outstanding requests, are shared. For example, this might affect a configuration that has multiple applications or virtual machines running on the same host.

This behavior is likely to occur if you set a low maximum throughput limit and there is a high number of I/O requests from the client. If this occurs, you can increase the maximum throughput limit or separate the applications so they do not contend for client resources.
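The "shared client resources" effect quoted above can be illustrated with Little's law (outstanding I/Os = IOPS × latency): throttled requests sit longer in the client's shared queue, leaving fewer slots for everything else. The numbers below are illustrative assumptions, not NetApp or VMware figures:

```python
# Little's law sketch: outstanding I/Os = IOPS * latency.
# All numbers below are illustrative assumptions.
queue_depth = 32             # client/HBA queue depth shared by all workloads
throttled_iops = 500         # QoS cap on the throttled workload
throttled_latency_ms = 50    # latency inflated by QoS queuing at the cap

# Queue slots the throttled workload keeps occupied:
slots_used = throttled_iops * throttled_latency_ms / 1000   # 25 slots
slots_left = queue_depth - slots_used                       # 7 slots remain

# Even at 1 ms latency, the non-throttled workload is now limited:
fast_latency_ms = 1
max_other_iops = slots_left * 1000 / fast_latency_ms        # 7000 IOPS

print(slots_used, slots_left, max_other_iops)
```

This is exactly the scenario the guide warns about: the non-throttled datastore on the same host is capped not by the array, but by queue slots the throttled workload is hogging, which is why the suggested fixes are a higher throughput limit or moving the applications apart.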
As Richard pointed out, QoS is there to cap "greedy" workloads. Since there are no such greedy workloads in our environment, and no performance issues caused by some volumes taking a lot of IOPS away from the rest, we don't have the need to apply QoS.