(Cross-posted on the Toasters list as well).
We're revisiting how we set up our aggregates and I want to see how others out there do it. Specifically, what strategies do you use to ensure that key applications or environments get the performance they need on shared storage?
Typically we create large aggregates from a homogeneous disk type: 15K SAS disks in one aggregate, SATA in another. When a system has only a single disk type, we might put 60 15K disks in one aggregate and 60 in another, one assigned to each controller.
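For reference, that layout amounts to something like the following (7-mode syntax; the aggregate names and disk counts are just illustrative, and on clustered ONTAP the equivalent would be "storage aggregate create" with -disktype/-diskcount):

    aggr create aggr_sas15k -T SAS 60
    aggr create aggr_sata -T SATA 60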
The idea is that more spindles per aggregate gives us the best overall performance. However, some applications/workloads are more important than others, and some can be "bullies" that impact the important stuff. Ideally we'd keep our random OLTP workloads on one filer and heavy sequential workloads on another (maybe dedicated).
We've also been discussing creating multiple, smaller aggregates that we then assign to specific workloads, guaranteeing those spindles for those workloads. That lowers the maximum possible performance, but gives better protection against "bullies"[1]. A rough sketch of what we have in mind is below.
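Something like this per controller, again in 7-mode syntax with made-up names, sizes, and disk counts:

    aggr create aggr_oltp -T SAS 32
    aggr create aggr_seq -T SAS 28
    vol create vol_oltp_db aggr_oltp 2t    (OLTP datastore pinned to its own spindles)
    vol create vol_backup aggr_seq 4t      (heavy sequential traffic isolated)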
I also know ONTAP has some I/O QoS options, but I'm less inclined to go that direction.
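For completeness, in case anyone has experience with it: my understanding is that on clustered ONTAP 8.2+ capping a bully would look roughly like the following (the policy-group, vserver, and volume names here are made up; on 7-mode the rough equivalent would be FlexShare via the "priority" command):

    qos policy-group create -policy-group grp_cap_bully -vserver vs1 -max-throughput 3000iops
    volume modify -vserver vs1 -volume vol_bully -qos-policy-group grp_cap_bully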
Our workloads are mostly ESX VMs using the filers as NFS datastores.
We have the usual budgetary / purchasing-cycle constraints, so we're trying to minimize pain for as long as possible until we can add resources.
How do folks out there handle this?
Thanks,
Ray
[1] The controller is obviously still a shared resource.