2016-06-09 06:13 AM
2016-06-09 06:36 AM
IMHO, an SVM is a management container best used for data that shares the same authentication sources (AD, LDAP, etc.) and the same storage administrators. With application/script access, it sometimes becomes necessary to create separate SVMs to avoid administrative access issues - i.e., if your DBAs have the rights to create, clone, and destroy volumes on their SVM, you probably don't want to give them those same rights on your Exchange data, and vice versa.
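As a rough sketch of that delegation model in the ONTAP 9 CLI (the SVM name "svm_oracle" and AD group "CORP\dba-admins" here are hypothetical examples, not from the original post):

```
# Hypothetical names: svm_oracle (the DBA-owned SVM) and CORP\dba-admins (AD group).
# Grant the DBA group SSH access to their own SVM with a volume-scoped predefined role:
security login create -vserver svm_oracle -user-or-group-name "CORP\dba-admins" -application ssh -authmethod domain -role vsadmin-volume
```

Because the login and role are scoped to svm_oracle, that group gets no rights on, say, an Exchange SVM - which is exactly the separation described above.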
If you need to report on and monitor application-based groups of storage, I suggest using OnCommand Insight. OCI lets you annotate your storage by application to give you show-back/charge-back, capacity, performance, and even performance anomaly detection across storage AND (virtualized) hosts.
You are correct that actual performance and capacity management and troubleshooting isn't done at the SVM level; it's done at the hardware level. The exception might be the software-defined versions of ONTAP (Cloud or Edge/Select), where you may not have access to the underlying hardware statistics but instead have a "bucket" assigned to your SVM. In that case, you would have to take the SVM-level stats to your underlying provider (AWS, Azure, or even your server/blade/network teams) when troubleshooting. I will point out that OCI also has some ability to work with cloud providers, but I don't have experience using it for this yet.
2016-06-10 03:30 AM
Very well said.
As a NetApp engineer, I can't always accept a manager's perspective: when he sees pretty performance graphs organized by SVM, his immediate reaction is to divide applications across SVMs. To me those graphs are not very useful - they are superficial, and they don't reflect the true reasons to use SVMs.
As you said, we use SVMs in situations with different authentication, different domains, or different administration roles. None of those apply here.