We are virtualizing an EMC Clariion SSD array with our NetApp v3240. On the Clariion, all disks are in a single Clariion RAID group, and that RAID group is carved into 16 LUNs. The Clariion LUNs are presented to our NetApp v3240, and we then created a NetApp aggregate made up of the Clariion LUNs. The NetApp aggregate consists of two 8-disk RAID groups.
Is it OK to have a two-RAID-group NetApp aggregate that is made up of LUNs from a single RAID group on a third-party array?
The best practices guide says to use a dedicated array RAID group, which we are doing, but it doesn't appear to mention anything about how those LUNs should be distributed within the NetApp aggregate (single RAID group, multiple, doesn't matter?).
With conventional spinning disks, placing LUNs backed by the same array RAID group into the same aggregate will lead to performance issues. We stripe writes across all the devices in the aggregate at the same time, so if any of those devices share the same spindles, you get a hot spot, or congestion point, due to disk contention. This isn't much of a concern for SSDs, obviously. But for spinning disks your best bet is to have two LUNs per array RAID group, one presented to each V-Series controller. That way no aggregate is limited unnecessarily by disk I/O.
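To make the contention point concrete, here is a minimal sketch (hypothetical toy model, not NetApp code): each write is striped across every LUN in the aggregate, and we simply count how many stripe chunks land on each physical spindle group. The LUN and RAID-group names are made up for illustration.

```python
# Toy model: why aggregate LUNs backed by the same spindles become a hot spot.
from collections import Counter

def spindle_load(lun_to_spindle, writes=100):
    """Count stripe chunks per spindle group when each write hits every LUN."""
    load = Counter()
    for _ in range(writes):
        for lun, spindle in lun_to_spindle.items():
            load[spindle] += 1          # one stripe chunk per LUN per write
    return load

# Good layout: each LUN on its own array RAID group -> load spreads evenly.
even = spindle_load({"lun0": "rg0", "lun1": "rg1", "lun2": "rg2", "lun3": "rg3"})

# Bad layout: all four LUNs carved from one array RAID group -> one queue.
hot = spindle_load({"lun0": "rg0", "lun1": "rg0", "lun2": "rg0", "lun3": "rg0"})

print(even)  # 100 chunks per RAID group
print(hot)   # all 400 chunks queued on rg0: a contention point
```

On SSDs the shared "spindle" has no seek penalty, which is why the layout matters far less there.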
Thanks for the response. Just to clarify, I am asking whether we need a unique RAID group on the NetApp for each RAID group on the array. Is the diagram below OK? The best practices don't seem to touch on how the third-party array LUNs should be distributed within a single NetApp aggregate. In other words, can the LUNs from a single third-party-array RAID group be placed in multiple NetApp-aggregate RAID groups?
I'll throw this out there since you have been so helpful. We are using a v3240 to virtualize a Clariion CX4-120. The CX4 has a 500 MB write cache. Would there be any benefit to disabling the write cache on the Clariion? One of our suspicions is that NVRAM is filling up the Clariion's write cache, which causes the Clariion to go into forced flushing, which in turn causes latency. The thought is that disabling the Clariion's small write cache may improve things.
I looked for a best practice regarding write cache on the third-party array but couldn't find anything.
If you are only writing to that SSD-backed aggr, then absolutely! Writing to those SSDs ought to be fast enough.
And just to clarify, since this is a rather common misunderstanding: we only *log* writes to NVRAM. There is no user data in NVRAM, just enough data that we can rebuild the pending writes should power be lost. What we are writing from when we take a consistency point (the CP in sysstat) is system memory, not NVRAM.
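The distinction can be sketched like this (a hypothetical toy model, not ONTAP internals): incoming writes are buffered in system memory and only *journaled* to NVRAM; at a consistency point the buffered data goes to disk from memory, and the journal is discarded. The journal exists solely so pending writes can be replayed after a power loss.

```python
# Toy model of write journaling vs. consistency points (not real ONTAP code).
class Controller:
    def __init__(self):
        self.memory = {}     # buffered user data: what a CP actually writes from
        self.nvram_log = []  # journal entries: enough to rebuild pending writes
        self.disk = {}

    def write(self, block, data):
        self.memory[block] = data
        self.nvram_log.append((block, data))  # a log record, not the write path

    def consistency_point(self):
        self.disk.update(self.memory)  # flush from system memory, not NVRAM
        self.memory.clear()
        self.nvram_log.clear()         # journal is obsolete once data is on disk

    def replay_after_power_loss(self):
        # System memory is gone; rebuild pending writes from the journal.
        for block, data in self.nvram_log:
            self.disk[block] = data

c = Controller()
c.write("b1", "x")
c.consistency_point()          # b1 reaches disk from memory; log is cleared

c.write("b2", "y")
c.memory.clear()               # simulate power loss before the next CP
c.replay_after_power_loss()    # b2 is recovered from the journal
```

Either way the data lands on disk; NVRAM's job is recovery, not serving as the write buffer.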
Daniel Isaacs Technical Marketing Engineer V-Series
And thanks for the clarification on NVRAM. I've totally been using NVRAM in the wrong context for quite some time. So, when I hear references to "two buckets" of NVRAM, they are really referring to "two buckets" of system memory. Is that correct?