2011-08-04 10:22 AM
I've read a few of the design guides for MetroCluster and I can't find clear language about local writes at each DC. I'm trying to compare the offering to VPLEX, which provides local writes to a LUN on both sides and handles the syncs somehow (magic - I'm relatively new to the storage world).
Can I provision a single LUN with MetroCluster and keep the write I/O from hosts in each DC local to their respective DC?
2011-08-08 08:19 AM
I think I can help here. Conceptually, MetroCluster is aggregate-level mirroring: the aggregates that contain volumes, qtrees, and LUNs are mirrored wholesale, with no intervention required from the administrator once the storage pools are set up properly. The answer to your last question is "no", because in MetroCluster you always speak to "local" and "remote" data; in a sense, doing what you describe is anathema to the purpose of MetroCluster. Essentially, half the disks at one DC are assigned to one controller and the other half to the alternate side, and mirroring to the other side happens based on these pools. Writes happen locally and are then committed to the other side over the FC SAN, and both writes must succeed or the cluster is invalidated and recovery operations become necessary. Does this make sense? Not sure this explanation was great, so let me know if it seems unclear.
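To make the "both writes must succeed" behavior concrete, here's a rough sketch of a synchronous mirrored write path in Python. All of the names (`Plex`, `mirrored_write`, `MirrorDegradedError`) are invented for illustration - this is not NetApp's actual code, just the shape of the protocol described above.

```python
# Sketch of a synchronous mirrored write: the host's write is acknowledged
# only after BOTH the local and the remote copy have committed it.
# Names are hypothetical, for illustration only.

class Plex:
    """One copy of a mirrored aggregate (the pool of disks at one site)."""
    def __init__(self, site):
        self.site = site
        self.blocks = {}

    def write(self, block_id, data):
        self.blocks[block_id] = data
        return True


class MirrorDegradedError(Exception):
    """Raised when one side of the mirror cannot commit the write."""


def mirrored_write(local, remote, block_id, data):
    # 1. Commit the write on the local plex first.
    if not local.write(block_id, data):
        raise MirrorDegradedError("local write failed")
    # 2. Ship the same write over the inter-DC FC link to the remote plex.
    #    The host is not acknowledged until this also succeeds - that is
    #    what makes the mirror synchronous.
    if not remote.write(block_id, data):
        # In the real product a failure here triggers recovery operations,
        # not just an exception.
        raise MirrorDegradedError("remote write failed; mirror invalidated")
    return "ack"


local = Plex("DC-A")
remote = Plex("DC-B")
mirrored_write(local, remote, 42, b"payload")
assert local.blocks[42] == remote.blocks[42] == b"payload"
```

The key design point is that the acknowledgment waits on the remote commit, so the mirror can never silently diverge - at the cost of adding the inter-DC round trip to every write.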
2011-08-09 10:31 AM
So if both writes need to be confirmed (one at each DC), and if the same is true for VPLEX, does it matter what order the operations happen in? Your write speed is constrained by inter-DC latency in either case. Is that true?
2011-08-09 12:26 PM
You are correct - it doesn't matter what order they happen in, although I believe the local node writes the data first and it then gets mirrored to the remote node. And yes - as in any synchronous mirroring setup, your latency is influenced by the single-mode fiber connection and switching overhead. I can tell you from personal experience with MetroCluster: it just works, and latency is not an issue, even in special testing beyond the established technically supported limits.
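For a back-of-envelope feel for that fiber latency: light in single-mode fiber propagates at roughly 200,000 km/s, i.e. about 5 µs per km one way, and a synchronous write pays one round trip to the remote site on top of the local commit. The local-write and switching figures below are illustrative assumptions, not measured MetroCluster numbers.

```python
# Back-of-envelope latency model for a synchronous mirrored write.
# The ~5 us/km figure is physics (speed of light in fiber, ~200,000 km/s);
# the local-write and switch-overhead defaults are illustrative guesses.

FIBER_DELAY_US_PER_KM = 5.0  # one-way propagation in single-mode fiber


def sync_write_latency_us(distance_km, local_write_us=200.0,
                          switch_overhead_us=20.0):
    """Host-visible latency: local commit plus one round trip to the
    remote site plus fixed switching overhead."""
    round_trip_us = 2 * distance_km * FIBER_DELAY_US_PER_KM
    return local_write_us + round_trip_us + switch_overhead_us


for km in (1, 50, 100):
    print(f"{km:>4} km -> ~{sync_write_latency_us(km):.0f} us per write")
```

With these assumed numbers, the inter-DC round trip only starts to dominate the local write time somewhere past a few tens of kilometers, which is consistent with metro-distance synchronous mirroring being practical.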
If you don't need synchronous writes, then asynchronous SnapMirror would do the job nicely.