
Mapping/Mounting external resources after failover

glen_eustace

We are beginning our SRM design/deployment, and I am hoping that what I am asking is a dumb question, but even if it is, I cannot find an appropriate answer.

All of our production data is on an N-Series 6040 and is being mirrored to another 6040 at another site.

In the event of a disaster, we aim to use SRM to move all the VMs to this second site as well. As far as we can see, SRM takes care of breaking the replication relationships, mounting the mirrored datastores on the ESX cluster at the other site, and bringing up the VMs. This all seems fine when the data these VMs use is also within the VMware environment, i.e. in VMDK files etc.

My issue is with the CIFS shares and NFS mounts that are not controlled by VMware. They have been replicated to the other site, but the filer IP addresses are different and the mirrored volumes were not necessarily even called the same thing - I have since gone through and renamed all the mirrors (DFPM's conventions make the names all different :-).

It would seem that we will need to add some scripting to break these mirrors so that the volumes can be mounted, but how do we get the VMs to mount them?
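To illustrate what I mean (the filer, volume and share names here are purely examples), a Linux VM currently mounting

mount -t nfs site1-filer:/vol/something_t1a /data

or a Windows VM mapping

net use X: \\site1-filer\share_t1a

would presumably need to be pointed at the Site 2 filer's different address, and at whatever the renamed mirror volume or share is called, after failover - and I can't see how SRM itself would do that for us.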

I am hoping this is a common scenario with a well-known solution/process. What are others doing with SRM?

The other, similar issue is with database ODBC connectors. Our SQL cluster and its failover partner are still on physical hardware (the LUNs are on N-Series hardware). Once moved with SRM, the VMs need to access databases that are on a different server.

Any suggestions or pointers to appropriate literature much appreciated.


glen_eustace

After much head scratching, the solution I believe we will implement is as follows.

1. Create a new private (non-routed) storage network at each site, using the same subnet at both sites.

2. Add a second NIC to all the VMs that are to be included in SRM protection groups and that mount N-Series shares or volumes. Reconfigure those servers to use the new network for access to the storage.

3. Add another VLAN to the filers at both sites, ensuring that the filers have the same addresses on the new private network, i.e. filer A at Site 1 shares the same address as filer A at Site 2 (see the example commands after these steps).

4. Provided volumes and shares have the same names at Site 2 when they are needed, mounts should succeed, as the VM servers don't know they are no longer at the primary site.
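For step 3, the filer-side configuration I am picturing is along these lines (interface name, VLAN ID and address are only examples, not our real values), run identically on the filer at each site:

vlan create e0a 200
ifconfig e0a-200 192.168.200.10 netmask 255.255.255.0

That way the VMs always reach 'their' filer at the same address on the private storage network, whichever site they are running at.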

Our primary volumes are named 'something_t1a'; the corresponding secondary is then 'sm_something_t1a'. As part of doing an SRM test, we intend to clone 'sm_something_t1a' to 'something_t1a' at Site 2. After the test we can then delete the cloned volumes. The private storage network will be included in the SRM Test Bubble.
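For the test itself, I am thinking of something like this on the Site 2 filer (this assumes a FlexClone licence; the snapshot name is just a placeholder):

vol clone create something_t1a -b sm_something_t1a <latest_snapmirror_snapshot>

and once the test is finished:

vol offline something_t1a
vol destroy something_t1a -f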

In the case of a real disaster, we would break the mirrors and rename the secondary volumes from 'sm_something_t1a' to 'something_t1a' at Site 2.
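In command terms, roughly (using our naming convention as the example):

snapmirror quiesce sm_something_t1a
snapmirror break sm_something_t1a
vol rename sm_something_t1a something_t1a

plus re-creating the NFS exports and CIFS shares for the renamed volume.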

We haven't tried any of this yet, but in theory it should do what we want (I hope).

aborzenkov

Another possibility is to use the same volume names on both sites (which makes it easier in the real disaster case). For testing, you would clone something_t1a into test_something_t1a and then export it as:

exportfs -o actual=/vol/test_something_t1a /vol/something_t1a

For CIFS, you would share the cloned path instead of the original (under the original share name).
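A minimal sketch of the CIFS side, using the volume names from this thread and an example share name:

cifs shares -add share_t1a /vol/test_something_t1a

i.e. the share keeps its production name but points at the clone. The exportfs rule above can likewise be made persistent with a matching -actual entry in /etc/exports.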

The idea with private storage is quite clever!

glen_eustace

Being relatively new to N-Series/NetApp, it is annoying that one doesn't know what one doesn't know!!

I was aware that a CIFS share could be mapped to a volume that didn't have the same name as the original, but I wasn't aware that using -actual could do the same trick with NFS.

We will need to give some thought to which should use the original name, the test or the failover. At the moment, having sm_ at the front of the volume names very quickly identifies the SnapMirror volumes at Site 2. We have also used an fc_ prefix to identify FlexCache volumes easily, having already had a case where someone was mounting the wrong volume.

Our design team will need to bat this one around a bit to see which provides the best solution.
