I'm about to implement VMware SRM between two data centers with NetApp storage at both sites. Does anyone have any experience, good or bad, with SRM installation, deployment or configuration that will help me avoid any pitfalls?
We are at the same point where you started about two years ago. Can you shed some light on how the implementation went?
Did it work well? Did it work at all?
My main concern is that we need to replicate a 1 TB volume from our main site in Austin to our DR site in California.
The replication (SnapMirror) rate is about 11 GB per hour over WAN acceleration (Riverbed), and I worry that it would take too much time to replicate changes. I would prefer to replicate ONLY the relevant guest machines, without changing my current datastores. Does anyone know of such tools?
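For scale, here is the arithmetic on those numbers (the 1 TB and 11 GB/hour figures are from the post above; only the division is mine):

```shell
# Back-of-envelope baseline transfer time at the stated SnapMirror rate
hours=$(awk 'BEGIN { printf "%.0f", 1024 / 11 }')      # 1 TB = 1024 GB, 11 GB/hour
days=$(awk 'BEGIN { printf "%.1f", 1024 / 11 / 24 }')
echo "baseline: ~${hours} hours (~${days} days)"        # prints: baseline: ~93 hours (~3.9 days)
```

So the one-time baseline is nearly four days at that rate; after that, only the changed blocks per update need to fit in your transfer window.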
Definitely not supported. I have used this trick for a while. My key concerns with it are that a USB drive can drop bits (though so could tape), and while blocks on a NetApp controller are RAID-DP protected, lost-write protected, checksummed, etc., none of that applies to the image file once it is written to other media. So a swing system is always our recommendation... but this is for when that's not possible.
Some use cases we have seen:
- Like you said, no tape and no swing system (although, as mentioned in the forum, you need room to write the base mirror as a single file).
- You have a swing system that doesn't support dedup, or doesn't support dedup at the size of the source volume (e.g. a FAS270). You can smtape from the source to a file on the swing system -- even past that system's dedup size limit, since it writes a SnapMirror image as an individual file -- then copy that image back on the target. In other words, you can use an older FAS like a USB drive, but you need 2x the space on each side to hold the image.
- NDMP backup is very slow (millions of files) and the backup software doesn't support smtape, or NFS/CIFS backup is slow. You can image the volume to one file and then back that file up. There's no single-file restore, but it's still a way to get a backup, with a more painful restore.
I'll post my example from the vsim, but with the new smtape commands, since snapmirror store and retrieve were part of 7G ONTAP when the other post was written.
Example with the 8.1 7-Mode VSIM
# create the test volume
vsim-7m-1> vol create test aggr1 2g
# write a file to the volume just to show it arrives on the target
vsim-7m-1> wrfile /vol/test/file1
# create the smtape target volume (could be any volume with room though)
vsim-7m-1> vol create smtape_tgt aggr1 4g
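From here, the round trip would look something like the following. This is a sketch from memory, not a verified transcript: the target volume name (smtape_tgt), the image filename, and the use of a file path as the smtape destination (the unsupported trick discussed above) should all be checked against your release with smtape help.

# back the volume up as a SnapMirror image file on the target volume
vsim-7m-1> smtape backup /vol/test /vol/smtape_tgt/test.img
# copy the image file to the destination system, then restore it
# into a restricted volume
vsim-7m-1> vol create test_restore aggr1 2g
vsim-7m-1> vol restrict test_restore
vsim-7m-1> smtape restore /vol/test_restore /vol/smtape_tgt/test.img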
How about shipping the DR kit to the primary site and doing the baseline replication over the LAN? Or, if that isn't feasible, how about a swing filer (a 3rd box): do the baseline transfer from the primary to it, then again from the swing kit to the DR site?
It is hard to beat the bandwidth of the moving truck carrying lots of disks!
Igal, did you get an answer to this problem? I'm curious to hear more about your network - capacity, latency, other applications sharing the network. There might be a way to use the network more efficiently.
You mention an ISP and alarms go off in my head. What are your packet loss rates like?
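The reason loss matters: per-flow TCP throughput is roughly capped by the Mathis estimate, throughput ≈ MSS / (RTT × sqrt(loss)). The MSS, RTT, and loss figures below are illustrative assumptions for an Austin-to-California path, not measurements from this thread:

```shell
# Mathis model: per-flow TCP ceiling = MSS / (RTT * sqrt(p))
# Assumed values: MSS = 1460 bytes, RTT = 60 ms, packet loss p = 0.1%
mbps=$(awk 'BEGIN { mss = 1460 * 8; rtt = 0.060; p = 0.001;
                    printf "%.1f", mss / (rtt * sqrt(p)) / 1e6 }')
echo "per-flow TCP ceiling: ~${mbps} Mbit/s"   # prints: per-flow TCP ceiling: ~6.2 Mbit/s
```

Even 0.1% loss on a 60 ms path caps a single stream at a few Mbit/s, which is why WAN accelerators and multiple parallel transfers make such a difference on lossy ISP links.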
So... that's a rather big question, as it involves a lot of moving parts. Some questions to ask (or think about) are...
NFS, iSCSI, or FCP? That particularly affects the overhead needed for SnapMirror (LUNs require 2x + delta by default; you can thin provision there with a lower fractional reserve, but you need to understand how thin provisioning works -- fractional reserve, snap autodelete, volume autogrow, etc.).
Licenses -- you need SnapMirror and FlexClone at a minimum.
VMs on multiple controllers of an active/active pair -- basically not supported. A protection group in SRM can't have VMs with vmdk's on multiple controllers.
VMs spread between iSCSI and NFS (most common when implementing SnapManager for SQL/Exchange, where the data disks live on FC/SAS disk connected via iSCSI while the OS/swap may live on SATA connected via NFS).
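On the thin-provisioning point above, the usual 7-Mode knobs look roughly like this. The volume name is hypothetical and the exact option syntax should be double-checked against your ONTAP release:

# hypothetical datastore volume "vmware_ds"
filer> vol options vmware_ds fractional_reserve 0
filer> snap autodelete vmware_ds on
filer> vol autosize vmware_ds -m 1200g -i 50g on

The idea is to drop the 2x LUN reservation and let snap autodelete and volume autogrow act as the safety nets instead -- but only after you understand the order in which they trigger.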