Tech ONTAP Blogs
Sure, you can Storage vMotion your data from one SAN box to another. That’s fine if you want to migrate a single VM; it just takes a while. But what if you want to move a couple of datastores? What if you have to move a couple of hundred terabytes, or a couple of hundred VMs? Then this simple task rapidly becomes a burden, right?
Well, what if there was a VERY quick solution for this? What if it could be done in seconds, without the potential performance impact and risky cutover moments? Just as simple as 1-2-3 and done? Well… There is!
Sure, in the end all data has to be transferred between the arrays, but since the NetApp array takes care of that in the background, there is no issue there. It just takes some time between the steps, with no intervention required and no admin hassle.
The steps? 1. Add the new array’s connectivity to your VMware hosts. 2. Set up replication from your source to the destination array. 3. Pick your moment and do the cutover. Done! And yes, some cleanup is required afterwards, but that’s something you can do at your own pace.
To be fair, even though this migration is very easy, it’s not quite a 3-click migration. So let’s walk through the steps.
Before we start, the initial configuration shows two IP addresses for connectivity to the iSCSI datastore. As you can see below, the target with IP address 192.168.0.133 is serving the LUN, with the secondary controller in ONTAP providing redundancy on IP address 192.168.0.132.
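If you want to verify the current paths from the ESXi shell instead of the GUI, a check could look roughly like this (the adapter name and device ID below are placeholders for your environment):

```shell
# List the multipathing paths for the datastore's backing device.
# Replace the naa ID with your own device identifier, visible in the
# datastore's device backing details in vCenter.
esxcli storage nmp path list --device=naa.<your-device-id>

# In this setup, the path via 192.168.0.133 shows as the active (I/O)
# path, with 192.168.0.132 as the redundant path.
```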
Note: For SnapMirror active sync to work, an ONTAP Mediator is required. Setting up the ONTAP Mediator is beyond the scope of this blog, but the documentation can be found here: https://docs.netapp.com/us-en/ontap/mediator/mediator-overview-concept.html
Step 1 – Add connectivity to your new array
From a VMware perspective, this is as easy as adding a new iSCSI target to your hosts. You can use whatever method you prefer, as long as the new array is able to serve data to your hosts. In this case, we are using iSCSI, and the four IP addresses are shown below. The primary (current) array is serving the 192.168.0.132 and 192.168.0.133 addresses; the new array is serving the other two. I’ve simply added them to the ‘Dynamic Discovery’ section of the iSCSI initiator on my VMware hosts.
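The same Dynamic Discovery entries can be added from the ESXi shell. A sketch, assuming the software iSCSI adapter is vmhba64 (check yours with `esxcli iscsi adapter list`) and the new array serves the .142/.143 addresses:

```shell
# Add the new array's iSCSI targets to dynamic discovery.
esxcli iscsi adapter discovery sendtarget add --adapter=vmhba64 --address=192.168.0.142:3260
esxcli iscsi adapter discovery sendtarget add --adapter=vmhba64 --address=192.168.0.143:3260

# Rescan so the host logs in to the newly discovered targets.
esxcli storage core adapter rescan --adapter=vmhba64
```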
Step 2 – Replicate your data
This is even simpler than you would expect. In System Manager (or in the BlueXP Advanced View), go to the Protection -> Overview tab, where you’ll find the button to “Protect for business continuity”. Once you click that, you end up in the SnapMirror active sync wizard, which is really easy to complete.
Simply select to create a new Consistency Group, which is the group of volumes (datastores) you want to migrate to the new array. In that Consistency Group, choose the volumes you want to migrate. There is a golden rule here: if a VM spans multiple datastores, make sure all of those volumes are in this group. Do not spread them across different Consistency Groups. Once you’ve created the group, simply select the Storage Virtual Machine on the destination array you want to use. If required, you can change the performance profile on the destination array, but let’s leave that as is.
There are two important things to consider. First of all, make sure you check the ‘Replicate initiator groups’ box. This replicates the initiator groups, making sure your ESXi hosts can successfully connect to the datastore on the new array. *NOTE: Make sure you remove this replication before deleting the relationship in step 4, or you WILL LOSE connectivity. To prevent this potential scenario, you can also create the initiator group by hand.*
The second important thing is to change the Protection Policy. We normally use a Duplex setup for this configuration, where both arrays serve the same data simultaneously, but that is not our goal now. As this is a migration, we want to choose the moment of failover ourselves. To do that, we select “AutomatedFailover” as the Protection Policy. This makes sure all I/O runs through the original array until we execute the actual cutover. The protection is set up as soon as you click ‘Save’.
After you click Save, the replication is created and initialized.
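If you prefer the ONTAP CLI over the wizard, the equivalent relationship can be sketched roughly as follows. The SVM, Consistency Group, and volume names here are placeholders for your own environment:

```shell
# On the destination cluster: create a SnapMirror active sync
# relationship for a Consistency Group containing the datastore
# volumes, using the AutomatedFailover policy chosen in the wizard.
snapmirror create -source-path svm_src:/cg/cg_migrate \
  -destination-path svm_dst:/cg/cg_migrate_dst \
  -cg-item-mappings ds01:@ds01_dst,ds02:@ds02_dst \
  -policy AutomatedFailover

# Start the baseline transfer and bring the relationship in sync.
snapmirror initialize -destination-path svm_dst:/cg/cg_migrate_dst
```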
On the destination array, you can see this relationship being created and initialized.
After setting up and initializing the replication in the previous step, the data will be replicated to the new array. Depending on the amount of data, this can take a while, so perhaps a small coffee or Wordle break is a good idea. Once the data is replicated, we can open System Manager (or Advanced View in BlueXP) on the new array. Go to the Protection -> Relationships menu, and verify the SnapMirror active sync relationship is healthy and in sync.
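The same health check is available on the CLI of the new (destination) cluster; the path below matches the placeholder names used if you created the relationship by hand:

```shell
# Verify the relationship is healthy and in sync before proceeding.
snapmirror show -destination-path svm_dst:/cg/cg_migrate_dst \
  -fields state,status,healthy
```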
As soon as the relationship is in a ‘Healthy’ and ‘In sync’ state, a simple rescan of the storage adapters on the ESXi hosts will reveal the new paths. Once we’ve done that, we should see the new paths to the datastore in VMware, with I/O still running to the original array.
As shown in the picture above, there are now four active paths, with the original array (192.168.0.133) still serving the data.
Step 3 – Choose your cutover moment
All that’s left now is choosing your cutover moment. This can be smack in the middle of the day, or any other time you prefer. It’s simply a matter of triggering the failover from the new array.
By going to ‘Protection’ -> ‘Relationships’ in System Manager (or the Advanced View in BlueXP), you can simply select the relationship and select ‘Failover’.
By the way, the failover does more than just activate the destination array: it also reverses the replication relationship, so the old array now becomes the target for replication.
This will take a few seconds, during which the LUN is activated on the destination array and the relationship is reversed. The relationship is therefore moved from ‘Local Destinations’ to ‘Local Sources’ on the new array.
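On the CLI, the cutover is a single command on the new (destination) cluster, again using the placeholder names from earlier:

```shell
# Trigger the planned failover; I/O moves to the new array and the
# replication direction is reversed automatically.
snapmirror failover start -destination-path svm_dst:/cg/cg_migrate_dst

# Monitor the failover until it reports as completed.
snapmirror failover show -destination-path svm_dst:/cg/cg_migrate_dst
```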
Looking at the ESXi hosts, we see that the datastore hasn’t been interrupted; it’s now simply using the new IP address for its I/O path. A quick look in the VMware GUI shows I/O on the new array, in this case 192.168.0.143.
This is it. You’re done! Data and access are migrated from the original array to the destination array, without interruption to service.
Well, almost. There is still the cleanup left, which is just as easy as the actual migration.
Step 4 - Cleanup
Cleaning up the reversed protection relationship is very easy.
However, you can NOT simply delete the relationship, because we chose to replicate the initiator groups when creating the replication. A single additional step is required first, as shown below.
Go to the “SAN initiator groups” subsection on the old array, which is now the destination of the relationship. Edit the initiator group that was replicated to the new array during the process, and remove the checkmark from the ‘Replicate initiator group’ box. The initiator group will then no longer be replicated as part of this relationship.
*NOTE: Make sure you remove this replication before deleting the relationship, or you WILL LOSE connectivity. If you are not sure, contact your NetApp technical counterpart to verify.*
Then, simply delete the relationship. Go to the Protection -> Relationships submenu, select the relationship for this migration, and select Delete. The LUN mappings will be removed, and a rescan on the ESXi hosts will show only the new array in the available paths.
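For a CLI-based cleanup, a sketch with the same placeholder names (and only after disabling the initiator group replication as described above):

```shell
# On the old array, now the destination of the reversed relationship:
# delete the relationship; this also removes the stale LUN mappings.
snapmirror delete -destination-path svm_src:/cg/cg_migrate

# On the new array (the current source): release the relationship
# metadata for the old peer.
snapmirror release -destination-path svm_src:/cg/cg_migrate
```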
Optionally, the original LUN/volume can be removed from the old array, depending on whether this is a requirement for the migration.
Fun fact: you can also use this method to migrate a datastore from a high-performance A-Series array to a capacity-optimized C-Series array if the higher performance is no longer required.
If you want to try this in a controlled environment, we’ve got a LabOnDemand setup for you, with the following configuration. Find it here: https://labondemand.netapp.com/lab/symmetricaa-hol