cDOT and Vol Move

On clustered Data ONTAP (cDOT), you can perform a volume move (vol move) not only from one aggregate to another within a node, but also from an aggregate on node1 to a destination aggregate on node2. In other words, you can do a vol move across the node boundary, and do so non-disruptively.

Below, I’ll use a very simple example to illustrate how this can be done.

Let’s say we have a two-node cluster, running version 8.2, with aggr3 inside node1 and aggr4 inside node2 (see Figure 1). Both aggr3 and aggr4 are 64-bit aggregates. We also have a Vserver, myVserver, which uses aggr3 and aggr4 (see Figure 2). If you need to know the details of how this step is done, please follow the link here.
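For reference, the layout above can also be verified from the clustershell. The commands below are a sketch; the cluster name (cluster1) in the prompt is an assumption, and the exact output fields vary by ONTAP version:

     cluster1::> storage aggregate show -aggregate aggr3,aggr4
     cluster1::> vserver show -vserver myVserver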

     Figure 1. Two aggregates: aggr3 in node1 and aggr4 in node2.

     Figure 2. A vserver: myVserver, uses both aggr3 and aggr4.

Next, let’s create a 10GB FlexVol volume, vol_10g, on aggr3 and create an 8GB LUN, my8GB.lun, inside vol_10g. We then map the LUN to a Windows host, in this case stlrx300s6-187, via an iSCSI LIF (see Figure 3). Note, the detailed steps of how to create an iSCSI LIF can be found here.
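If you prefer the command line, the volume and LUN steps can be sketched from the clustershell roughly as follows. This is an illustration, not the exact commands used here: the igroup name (win_igrp) and the ostype value are assumptions, and the igroup must already exist and contain the host’s iSCSI initiator:

     cluster1::> volume create -vserver myVserver -volume vol_10g -aggregate aggr3 -size 10g
     cluster1::> lun create -vserver myVserver -path /vol/vol_10g/my8GB.lun -size 8g -ostype windows_2008
     cluster1::> lun map -vserver myVserver -path /vol/vol_10g/my8GB.lun -igroup win_igrp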

     Figure 3. The mapping of the 8GB LUN inside vol_10g.

Now, on the Windows host, we format the 8GB LUN and assign drive letter Y: to it (see Figure 4). To demonstrate the non-disruptive aspect of the vol move operation, let’s run Iometer while the vol move is in progress. In our case, the Iometer test file is 5GB in size; the IO pattern is 70% read, 30% write, 100% random; and the IO transfer size is 8KB. Finally, let’s use perfmon to monitor the whole process. Now we are ready to go.

     Figure 4. The formatting of the 8GB LUN.

First, we start Iometer and let it run for a couple of minutes. Then, we initiate the vol move process (see Figure 5). In this case, we move vol_10g from aggr3 inside node1 to aggr4 inside node2.
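From the clustershell, the move itself is a single command. Note that you specify only the destination aggregate; the cluster already knows which aggregate the volume currently lives on (the cluster1 prompt is an assumption):

     cluster1::> volume move start -vserver myVserver -volume vol_10g -destination-aggregate aggr4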

     Figure 5. The vol move operation.

Figure 6 is a perfmon log chart that captures the entire 10-minute Iometer run. It shows that Iometer was running before the vol move started, and kept running during and after the operation.

     Figure 6. The perfmon log showing the 10-min Iometer run and the 2-min vol move.

The vol move operation took about two minutes to complete, from 9:06:42 PM to 9:08:42 PM. Figure 7 shows a zoomed-in view of how Iometer was running during the vol move process. Note that there is a brief period (about 10 seconds) with no IO. This is expected, because there is a brief IO “freeze” during the vol move process in order to transfer the identity of the source volume to the destination volume.

     Figure 7. The zoom-in view of Iometer running during the vol move operation.

Figure 8 shows the final (successful) status of the vol move operation, together with the destination where vol_10g ended up, namely aggr4 in node2.
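The progress and final state of the move can also be checked from the clustershell, for example:

     cluster1::> volume move show -vserver myVserver -volume vol_10g
     cluster1::> volume show -vserver myVserver -volume vol_10g -fields aggregate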

     Figure 8. The final status of the vol move operation.

Thanks for reading.



Do you know if there's a way to throttle the vol move operation so it doesn't overwhelm either the source or the destination aggregate? I've tried QoS, but with no success.

Regarding the QoS policy – I’ve read elsewhere recently that QoS policies apply only to client-initiated workloads, not to system-initiated workloads such as vol move.