
ONTAP Discussions

How to migrate a 32-bit volume with qtree and non-qtree data to a 64-bit volume

JLundfelt

Hi

 

I have an 11TB CIFS-based NTFS volume with 4 qtrees on a 7.3.7 controller that I need to migrate to another controller. The problem I am having is that

 

if I use QSM-

 

Invoke-NaSnapmirrorInitialize -Source 172.16.1.1:/vol/Groups/- -Destination irv-gdc-san1a:/vol/Groups/qtree

 

I only get the base contents of the volume, and not the other qtrees that are commingled with folders in the base of the CIFS share's local path (/vol/Groups)

 

if I use VSM-

 

The volume would be replicated with all data and qtrees, but it would remain 32-bit.
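
For reference, the volume-level (VSM) initialize would look something like this (assuming the destination volume already exists and is restricted):

# whole-volume mirror: carries every qtree plus the non-qtree data in one relationship
Invoke-NaSnapmirrorInitialize -Source 172.16.1.1:Groups -Destination irv-gdc-san1a:Groups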

 

Part of the issue I have is that this is over 10TB / 10 million files. I already have the QSM relationship for the base contents of the volume. If I set up additional QSMs, they would sit adjacent to the base contents, and a single CIFS share could not present them all under one path. As part of the cutover plan, I am going to be aliasing the IP address, since we have both Windows and Unix clients accessing this data from god knows where. Anyone have any suggestions? I am trying to make this a single cutover event where the outage is less than an hour, and from previous experience, if I move the contents from the base QSM (irv-gdc-san1a:/vol/Groups/qtree --> irv-gdc-san1a:/vol/Groups) it could take hours, and I wouldn't be able to maintain permissions unless I used robocopy with the /MIR and /SEC switches.
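
For reference, the robocopy fallback I'm trying to avoid would be something along these lines (the UNC paths and log file are placeholders):

robocopy \\oldfiler\Groups \\newfiler\Groups /MIR /SEC /R:1 /W:1 /LOG:C:\temp\groups_mirror.log

/MIR mirrors the whole tree (including deletions) and /SEC copies the NTFS ACLs along with the files, which is exactly why it takes forever against 10 million files.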

 

Any suggestions? TIA!

-jon

1 ACCEPTED SOLUTION

dirk_ecker

The VSM destination vol won't change until you break the snapmirror relationship.

 

For testing purposes you can do the following:

 

- create a new vol on the source 7.3.7 controller

- set up a VSM relationship to the FAS3140

- running "vol status" should show 32-bit now

- break the relationship

- wait a few minutes, then run "vol status" once again

 

 


23 REPLIES

JLundfelt

Yeah, I just opened a case. I think the source controller is definitely busy, although it's not as simple as just the CPU utilization. I've been trying to do a snap list, let alone a snap delta, against the source volume in question for the past 30 minutes and am still waiting, fingers crossed that it doesn't panic the controller. I can't enumerate the snaps with the shell, PowerShell Toolkit, or FilerView (the source controller is a FAS3140 running 7.3.7), and it will hopefully be decommissioned as soon as I can get a defined cutover schedule.

 

Thanks,

Jon

richard_payne

Ouch, that sounds like one busy toaster. If it's not CPU, it could easily be disks; sysstat -x 1 will give you some idea (though the Disk util figure is the busiest disk, not an average).

 

If you're running dedupe, you may also want to look at the schedules. I know NetApp calls it a background process, but it still drives I/O on the disks, and we sometimes see an improvement in filer performance by stopping the scans until the load dies down.
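
Something like this on the source console (the volume path is just an example):

sysstat -x 1              # Disk util column is the busiest disk, not an average
sis status /vol/Groups    # check whether a dedupe scan is active on the volume
sis stop /vol/Groups      # pause it; "sis start /vol/Groups" kicks it off again later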

 

--rdp

JLundfelt

So no dedupe, here are the metrics-

 

IRV-SAN-NA1*> vol status Groups
Volume State Status Options
Groups online raid_dp, flex maxdirsize=20971, guarantee=none,
Containing aggregate: 'aggr3'


IRV-SAN-NA1*> df -g Groups
Filesystem total used avail capacity Mounted on
/vol/Groups/ 8243GB 6986GB 1256GB 85% /vol/Groups/
/vol/Groups/.snapshot 3532GB 3342GB 189GB 95% /vol/Groups/.snapshot

 

IRV-SAN-NA1*> wafl scan status Groups
Volume Groups:
Scan id Type of scan progress
28 active bitmap rearrangement fbn 68485 of 105086 w/ max_chain_len 5
1151663 container block reclamation block 6337 of 105087 (fbn 43637)
1151664 block ownership calculation block 12495 of 105086

 

Everything was working after the initial baseline-

 

dst Mon Jul 20 16:29:57 PDT 172.16.1.1:Groups irv-gdc-san1a:Groups Request (Initialize)
dst Mon Jul 20 16:30:17 PDT 172.16.1.1:Groups irv-gdc-san1a:Groups Start
dst Thu Jul 23 19:00:00 PDT 172.16.1.1:Groups irv-gdc-san1a:Groups End (10752351408 KB)

 

and the first differential-


dst Fri Jul 24 00:10:00 PDT 172.16.1.1:Groups irv-gdc-san1a:Groups Request (Scheduled)
dst Fri Jul 24 00:10:20 PDT 172.16.1.1:Groups irv-gdc-san1a:Groups Start
dst Fri Jul 24 13:55:23 PDT 172.16.1.1:Groups irv-gdc-san1a:Groups End (44940072 KB)

 

and then it was running all weekend every 10 minutes-


dst Fri Jul 24 13:56:00 PDT 172.16.1.1:Groups irv-gdc-san1a:Groups Request (Scheduled)
dst Fri Jul 24 13:56:21 PDT 172.16.1.1:Groups irv-gdc-san1a:Groups Start
dst Fri Jul 24 15:53:05 PDT 172.16.1.1:Groups irv-gdc-san1a:Groups End (2257532 KB)
dst Fri Jul 24 16:00:21 PDT 172.16.1.1:Groups irv-gdc-san1a:Groups Start
dst Fri Jul 24 16:07:52 PDT 172.16.1.1:Groups irv-gdc-san1a:Groups End (2129884 KB)
dst Fri Jul 24 16:13:37 PDT 172.16.1.1:Groups irv-gdc-san1a:Groups Start
dst Fri Jul 24 16:15:07 PDT 172.16.1.1:Groups irv-gdc-san1a:Groups End (81048 KB)
dst Fri Jul 24 16:20:00 PDT 172.16.1.1:Groups irv-gdc-san1a:Groups Request (Scheduled)
dst Fri Jul 24 16:20:22 PDT 172.16.1.1:Groups irv-gdc-san1a:Groups Start
dst Fri Jul 24 16:21:24 PDT 172.16.1.1:Groups irv-gdc-san1a:Groups End (39652 KB)
dst Fri Jul 24 16:30:00 PDT 172.16.1.1:Groups irv-gdc-san1a:Groups Request (Scheduled)
dst Fri Jul 24 16:30:19 PDT 172.16.1.1:Groups irv-gdc-san1a:Groups Start
dst Fri Jul 24 16:31:21 PDT 172.16.1.1:Groups irv-gdc-san1a:Groups End (84688 KB)
dst Fri Jul 24 16:40:00 PDT 172.16.1.1:Groups irv-gdc-san1a:Groups Request (Scheduled)
dst Fri Jul 24 16:40:15 PDT 172.16.1.1:Groups irv-gdc-san1a:Groups Start
dst Fri Jul 24 16:41:38 PDT 172.16.1.1:Groups irv-gdc-san1a:Groups End (57788 KB)
dst Fri Jul 24 16:50:00 PDT 172.16.1.1:Groups irv-gdc-san1a:Groups Request (Scheduled)
dst Fri Jul 24 16:50:19 PDT 172.16.1.1:Groups irv-gdc-san1a:Groups Start
dst Fri Jul 24 16:53:42 PDT 172.16.1.1:Groups irv-gdc-san1a:Groups End (767740 KB)
dst Fri Jul 24 17:00:00 PDT 172.16.1.1:Groups irv-gdc-san1a:Groups Request (Scheduled)
dst Fri Jul 24 17:00:17 PDT 172.16.1.1:Groups irv-gdc-san1a:Groups Start
dst Fri Jul 24 17:01:40 PDT 172.16.1.1:Groups irv-gdc-san1a:Groups End (70368 KB)
dst Fri Jul 24 17:10:00 PDT 172.16.1.1:Groups irv-gdc-san1a:Groups Request (Scheduled)
dst Fri Jul 24 17:10:15 PDT 172.16.1.1:Groups irv-gdc-san1a:Groups Start
dst Fri Jul 24 17:10:58 PDT 172.16.1.1:Groups irv-gdc-san1a:Groups End (45224 KB)
dst Fri Jul 24 17:20:00 PDT 172.16.1.1:Groups irv-gdc-san1a:Groups Request (Scheduled)
dst Fri Jul 24 17:20:13 PDT 172.16.1.1:Groups irv-gdc-san1a:Groups Start
dst Fri Jul 24 17:20:40 PDT 172.16.1.1:Groups irv-gdc-san1a:Groups End (27008 KB)
dst Fri Jul 24 17:30:01 PDT 172.16.1.1:Groups irv-gdc-san1a:Groups Request (Scheduled)
dst Fri Jul 24 17:30:13 PDT 172.16.1.1:Groups irv-gdc-san1a:Groups Start
dst Fri Jul 24 17:30:40 PDT 172.16.1.1:Groups irv-gdc-san1a:Groups End (50456 KB)
dst Fri Jul 24 17:40:01 PDT 172.16.1.1:Groups irv-gdc-san1a:Groups Request (Scheduled)
dst Fri Jul 24 17:40:16 PDT 172.16.1.1:Groups irv-gdc-san1a:Groups Start
dst Fri Jul 24 17:40:37 PDT 172.16.1.1:Groups irv-gdc-san1a:Groups End (20816 KB)
dst Fri Jul 24 17:50:00 PDT 172.16.1.1:Groups irv-gdc-san1a:Groups Request (Scheduled)
dst Fri Jul 24 17:50:16 PDT 172.16.1.1:Groups irv-gdc-san1a:Groups Start
dst Fri Jul 24 17:50:43 PDT 172.16.1.1:Groups irv-gdc-san1a:Groups End (1952 KB)
dst Fri Jul 24 18:00:02 PDT 172.16.1.1:Groups irv-gdc-san1a:Groups Request (Scheduled)
dst Fri Jul 24 18:00:15 PDT 172.16.1.1:Groups irv-gdc-san1a:Groups Start
dst Fri Jul 24 18:00:29 PDT 172.16.1.1:Groups irv-gdc-san1a:Groups End (1344 KB)
dst Fri Jul 24 18:10:00 PDT 172.16.1.1:Groups irv-gdc-san1a:Groups Request (Scheduled)
dst Fri Jul 24 18:10:22 PDT 172.16.1.1:Groups irv-gdc-san1a:Groups Start
dst Fri Jul 24 18:10:35 PDT 172.16.1.1:Groups irv-gdc-san1a:Groups End (644 KB)
dst Fri Jul 24 18:20:01 PDT 172.16.1.1:Groups irv-gdc-san1a:Groups Request (Scheduled)
dst Fri Jul 24 18:20:15 PDT 172.16.1.1:Groups irv-gdc-san1a:Groups Start
dst Fri Jul 24 18:20:27 PDT 172.16.1.1:Groups irv-gdc-san1a:Groups End (3504 KB)
dst Fri Jul 24 18:30:00 PDT 172.16.1.1:Groups irv-gdc-san1a:Groups Request (Scheduled)

 

Until it started having larger deltas / transfer times-


dst Fri Jul 24 18:30:09 PDT 172.16.1.1:Groups irv-gdc-san1a:Groups Start
dst Fri Jul 24 23:47:12 PDT 172.16.1.1:Groups irv-gdc-san1a:Groups End (31132 KB)
dst Fri Jul 24 23:48:00 PDT 172.16.1.1:Groups irv-gdc-san1a:Groups Request (Scheduled)
dst Fri Jul 24 23:48:17 PDT 172.16.1.1:Groups irv-gdc-san1a:Groups Start
dst Sat Jul 25 06:51:44 PDT 172.16.1.1:Groups irv-gdc-san1a:Groups End (277864 KB)
dst Sat Jul 25 06:52:00 PDT 172.16.1.1:Groups irv-gdc-san1a:Groups Request (Scheduled)
dst Sat Jul 25 06:52:22 PDT 172.16.1.1:Groups irv-gdc-san1a:Groups Start
dst Sat Jul 25 10:14:58 PDT 172.16.1.1:Groups irv-gdc-san1a:Groups End (267992 KB)
dst Sat Jul 25 10:15:00 PDT 172.16.1.1:Groups irv-gdc-san1a:Groups Request (Scheduled)
dst Sat Jul 25 10:15:18 PDT 172.16.1.1:Groups irv-gdc-san1a:Groups Start
dst Sat Jul 25 15:10:28 PDT 172.16.1.1:Groups irv-gdc-san1a:Groups End (197624 KB)
dst Sat Jul 25 15:11:00 PDT 172.16.1.1:Groups irv-gdc-san1a:Groups Request (Scheduled) 

dst Sun Jul 26 20:10:14 PDT 172.16.1.1:Groups irv-gdc-san1a:Groups Start
dst Sun Jul 26 20:10:34 PDT 172.16.1.1:Groups irv-gdc-san1a:Groups End (424 KB)
dst Sun Jul 26 20:20:01 PDT 172.16.1.1:Groups irv-gdc-san1a:Groups Request (Scheduled)
dst Sun Jul 26 20:20:15 PDT 172.16.1.1:Groups irv-gdc-san1a:Groups Start
dst Sun Jul 26 20:20:29 PDT 172.16.1.1:Groups irv-gdc-san1a:Groups End (1600 KB)
dst Sun Jul 26 20:30:02 PDT 172.16.1.1:Groups irv-gdc-san1a:Groups Request (Scheduled)
dst Sun Jul 26 20:30:19 PDT 172.16.1.1:Groups irv-gdc-san1a:Groups Start
dst Sun Jul 26 20:30:33 PDT 172.16.1.1:Groups irv-gdc-san1a:Groups End (6424 KB)
dst Sun Jul 26 20:40:00 PDT 172.16.1.1:Groups irv-gdc-san1a:Groups Request (Scheduled)
dst Sun Jul 26 20:40:08 PDT 172.16.1.1:Groups irv-gdc-san1a:Groups Start
dst Sun Jul 26 20:40:20 PDT 172.16.1.1:Groups irv-gdc-san1a:Groups End (5776 KB)
dst Sun Jul 26 20:50:00 PDT 172.16.1.1:Groups irv-gdc-san1a:Groups Request (Scheduled)
dst Sun Jul 26 20:50:09 PDT 172.16.1.1:Groups irv-gdc-san1a:Groups Start
dst Mon Jul 27 00:59:01 PDT 172.16.1.1:Groups irv-gdc-san1a:Groups End (13704 KB)
dst Mon Jul 27 01:00:00 PDT 172.16.1.1:Groups irv-gdc-san1a:Groups Request (Scheduled)
dst Mon Jul 27 01:00:19 PDT 172.16.1.1:Groups irv-gdc-san1a:Groups Start
dst Mon Jul 27 06:13:28 PDT 172.16.1.1:Groups irv-gdc-san1a:Groups End (318104 KB)

 


dst Mon Jul 27 06:14:00 PDT 172.16.1.1:Groups irv-gdc-san1a:Groups Request (Scheduled)
dst Mon Jul 27 06:14:14 PDT 172.16.1.1:Groups irv-gdc-san1a:Groups Start
dst Mon Jul 27 11:01:00 PDT 172.16.1.1:Groups irv-gdc-san1a:Groups Abort (replication transfer failed to complete)

 

I cannot do a snap list, snap delta, or snap delete on this volume; the first two time out, and the latter produces-

 

IRV-SAN-NA1*> snap delete -V Groups '2015-07-28 09_00_11 hourly_IRV-SAN-NA1_Groups'
Snapshot 2015-07-28 09_00_11 hourly_IRV-SAN-NA1_Groups is busy because of snap delta

JLundfelt

Just a quick update. I was able to resolve the issue after opening a NetApp case and getting three layers deep with tech support. They gathered a bunch of data per usual, but what finally fixed it was stopping the snapmirror service on the target controller. The symptoms had been that the source volume stayed 'busy', I got the snap delta in-use warning when trying to delete a snapshot, and snapmirror took 5-6 hours to even start transferring data, or failed outright. As soon as I stopped the service, the source created a new SnapVault-dependent snapshot, removed the dependencies on the old snapshots, and cleared the remaining ones from being 'busy'.
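
For anyone who hits this later, the sequence that cleared it was basically this (hostnames as in the outputs above; re-enable snapmirror once the busy snapshots clear):

irv-gdc-san1a> snapmirror off       # stop the snapmirror service on the destination
IRV-SAN-NA1> snap list Groups       # back on the source, the old snapshots drop their busy flag
irv-gdc-san1a> snapmirror on        # turn the service back on and let the schedule resume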

 

Thanks for everyone's help. Have a good weekend 😃

-Jon

JLundfelt

So one of the only issues I am having with this ~10TB VSM is that even though the baseline completed in about 72 hours, the differentials, which are scheduled to run every 10 minutes-

 

Source             Destination           Minutes           Hours  DaysOfWeek  DaysOfMonth
------             -----------           -------           -----  ----------  -----------
172.16.1.1:Groups  irv-gdc-san1a:Groups  0,10,20,30,40,50  *      *           *

 

end up taking anywhere between 5 and 12 hours to complete each transfer, and during that time TransferProgress shows no data being transferred-

 

BaseSnapshot : irv-gdc-san1a(0151769424)_Groups.49
Contents : Replica
CurrentTransferError :
CurrentTransferType : scheduled
DestinationLocation : irv-gdc-san1a:Groups
InodesReplicated :
LagTime : 30597
LagTimeTS : 08:29:57
LastTransferDuration : 18789
LastTransferDurationTS : 05:13:09
LastTransferFrom : 172.16.1.1:Groups
LastTransferSize : 325738496
LastTransferType : scheduled
MirrorTimestamp : 1437984008
MirrorTimestampDT : 7/27/2015 1:00:08 AM
ReplicationOps :
SourceLocation : 172.16.1.1:Groups
State : snapmirrored
Status : transferring
TransferProgress : 0
InodesReplicatedSpecified : False
ReplicationOpsSpecified : False
Source : 172.16.1.1:Groups
Destination : irv-gdc-san1a:Groups

 

I thought that VSM was faster since it doesn't operate at the file/inode level but at the block level. Is this simply an issue of the size of the source volume, with the snapmirror process taking hours to scan for changes? The current source volume is protected with DFPM, and has hourly snapshots and nightly SnapVaults to another off-site NetApp. Any thoughts on this, or should I just open a support case, since this is going to severely limit control over the timing of the cutover?

 

Thanks,

Jon

scottgelb

What is the change between snapshot updates, using the "snap delta" command? Also the output of "snapvault status -l" to see the detailed listing. It could also be something in options replication.throttle, if set at the controllers, or something in the Protection Manager job. From the controller side it would be good to see the output of those 3 commands on both sides.
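
i.e. something along these lines on both the source and destination consoles:

snap delta Groups             # rate of change between the snapshots on the volume
snapvault status -l           # detailed listing of the snapvault relationships
options replication.throttle  # shows any replication throttle options that are set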

richard_payne

I would open a case, yes. How busy are the two controllers?

 

I would expect VSM to be much faster than QSM in this case....

 

--rdp

richard_payne

"if I use QSM-

 

Invoke-NaSnapmirrorInitialize -Source 172.16.1.1:/vol/Groups/- -Destination irv-gdc-san1a:/vol/Groups/qtree

 

I only get the base contents of the volume, and not the other qtrees that are commingled with folders in the base of the CIFS share's local path (/vol/Groups)"

 

So the '-' will only get the non-qtree data, yes, but you could also set up QSM relationships for the other qtrees (4, I think you said), so you'd end up with 5 qtree snapmirrors. This would also give you the advantage of converting the non-qtree data into a qtree.
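
Roughly like this; the qtree and destination names below are only examples (you'd repeat it for each of the 4 qtrees, plus the '-' entry for the non-qtree data):

# non-qtree data gets converted into a qtree of its own on the destination
Invoke-NaSnapmirrorInitialize -Source 172.16.1.1:/vol/Groups/- -Destination irv-gdc-san1a:/vol/GroupsNew/root_data
# one relationship per existing qtree
Invoke-NaSnapmirrorInitialize -Source '172.16.1.1:/vol/Groups/qtree1' -Destination 'irv-gdc-san1a:/vol/GroupsNew/qtree1'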

 

I see you're going down the VSM route (which also makes sense)....but I don't see the problem with using QSM for this, maybe I'm missing something?

 

--rdp

JLundfelt

The problem is the way the CIFS share points at the root of the volume, while there are qtrees with data as well. If I moved them all with QSM, the folder hierarchy wouldn't work-

 

> get-naqtree *groups*

Volume  Qtree                 Status  Security
------  -----                 ------  --------
Groups                        normal  ntfs
Groups  Programming           normal  ntfs
Groups  Information Services  normal  ntfs
Groups  Product Design & Dev  normal  ntfs

 

> Get-NaCifsShare Groups

MountPoint   ShareName  Description
-----------  ---------  -----------
/vol/Groups  Groups     Document Storage

 

So the local share path would need to keep pointing at the base of the volume, while what used to be a folder in the root of the volume would now be inside a qtree.

richard_payne

I see, and the users are expecting the 4 qtrees to be under the top level share....makes sense.

 

--rdp

aborzenkov
Yes, that's possible. Do not forget that in-place conversion without increasing the aggregate size beyond 16TB is officially supported starting with 8.2.1, if my memory serves me right.

JLundfelt
What about just migrating the volume a second time to another aggregate with VSM once it is on the target controller? So the 32-bit volume on the single 7.3.7 controller gets migrated with VSM to a 32-bit volume on the target controller, then a subsequent VSM / 64-bit conversion?

aborzenkov
If you could not take an outage, you would not be running a single controller. If you update the source to 8.1 you can use VSM from 32-bit to 64-bit.

JLundfelt
I should clarify that the snapmirror target is an HA pair of 3140s running 8.1.7p4, and as an end result I am trying to get everything onto 64-bit volumes.

Thanks,
Jon

dirk_ecker

Hi Jon,

Is the destination aggregate on the FAS3140 a 64-bit aggregate? If so, you can simply perform a VSM migration from the 32-bit source aggregate; the volume will be expanded to 64-bit automatically after breaking the snapmirror relationship.

 

If the destination aggregate is 32-bit and the FAS3140 controllers are running 8.1.4P4 or later, you can perform an in-place aggregate expansion without adding disks. Check out http://www.netapp.com/us/media/tr-3978.pdf, page 7, for details.
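
A couple of console checks that go with this (the aggregate name is just an example; the exact expansion procedure and syntax are in TR-3978, so please verify against your release):

aggr status aggr1                # shows whether the aggregate is 32-bit or 64-bit
vol status Groups                # after the snapmirror break, the volume format shows up here
aggr 64bit-upgrade status aggr1  # tracks an in-place 64-bit expansion while it runs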

 

I hope this helps!

 

Cheers,

d.

aborzenkov

You are right, of course. I do not know why I was so sure we needed an 8.1 source to perform a 32-bit to 64-bit VSM; it is only reverse sync that requires it. Thank you!

JLundfelt

The destination controller is a 3140 running 8.1.4P8, with 64-bit aggregates. I swear I have seen VSM destinations stay 32-bit like their source, but I guess I don't have a VSM to test that theory with. Either way, it sounds like the approach I am going to have to take, at least as a first step. At least I know not to use QSM, since it can't keep the folder hierarchy in my scenario.

 

 

Thanks!

-Jon

AdvUniMD

VSM destinations will only be converted to 64-bit after you break them.

dirk_ecker

The VSM destination vol won't change until you break the snapmirror relationship.

 

For testing purposes you can do the following:

 

- create a new vol on the source 7.3.7 controller

- set up a VSM relationship to the FAS3140

- running "vol status" should show 32-bit now

- break the relationship

- wait a few minutes, then run "vol status" once again
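
On the consoles that test looks roughly like this (volume and aggregate names are examples, and the destination volume has to be restricted before the initialize):

IRV-SAN-NA1> vol create test32 aggr3 20g              # new vol on the 32-bit source aggregate
irv-gdc-san1a> vol create test32 aggr1 20g            # destination vol on a 64-bit aggregate
irv-gdc-san1a> vol restrict test32
irv-gdc-san1a> snapmirror initialize -S 172.16.1.1:test32 test32
irv-gdc-san1a> vol status test32                      # still reports 32-bit while mirrored
irv-gdc-san1a> snapmirror break test32
irv-gdc-san1a> vol status test32                      # a few minutes later it should report 64-bit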

 

 


aborzenkov

"I swear I have seen VSM destinations stay 32-bit like their source"

Conversion starts after snapmirror break; did you do it?
