Can We Consolidate Many FlexVols into one large FlexVol?

The New York Community: 

Here is a question that I cannot get answered yet, and I don't know why.  In this economy, some of us technologists have our hands tied by the bureaucracy of the "One" who answers to the rulers of the organization/company.  Like the father in the house, they set the tone, the money, and the politics.  Someone has to.  That said, here is an attempt to explain the situation and the question on a much smaller scale.

This is what I mean:

This is a NetApp NAS production CIFS environment, with shares for home directories, departmental resources, etc.  We have two NetApp filers at a primary site in Manhattan, New York, and one NetApp filer at a protected site in the Bronx, New York, connected over a metropolitan area network.  The primary site is clustered; the protected site is not.  Snapshots are replicated asynchronously from the primary site to the protected site.

Storage specs are the following: the available aggregate space is 1.6TB at the Manhattan, New York primary site and 1.8TB at the Bronx, New York protected site.

Manhattan, New York (primary site)

1 raid group, 1 aggregate, 4 FlexVols, 4 qtrees with NTFS security style.  All of the FlexVols are configured with sis (deduplication), snapshot autodelete, and guarantee=volume:

flexVol1/vol/dxxdsang

flexVol1/vol/dxxdsangg

flexVol1/vol/dxxdsanggg

flexVol1/vol/dxxdsangggg
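For reference, those per-volume options map onto 7-Mode commands along these lines (a sketch from memory, shown for the first volume only; the filer prompt name is a placeholder):

```
filer> sis on /vol/dxxdsang                      # enable deduplication on the volume
filer> snap autodelete dxxdsang on               # allow snapshots to be auto-deleted under space pressure
filer> vol options dxxdsang guarantee volume     # thick provisioning: reserve the full volume size in the aggregate
```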

Bronx, New York (protected site)

1 raid group, 1 aggregate, 2 FlexVols, 2 qtrees with NTFS security style, and a read-only SnapMirror destination:

flexVol1/vol/dzzdsang

flexVol1/vol/dzzdsangg

flexVol1/vol/dxxdsangggg (SnapMirror destination; replicated/delta snapshots from the primary site)

Because of the reduction in human resources (technical professional staff), we want to consolidate those FlexVols at the primary site into one large volume.  The question: what architecture is needed to consolidate the 4 FlexVols on the primary site into one large FlexVol, all things considered?

Netappsky.com

Re: Can We Consolidate Many FlexVols into one large FlexVol?

You could use SnapVault at the volume level to replicate each entire source volume to a qtree on the target, so every source volume becomes a qtree in the same target volume.  You specify the volume name as the source of the vault instead of /- or /qtree.  Then you would need to snapmirror convert the vault, and then snapmirror break the mirror, and all the data will be writable on the target.  To do this you need a snapmirror license on the target, a snapvault secondary license on the target, and a snapvault primary license on each source system.  You then have to recreate the CIFS shares as needed at the new paths.  Note that each path will be one level deeper: every source volume will reside inside a qtree on the new target, so any source qtrees will become directories on the target below the new qtree for that volume.
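On 7-Mode filers the sequence above might look roughly like this. This is a sketch, not a tested runbook: the target volume name (bigvol), prompt hostnames, and share name are hypothetical, license codes are omitted, and only one of the four source volumes is shown.

```
# On the target filer: create the consolidated volume in the existing aggregate
target> vol create bigvol aggr0 1600g

# Volume-level SnapVault: the source is the volume name itself,
# not /vol/volname/- or a qtree path; the destination is a new qtree.
# Repeat once per source volume.
target> snapvault start -S manhattan1:/vol/dxxdsang target:/vol/bigvol/dxxdsang

# Convert the SnapVault destination qtree to a SnapMirror destination,
# then break the mirror to make the data writable
target> snapmirror convert /vol/bigvol/dxxdsang
target> snapmirror break /vol/bigvol/dxxdsang

# Recreate the CIFS share at the new, one-level-deeper path
target> cifs shares -add dxxdsang /vol/bigvol/dxxdsang
```

A former source qtree such as /vol/dxxdsang/qtree1 would then appear as the directory /vol/bigvol/dxxdsang/qtree1, which is why every share path ends up one level deeper.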

It will take some planning, but this may be the fastest way to do it, using only ONTAP tools.  You could also look at AutoVirt, DFS, or other tools that replicate data and perform cutovers of shares.