
VMware Solutions Discussions

Aggregate full, need to move!!


Hello, I've created a new thin-provisioned volume called "DS2" to replace an old one called "vmware01". I then started moving all the VMs from "vmware01" to "DS2" with vCenter Storage vMotion, but now the aggregate is full and I can't access a lot of the VMs. I'm panicking!

I have free space on another filer. Can I try to move the volume to that filer with this command?

ndmpcopy -sa <user>:<pass> -da <user>:<pass> source_filer:/vol/vol_source_name/folder/file destination_filer:/vol/vol_dest_name/file
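For illustration, here is a hedged sketch of that ndmpcopy between two filers. The names (filerA, filerB, vmware01_copy) and credentials are placeholders, not from the original post; the destination volume must already exist and be large enough, and the NDMP daemon must be running on both filers:

```shell
# Hypothetical setup: filerA holds the full aggregate, filerB has free space.
# Pre-create /vol/vmware01_copy on filerB before starting the copy.

# Make sure NDMP is running on both filers
filerA> ndmpd on
filerB> ndmpd on

# Copy the whole volume (root of source to root of destination)
filerA> ndmpcopy -sa root:password -da root:password \
        filerA:/vol/vmware01 filerB:/vol/vmware01_copy
```

Note that ndmpcopy is a baseline file copy, not a live migration: VMs on the source datastore should be powered off or already moved before you cut over to the copy.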

This is the result of a df command:

Filesystem                 kbytes        used       avail  capacity  Mounted on
/vol/vol0/              158387408     5784652   152602756        4%  /vol/vol0/
/vol/vol0/.snapshot       8336176           0     8336176        0%  /vol/vol0/.snapshot
/vol/vmware01/         2147483648  1796876604           0      100%  /vol/vmware01/
/vol/vmware01/.snapshot         0           0           0      ---%  /vol/vmware01/.snapshot
/vol/DS2/              1616117760   525325292           0      100%  /vol/DS2/
/vol/DS2/.snapshot              0           0           0      ---%  /vol/DS2/.snapshot
/vol/DS1/              1616117760   756281724           0      100%  /vol/DS1/
/vol/DS1/.snapshot              0           0           0      ---%  /vol/DS1/.snapshot




Are we talking about NFS datastores or iSCSI/FC LUNs?

For the former scenario (NFS datastores), changing the volume guarantee of the original datastore's volume to "none" can solve the issue, if it is currently set to "volume".
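On 7-Mode, checking and changing the guarantee looks roughly like this (a sketch using the volume name vmware01 from the df output above; the console prompt is generic):

```shell
# Check the current space guarantee on the source volume
filer> vol options vmware01
# Look for "guarantee=volume" in the output: with that setting, the volume's
# full 2 TB reserved size counts against the aggregate even where unused.

# Switch to no guarantee to release the unused reservation back to the aggregate
filer> vol options vmware01 guarantee none
```

Be aware that with guarantee=none the volume becomes thin-provisioned, so writes to it can fail if the aggregate fills up again; it is a way to buy breathing room, not a permanent fix.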


"snap reserve –A aggr03 0"

Replace aggr03 with the name of the affected aggregate. This will free up the 5% aggregate snapshot reserve for you! Report back with any additional questions!
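Spelled out as a 7-Mode console session (the aggregate name aggr03 is the placeholder from the post above, since the real aggregate name isn't shown):

```shell
# Show the current aggregate snapshot reserve (defaults to 5%)
filer> snap reserve -A aggr03

# Set the reserve to 0% to return that space to the aggregate
filer> snap reserve -A aggr03 0

# Existing aggregate snapshots still hold blocks; list and delete them
# (-A = aggregate snapshots, -a = all of them)
filer> snap list -A aggr03
filer> snap delete -A -a aggr03

# Verify free space at the aggregate level
filer> df -A aggr03
```

Dropping the reserve alone only reclassifies the space; deleting the aggregate snapshots is what actually frees the blocks they pin.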




If you aren't fired yet, you could always try lowering your aggregate snap reserve to something below 5% (if it's not at 0% already) on the affected aggregate to give yourself some breathing room while you move data around. This has got me out of trouble a few times, although that was on a SnapMirror destination, so no production workloads were running.

A day has passed, so I imagine you've sorted this out by now, no?

Good Luck

