Hi Andrew,
Have you checked that there is sufficient space in the aggregate that hosts the FlexVol on the storage? Is your FlexVol thin provisioned (i.e. "space-guarantee = none")? Also, what snapshot policy is applied to the FlexVol? If you are taking frequent snapshots whilst you are migrating onto the volume, you are probably consuming space for snapshots.
Here are some example commands to check the storage configuration:
cluster1::*> aggr show -aggregate testc1n1_aggr1 -fields size,usedsize
aggregate      size    usedsize
-------------- ------- --------
testc1n1_aggr1 147.7GB 3.18GB
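If the aggregate looks fuller than you expect, a more detailed breakdown of what is consuming it (volume footprints, aggregate metadata, snapshot reserve and so on) should be available with something like the following; the output will vary on your system:
cluster1::*> aggr show-space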
cluster1::*> vol show -vserver vserver2 -volume nfs_data_001 -fields size,used,snapshot-space-used,snapshot-count,snapshot-reserve-available,snapshot-policy,space-guarantee
vserver  volume       size used   space-guarantee snapshot-space-used snapshot-policy snapshot-count snapshot-reserve-available
-------- ------------ ---- ------ --------------- ------------------- --------------- -------------- --------------------------
vserver2 nfs_data_001 10GB 1.82MB none            1%                  default         11             507.2MB
cluster1::*> snapshot policy show -policy default
Vserver: cluster1
                         Number of Is
Policy Name              Schedules Enabled Comment
------------------------ --------- ------- ----------------------------------
default                          3 true    Default policy with hourly, daily & weekly schedules.
    Schedule               Count Prefix                 SnapMirror Label
    ---------------------- ----- ---------------------- -------------------
    hourly                 6     hourly                 -
    daily                  2     daily                  daily
    weekly                 2     weekly                 weekly
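If snapshots do turn out to be consuming the space, listing them with their sizes for the volume should show which ones are the culprits. This is just an illustration against the same example volume:
cluster1::*> snapshot show -vserver vserver2 -volume nfs_data_001 -fields size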
Note: You might consider restricting your NFS clientmatch in your export policy rule to the subnet of your ESX hosts.
cluster1::*> export-policy rule show -vserver vserver2 -policyname default
             Policy          Rule   Access   Client                RO
Vserver      Name            Index  Protocol Match                 Rule
------------ --------------- ------ -------- --------------------- ---------
vserver2     default         1      any      0.0.0.0/0             any
cluster1::*> export-policy rule modify -vserver vserver2 -policyname default -ruleindex 1 -clientmatch 192.168.100.0/24
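You can then re-check just the clientmatch field to confirm the rule was updated (again, purely illustrative):
cluster1::*> export-policy rule show -vserver vserver2 -policyname default -ruleindex 1 -fields clientmatch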
Hope that helps
/Matt
If this post resolved your issue, help others by selecting ACCEPT AS SOLUTION or adding a KUDO.