AFF, NVMe, EF-Series, and SolidFire Discussions


blockClusterFull

My block storage cluster capacity is 9.6 TB, and I am getting a blockClusterFull error because I have used 7.9 TB of the 9.6 TB.

 

So I deleted one virtual machine in vCenter, thinking the block storage would also be freed up. The machine's size was 1.30 TB. I deleted it, and nothing happened to the block storage.

 

How can I ensure that the 1.30 TB of the VM I deleted is also freed from block storage? Please help.

10 REPLIES

Re: blockClusterFull

Hi,

 

Could you give us the following info:

1) Filer model?
2) ONTAP version?
3) Block storage?
a) Do you know which data aggregate is hosting the 'Datastore (Virtual Machines)' volume?
b) Data aggregate size/used/available in GB/TB?
4) Is the volume inside the data aggregate thick or thin?

 

Thanks!


Re: blockClusterFull

Adding on to ONTAPFORRUM:

Are you using Thin Provisioning on your volumes? Snapshots?



Re: blockClusterFull

Hi

Thanks for your response.

I don't have much understanding of the filer model and ONTAP version.

However, the actual datastore where the VM lived shows something else. I attached a screenshot.

The VM that I deleted was thin provisioned. It was assigned 2 TB, and at the time of deletion it had used 1.30 TB.


Re: blockClusterFull

I see... this is a 'VMware/SolidFire' configuration. I googled the error you mentioned, 'blockClusterFull'; this term is not used in FAS terminology. It seems you have NetApp SolidFire storage. Is that correct?

 

Here is a NetApp SolidFire KB:
https://kb.netapp.com/app/answers/answer_view/a_id/1072311/loc/en_US

 

Note: All datastores on SolidFire are thin provisioned, so if a volume is reported as being "more full" in SolidFire than in ESX even after deleting VMs, then unmap will help. The VMFS unmap command is an online operation that can be run with the datastore mounted and in use by VMs. It can be started from the command line ('esxcli') or from the SolidFire VCP plugin.

 

From the screenshot, it appears there is enough space on the datastore, so I believe unmap should help free blocks on the storage side. Follow the KB and let us know. As a side note: if unmap does not help much, check the number of snapshots on your cluster side and try deleting the oldest ones you can get rid of, then see if that frees up space. (I have no idea how the GUI looks, as I am not familiar with SolidFire.)
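In case it helps, the whole unmap workflow from the ESXi host's shell looks roughly like this. This is only a sketch: DATASTORENAME is a placeholder for your actual datastore label, and the `-n` reclaim-unit value is optional (it defaults to 200 blocks per iteration).

```shell
# SSH to the ESXi host, then list mounted filesystems
# to find the exact VMFS datastore label:
esxcli storage filesystem list

# Run space reclamation (unmap) against the datastore by label.
# This is an online operation; VMs on the datastore can keep running.
esxcli storage vmfs unmap -l DATASTORENAME

# Optionally tune how many VMFS blocks are reclaimed per iteration
# with the reclaim-unit flag:
esxcli storage vmfs unmap -l DATASTORENAME -n 200
```

On large datastores the unmap pass can take a while, so it's worth running it from a session that won't time out.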

 

Thanks!


Re: blockClusterFull

Thank you very much.

This makes a lot of sense, and yes, I am running NetApp SolidFire storage; it's just that I don't have much experience with it. I am researching and learning day by day.

 

I will look into how to access the storage command line to execute the command:

$esxcli storage vmfs unmap -l DATASTORENAME

 

 


Re: blockClusterFull

You're welcome!

Here is another NetApp KB; it's a bit older, but that shouldn't matter. It contains very useful information.

 

If VMware ESXi 5.5 or greater (https://kb.vmware.com/kb/2057513):
#> esxcli storage vmfs unmap -l volumename


Re: blockClusterFull

On newer VMware versions, unmap is automatically enabled but doesn't run instantly, so if you're in a hurry you can run it manually as you figured out.

After unmap, on Element (SolidFire, NetApp HCI) you'd also have to wait until the next garbage collection completes (usually at the top of the hour) to see the latest usage.
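If you want to confirm whether automatic reclamation is actually enabled on a given datastore (ESXi 6.5+ with VMFS6), you can check from the host shell. Again a sketch: DATASTORENAME is a placeholder for your datastore label.

```shell
# Show the automatic unmap (space reclamation) settings for a VMFS6 datastore:
esxcli storage vmfs reclaim config get --volume-label=DATASTORENAME

# If the reclaim priority is reported as "none", automatic unmap is off;
# it can be re-enabled at low priority like so:
esxcli storage vmfs reclaim config set --volume-label=DATASTORENAME --reclaim-priority=low
```

Note that automatic reclamation only applies to VMFS6 datastores; on VMFS5 you still have to run the manual unmap command.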

 

You may Accept as Solution earlier answers that were helpful.


Re: blockClusterFull

Thank you very much. This worked very well.


Re: blockClusterFull

Thank you 
