AFF, NVMe, EF-Series, and SolidFire Discussions


Decrease the size in Solid Fire

Hi All,

 

Can we decrease the size of a volume in SolidFire? If not, why not?

 

 

 

7 REPLIES

Re: Decrease the size in Solid Fire

Hi Navaneeth,

 

Can you clarify what you are trying to decrease within SolidFire? Do you want to decrease the size of the overall SolidFire cluster, or are you referring to decreasing the size of an active volume within the cluster?

 

Thanks

 

Team NetApp


Re: Decrease the size in Solid Fire

I'm referring to the latter: decreasing the size of an active volume within the SolidFire cluster.

 

Cheers,

 


Re: Decrease the size in Solid Fire

Hi Navaneeth,

 

Unfortunately, you cannot decrease the size of a volume once it has been created; you can only increase it. If you need a smaller volume, you will need to create a new volume and migrate the data over.

 

Reference:  https://docs.netapp.com/sfe-117/topic/com.netapp.doc.sfe-ug/GUID-036F5935-4E4F-4E58-8B8E-2F780CD4A939.html?cp=4_0_8_1_5

 

Thanks,

Team NetApp


Re: Decrease the size in Solid Fire

Is there a specific reason why the size cannot be decreased? Please explain in detail. Is it because the volumes we create in SolidFire are not logical?

 

Regards,
Navaneeth Reddy


Re: Decrease the size in Solid Fire

All SolidFire volumes are thin provisioned, so a volume does not reserve physical space on the storage at creation and only consumes physical space as data is written to it.

 

Reducing a volume's size should be a file-system-level activity, because different host OSes handle volume reductions differently, or do not handle them at all.

 

For example, a user will cause file system corruption by decreasing the volume size at the storage level to less than the space the file system has actually consumed, such as trying to shrink a volume to 100GB when the file system has already written 120GB to it.
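The corruption condition above comes down to simple arithmetic: a storage-level shrink is only safe if the new volume size still covers everything the file system has written. A minimal sketch (the helper name and sizes are illustrative, not part of any SolidFire API):

```python
# Hypothetical safety check illustrating why a blind storage-level shrink
# is dangerous: the volume may only shrink to a size that still covers
# the space the file system has already consumed, otherwise live data
# is truncated and the file system is corrupted.

GIB = 1024 ** 3  # bytes per GiB

def shrink_is_safe(fs_used_bytes: int, new_volume_size_bytes: int) -> bool:
    """Return True only if the requested new size still holds all FS data."""
    return new_volume_size_bytes >= fs_used_bytes

# The example from the post: shrinking to 100 GiB with 120 GiB already written.
print(shrink_is_safe(120 * GIB, 100 * GIB))  # False: would corrupt the FS
print(shrink_is_safe(80 * GIB, 100 * GIB))   # True: the data still fits
```

Since the storage array cannot see file-system usage from inside the volume, it cannot perform this check itself, which is one reason the operation is disallowed at the storage layer.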

 

As another example of a host restriction, VMware does not allow a user to shrink the size of an existing datastore. Instead, the user has to create a new datastore of the desired size and copy data from the old datastore to the new one.
https://kb.vmware.com/s/article/1004510

 

By not allowing a user to reduce the volume size on the storage side, SolidFire volumes can be more file system friendly for all file system types.

 

Reclaiming space within an existing volume after a host deletes data is accomplished with SCSI UNMAP.

Team NetApp

Re: Decrease the size in Solid Fire

> Instead, the user has to create a new datastore of a desired size and copy data from the old datastore to the new data store.

 

Or storage-vMotion the VMs over to a new or existing volume.

 

> All SolidFire volumes are thin provisioned so the volumes do not reserve the physical space on the storage at creation and only consumes physical space as data is sent to the volume.

 

In other words, it's a non-issue unless one has hundreds of volumes and is close to hitting one of the (pretty high) maximum tested values. And even in that case, that is easy to address by storage-vMotion'ing VMs to consolidate volumes because that can be automated and is fully offloaded to SolidFire by VMware.


Re: Decrease the size in Solid Fire

Thanks for the detailed explanation. 

 

 

Regards,

Navaneeth Reddy.
