SolidFire and HCI
Hi Navaneeth,
Can you clarify what you are trying to decrease within SolidFire? Are you wanting to decrease the size of the overall SolidFire cluster, or are you referring to decreasing the size of an active volume within the cluster?
Thanks
Team NetApp
> Or are you referring to decreasing the size of an active volume within the SolidFire cluster?
Yes, I am referring to this.
Cheers,
Hi Navaneeth,
Unfortunately, you cannot decrease the size of a volume once it is created; you can only increase it. If you need a smaller volume, you will need to create a new volume and migrate the data over.
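For illustration only, here is a minimal sketch of growing (not shrinking) a volume through the Element JSON-RPC API. The management VIP, credentials, API version, and volume ID are placeholders; requesting a totalSize smaller than the current size is rejected by the array.

```python
# Illustrative sketch: grow a SolidFire volume via the Element JSON-RPC API.
# The MVIP address, credentials, API version, and volume ID are placeholders.
import requests

MVIP = "https://192.0.2.10/json-rpc/11.0"  # placeholder management VIP / API version
AUTH = ("admin", "password")               # placeholder credentials

def grow_volume(volume_id, new_size_bytes):
    """Increase a volume's size. A totalSize below the current size is rejected."""
    payload = {
        "method": "ModifyVolume",
        "params": {"volumeID": volume_id, "totalSize": new_size_bytes},
        "id": 1,
    }
    resp = requests.post(MVIP, json=payload, auth=AUTH, verify=False)
    resp.raise_for_status()
    result = resp.json()
    if "error" in result:
        raise RuntimeError(result["error"])  # e.g. an attempt to shrink the volume
    return result["result"]

# Grow volume 42 to 2 TiB (sizes are specified in bytes).
grow_volume(42, 2 * 1024**4)
```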
Thanks,
Team NetApp
Is there a specific reason why the size cannot be decreased? Please explain in detail. Is it because the volumes we create in SolidFire are not logical volumes?
Regards,
Navaneeth Reddy
All SolidFire volumes are thin provisioned, so a volume does not reserve physical space on the storage at creation and only consumes physical space as data is written to it.
Reducing a volume's size should be a file-system-level activity, because different host OSes handle volume reductions differently or do not handle them at all.
For example, a user will cause file system corruption by decreasing the volume size at the storage level to less than the space the file system has actually consumed, such as trying to shrink a volume to 100GB when the file system has already written 120GB to it.
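To make that arithmetic concrete, here is a tiny hypothetical guard (not part of any SolidFire tooling) that refuses a shrink target below what the file system has already consumed:

```python
# Hypothetical guard illustrating why shrinking below consumed space is unsafe.
GiB = 1024**3

def safe_to_shrink(fs_used_bytes, target_bytes):
    """A shrink target below the file system's consumed space corrupts data."""
    return target_bytes >= fs_used_bytes

# The file system has written 120 GiB; shrinking the volume to 100 GiB would corrupt it.
print(safe_to_shrink(120 * GiB, 100 * GiB))  # False: unsafe
```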
As another example of a host restriction, VMware does not allow a user to shrink an existing datastore. Instead, the user has to create a new datastore of the desired size and copy data from the old datastore to the new datastore.
https://kb.vmware.com/s/article/1004510
By not allowing a user to reduce the volume size on the storage side, SolidFire volumes can remain file-system friendly for all file system types.
Reclaiming space within an existing volume after a host deletes data is accomplished with SCSI UNMAP.
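As a concrete illustration of that reclamation from a Linux host, one could trigger UNMAP/TRIM against a mounted file system with fstrim; the mount point below is a placeholder for a file system backed by a SolidFire volume.

```python
# Illustrative sketch: issue UNMAP/TRIM for a mounted file system from a Linux
# host using fstrim, so the host tells the array which blocks are free again.
import subprocess

def reclaim_space(mount_point="/mnt/solidfire-vol"):  # placeholder mount point
    """Run fstrim to report unused blocks back to the storage array."""
    subprocess.run(["fstrim", "--verbose", mount_point], check=True)

reclaim_space()
```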
> Instead, the user has to create a new datastore of a desired size and copy data from the old datastore to the new data store.
Or storage-vMotion the VMs over to a new or existing volume.
> All SolidFire volumes are thin provisioned so the volumes do not reserve the physical space on the storage at creation and only consumes physical space as data is sent to the volume.
In other words, it's a non-issue unless one has hundreds of volumes and is close to hitting one of the (pretty high) maximum tested values. Even in that case, it is easy to address by Storage vMotioning VMs to consolidate volumes, because that can be automated and the data copy is fully offloaded to SolidFire by VMware, as sketched below.
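A rough pyVmomi sketch of that consolidation step, relocating a VM's storage to another datastore; the vCenter address, credentials, VM name, and datastore name are placeholders:

```python
# Illustrative sketch: Storage vMotion a VM to another datastore with pyVmomi.
# The vCenter address, credentials, VM name, and datastore name are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab-only: skip certificate checks
si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="password", sslContext=ctx)
content = si.RetrieveContent()

def find_by_name(vimtype, name):
    """Look up a managed object (VM, datastore, ...) by its inventory name."""
    view = content.viewManager.CreateContainerView(content.rootFolder, [vimtype], True)
    try:
        return next(obj for obj in view.view if obj.name == name)
    finally:
        view.Destroy()

vm = find_by_name(vim.VirtualMachine, "app-vm-01")
target_ds = find_by_name(vim.Datastore, "sf-consolidated-ds")

# Relocate only the VM's storage; the array-side copy offload happens transparently.
spec = vim.vm.RelocateSpec(datastore=target_ds)
task = vm.RelocateVM_Task(spec)

Disconnect(si)
```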
Thanks for the detailed explanation.
Regards,
Navaneeth Reddy.