OpenStack Cinder volumes

forgosh

In Havana, the unified drivers allow me to place volumes on NetApp NFS exports from my 4-node cDOT cluster. The volumes are created as sparse files that consume very little initial space; however, I cannot seem to over-provision my FlexVol. As an example, if I have a 500GB thin provisioned FlexVol mounted to Cinder, I can create approximately 490GB of volumes. These volumes do not consume space within the FlexVol, so I still see nearly 500GB of available space in the FlexVol. However, if I try to create another volume, the creation fails because Cinder reports insufficient disk space. This defeats ONTAP features such as dedupe and auto-grow, as I can't efficiently use the space allocated to the FlexVol.

Seth


3 REPLIES

akerr

Hi Seth,

There are two configuration options our NFS driver inherits from the generic NFS driver, nfs_used_ratio and nfs_oversub_ratio, which allow you to oversubscribe your NFS export. Here is their description from the sample config file:

# Percent of ACTUAL usage of the underlying volume before no
# new volumes can be allocated to the volume destination.
# (floating point value)
#nfs_used_ratio=0.95

# This will compare the allocated to available space on the
# volume destination.  If the ratio exceeds this number, the
# destination will no longer be valid. (floating point value)
#nfs_oversub_ratio=1.0


forgosh

What would I have to set each value to in order to oversubscribe by 50%? Also, what happens if I provision a 500GB volume and then shrink the FlexVol to 100GB, assuming that no space is actually in use by the volume?

akerr

nfs_oversub_ratio=1.5 would give you 50% oversubscription.
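
For example, in cinder.conf that would look something like the snippet below. The [netapp_nfs] section name is just a placeholder for whatever your NetApp NFS backend stanza is actually called, and the rest of the backend options are omitted:

[netapp_nfs]
# ...your existing NetApp NFS backend options...
nfs_used_ratio=0.95
nfs_oversub_ratio=1.5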

If you shrink your FlexVol, then eventually (currently every 30 minutes) the Cinder scheduler will be updated with the new maximum capacity. On subsequent volume create requests it will simply check eligibility using the same ratios, but against the new capacity. So it is possible for a FlexVol to become ineligible for new volumes after shrinking it. We will not auto-delete volumes if the shrunk FlexVol exceeds the oversubscription ratio; it just will not be used for new volumes.
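
To make the arithmetic concrete, here is a rough Python sketch of the kind of per-share eligibility check described above. It is illustrative only, not the actual driver or scheduler code, and the function and variable names are made up for this example:

def share_eligible(capacity_gb, used_gb, provisioned_gb, request_gb,
                   nfs_used_ratio=0.95, nfs_oversub_ratio=1.0):
    """Return True if a new volume of request_gb may land on this share."""
    # Actual usage of the share must stay at or below nfs_used_ratio.
    if used_gb / capacity_gb > nfs_used_ratio:
        return False
    # Allocated (thin) space may exceed real capacity only up to
    # nfs_oversub_ratio times the reported capacity.
    return provisioned_gb + request_gb <= capacity_gb * nfs_oversub_ratio

# 500GB FlexVol, ~490GB already allocated, default ratios:
# a new 20GB volume is rejected even though almost nothing is in use.
print(share_eligible(500, 5, 490, 20))                          # False
# With nfs_oversub_ratio=1.5 the same request fits (50% oversubscription).
print(share_eligible(500, 5, 490, 20, nfs_oversub_ratio=1.5))   # True
# Shrink the FlexVol to 100GB with 500GB already provisioned:
# the share becomes ineligible for new volumes, but nothing is deleted.
print(share_eligible(100, 5, 500, 20, nfs_oversub_ratio=1.5))   # False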
