
volume size reported incorrectly?

Hi all,

I am trying to understand this behaviour. We have a 3 TB volume which is carved up into 3 x 1 TB LUNs.
FilerView reports this volume as 100% used, which I understand.
We resized the volume to 4 TB, and the volume now appears to have 42 GB free. What happened to the rest of the new TB?

We are not using snapshotting on the volume.

Also, is it best practice to dedicate a volume to just LUNs (as we did above), or should we mix LUNs and files? Hope this makes sense.

Re: volume size reported incorrectly?

What's the snap reserve on the volume? You can use the "df -g" or "snap reserve" commands to show this info. Personally, I don't think mixing LUNs and files on the same volume is a good idea.
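For anyone following along, a sketch of what those checks look like from the Data ONTAP 7-Mode CLI (the volume name vol1 is a placeholder for your own volume):

```
filer> df -g vol1
filer> snap reserve vol1
```

The "df -g" output includes a separate line for the snapshot area, so you can see at a glance how much of the volume the snap reserve is holding back.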

Thanks,

Wei

Re: volume size reported incorrectly?

Check if the fractional reserve is set to 100%; that is the default. If you are not going to use snapshots, set it to 0. If you do use snapshots, set it to your daily change rate times the number of days you will keep the snapshots. You can also manage this with volume auto_grow or snapshot auto_delete. It also appears that you may be using thick-provisioned LUNs; using thin-provisioned LUNs will show actual usage on the server side.
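As a rough illustration of the thick vs. thin point (plain arithmetic, not NetApp code; the 300 GB written per LUN is an assumption taken from later in this thread):

```python
def lun_usage_gb(lun_size_gb, written_gb, thick=True):
    # A thick (space-reserved) LUN consumes its full size up front;
    # a thin LUN consumes only what has actually been written.
    return lun_size_gb if thick else written_gb

vol_gb = 3100  # the 3.1 TB volume from this thread
used_thick = sum(lun_usage_gb(1000, 300, thick=True) for _ in range(3))
used_thin = sum(lun_usage_gb(1000, 300, thick=False) for _ in range(3))
print(round(100 * used_thick / vol_gb))  # 97 (% used with thick LUNs)
print(round(100 * used_thin / vol_gb))   # 29 (% used with thin LUNs)
```

Which is why the thread's 3.1 TB volume reports roughly 97% used even before any snapshot or fractional-reserve effects come into play.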

You must not mix file-system data with LUN data in the same volume. While it will technically work, good practice dictates that you don't do this.

JT

Re: volume size reported incorrectly?

We had set the snap reserve to 0% for each volume. We did not turn the snap schedule off initially, so a few snapshots were made.
However, that was only about 10 GB worth. The df -g command did not show any usage in snapshots.

After deleting these snapshots, the size was reported as we expected.

The 3.1 TB volume with 3 x 1 TB LUNs now shows 97% in use. If we add another TB to the volume, the used space shows 76%.

I still do not understand why deleting 10 GB of snapshots gave us a TB back (for the 4 TB volume).

The fractional reserve is set to 100% for all volumes. I am still trying to understand what it really does.

A new volume with no snapshots or snap schedule, but with 100% fractional reserve, behaved as we would expect.

Hope it all makes sense

Re: volume size reported incorrectly?

Did you have any data written to the 1 TB LUNs before the snapshots were taken? Fractional reserve reserves space intended for overwrites once a snapshot is taken. Setting it to 100% reserves space in the volume equal to the amount of data you have written to the LUNs. Say you have a 1 TB volume and a 500 GB LUN. You write 200 GB of data to the LUN. When you take a snapshot, if you have fractional reserve set to 100%, 200 GB of space in the volume is set aside for writes to the LUN. The volume will report 300 GB of free space at this point. If you are not going to use snapshots, you can set the fractional reserve to 0 as mentioned earlier.
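The arithmetic in the example above can be sketched like this (plain Python, not an ONTAP API; the figures come straight from the example):

```python
def free_gb(vol_gb, lun_size_gb, written_gb, fr_pct, snapshot_exists):
    # Fractional reserve only kicks in once a snapshot exists.
    reserve = written_gb * fr_pct // 100 if snapshot_exists else 0
    # The space-reserved (thick) LUN itself claims its full size in the volume.
    return vol_gb - lun_size_gb - reserve

print(free_gb(1000, 500, 200, 100, True))   # 300 -> 300 GB free after the snapshot
print(free_gb(1000, 500, 200, 0, True))     # 500 -> reserve set to 0 frees it back up
```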

Hope this helps,

Bhavik

Re: volume size reported incorrectly?

Yes, most likely we already had data written to the LUNs.
If no snapshots are enabled, then the fractional reserve should not matter, right?
How does one disable fractional reserve?

cheers

Re: volume size reported incorrectly?

Use the "vol options" command to set the volume's fractional_reserve attribute. -Wei
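A sketch of what that looks like in practice (the volume name vol1 is a placeholder; the second command simply lists the volume's options back so you can verify the change):

```
filer> vol options vol1 fractional_reserve 0
filer> vol options vol1
```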

Re: volume size reported incorrectly?

How does the space set aside in the volume for fractional reserve behave when the amount needed is not there?

For example, I have a 3.1 TB volume and 3 x 1 TB LUNs, and they each contain 300 GB of data.

I assume the reserve space is simply not there, but as soon as you assign more space to the volume
the fractional reserve will claim it? At least that is what it appeared to do.

cheers