Increasing a volume's size didn't increase more free space

ocarinanetworks

We use a NetApp volume to run our test scripts on.  It was a 200 GB flexible volume, and it was maxed out at 100% according to a "df" command.

I increased the volume size using the "vol size +100G" command.  The vol size command now shows the volume as being 300 GB, but "df" still shows it as 100% full.  Why did the volume fill up so fast?  There were no programs or scripts that were generating more files -- our test scripts, logs, etc. are still 200 GB.

Wouldn't increasing the volume size by 100 GB make the "df" numbers go down?
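
(For reference, the resize-and-check sequence was roughly the following -- the volume name and numbers here are illustrative, not the real output:)

Netapp01*> vol size bvt +100g
Netapp01*> df bvt
Filesystem              kbytes       used      avail capacity  Mounted on
/vol/bvt/            314572800   20700540          0     100%  /vol/bvt/
/vol/bvt/.snapshot           0          0          0     ---%  /vol/bvt/.snapshot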

adamfox

That depends, NAS or SAN?

If NAS, is your snap reserve usage over 100%?  If so, you've over-run your snap reserve, it has spilled into the active filesystem, and the increase wasn't enough to cover it.

If it's SAN, again, the space consumed by your snapshots may be chewing it up very quickly.

Also, was this ever a Volume SnapMirror destination?  If so, check for the vol option fs_size_fixed.  If that is turned on, turn it off and see if that helps.
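
If you want the exact commands, something like this should do it (myvol is a placeholder volume name):

Netapp01*> vol options myvol
Netapp01*> vol options myvol fs_size_fixed off

The first lists the current options (look for fs_size_fixed=on), the second turns it off.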

Hope this helps.

ocarinanetworks

Adam,

It's a NAS solution.  It's just a volume created on a NetApp 3020 and then mounted on a client host.

I don't think we have Volume SnapMirror.  We don't have a SnapMirror license installed.  BTW, vol options shows fs_size_fixed=off.

How do I check the snap reserve space?  With "snap reserve"?  It shows my volume as 0%.  (Let me know if this is the wrong way to check the snap reserve space.)

chriskranz

What is the output from "df -r"?

Potentially the filer has been forced to over-provision the storage on your behalf, as there was free space in the aggregate. This may be due to fractional reservation, or it could be due to snapshot usage.

Is the volume thick provisioned (space guarantee = volume)?
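
(In case it helps, the guarantee shows up in the volume's options list; myvol is a placeholder name:)

Netapp01*> vol options myvol

Look for guarantee=volume (thick) versus guarantee=none or guarantee=file (thin) in the output.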

ocarinanetworks

Chris,

The output for "df -r" is:

Netapp01*> df -r bvt
Filesystem              kbytes       used      avail   reserved  Mounted on
/vol/bvt/            314572800   28482352   64764320          0  /vol/bvt/
/vol/bvt/.snapshot           0          0          0          0  /vol/bvt/.snapshot

Also, the volume is not thick provisioned -- the guarantee is file, not volume.

adamfox

If it's a NAS volume, then I'm not expecting any space reservation issues.

The way you check your snap reserve is to run df.  Each volume should have two lines: one with the volume name and one with the volume name + /.snapshot.  It's the second line that you want to check.  See what percentage of the snap reserve is used.  If it's over 100%, then you've over-run your snap reserve, and the space you added wasn't sufficient to give you more free space.  That can happen when you've got lots of blocks being chewed up by snapshots.  You have to either increase the space more, or delete some of your more space-consuming snapshots until space returns.  It's the irony of snapshots sometimes: users deleting files doesn't always give you more space, since those blocks could still be held by snapshots.
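
To make that concrete with made-up numbers (hypothetical volume name and figures), an over-run snap reserve looks like this in df -- the .snapshot line shows more used than it has:

Netapp01*> df somevol
Filesystem              kbytes       used      avail capacity  Mounted on
/vol/somevol/        209715200  209715200          0     100%  /vol/somevol/
/vol/somevol/.snapshot 41943040   62914560          0     150%  /vol/somevol/.snapshot

From there, "snap list somevol" shows which snapshots are holding the blocks, and "snap delete somevol <snapname>" gets the space back.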

And definitely check whether the vol option guarantee is set to volume.  If you've thin provisioned your volume, the rules change a bit.

ocarinanetworks

Adam,

I ran df on the volume.  There's no snap reserve -- it's at zero percent.

Netapp01*> df bvt
Filesystem              kbytes       used      avail capacity  Mounted on
/vol/bvt/            314572800   20700540   72530820      77%  /vol/bvt/
/vol/bvt/.snapshot           0          0          0     ---%  /vol/bvt/.snapshot

The volume is not thick provisioned.  The usage level is now down to 77%.  For some reason it was at 100% right after I ran the vol size command.

Any theories as to why the disk usage still stayed at 100% even after increasing the volume size?

Thanx

/oca

chriskranz

I think the containing aggregate might be full, and you are seeing the space come back because the background scrub process on the aggregate is freeing up space which is then available to this over-allocated volume.

Your volume is now about 300 GB, but your used space in the volume is only about 20 GB. The available capacity is about 70 GB, which is roughly 23% of 300 GB (hence the 77% capacity figure). This available capacity is so low because it is the only space the volume can grab from the entire aggregate.

As the volume is thin provisioned, its size can stretch beyond the physical limitations of the space available in the aggregate, and I think that is what you are seeing here.

If you run a "df -A", is this aggregate pretty full up?
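
Something like this (hypothetical aggregate name and numbers) would confirm it:

Netapp01*> df -A aggr0
Aggregate               kbytes       used      avail capacity
aggr0                419430400  414187520    5242880      99%
aggr0/.snapshot              0          0          0     ---%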

The way to free up more space for this volume is to have more space in the aggregate. If the space is not available in the aggregate, you see it taken off the available capacity of any thin-provisioned volumes.
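
If you do have spare disks, growing the aggregate is the usual fix, and "aggr show_space" is handy for seeing where the aggregate's space has gone (aggr0 is a placeholder name):

Netapp01*> aggr show_space aggr0
Netapp01*> aggr add aggr0 4

The second command adds four spare disks to the aggregate.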

Does that make sense? I can go into more detail a bit later today if you would like.
