ONTAP Hardware

Missing 1TB of what should be free space

ig-091714

I have a volume that is 7T in size. It has a 40% snapshot reserve, so we are left with 4.2T usable. My GUI shows that the available space is 820G; however, when I run an inventory via both Linux and Windows, I only see 2.1T used. So 7 * 0.6 - 2.1 - 0.8 = 1.3T. I should have an extra 1.3T that is just missing, and I can't seem to find it.

I'm running a FAS3240, if that helps.


bobshouseofcards

Remember that while the snapshot reserve marks some space as available for snapshot use only, if snapshots need more, they will take more, up to the available space in the volume.

 

Without more detail about your snapshots, it's hard to say if that is going on, but given you have a 40% reserve (reflecting a high rate of change, a long retention, or both), I'd hazard a guess that your snapshots are actually using more than the 40% you reserved for them.

 

There isn't any option to limit how much space a snapshot is allowed to consume, or said more formally, how much change is allowed to the volume after a snapshot is taken.  
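If you want to confirm whether snapshots have spilled past the reserve, the filer will show it directly. A rough sketch, assuming a 7-Mode system and the volume name ddsweb (adjust names to match your environment):

    # on the filer console or over SSH
    df -g /vol/ddsweb     # the .snapshot line shows reserve size vs. actual snapshot usage
    snap list ddsweb      # per-snapshot %/used and %/total, oldest at the bottom
    snap delta ddsweb     # rate of change between snapshots

If the .snapshot line reports more used than the reserve provides, snapshots are eating into the active file system.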

 

Another possibility to consider is the number of files in use and whether they are small files.  Remember that the basic unit of allocation on a NetApp volume is a single block, or 4K.  Thus every file takes at least one block, or 4K.  If you had 10,000 files that averaged around 2K in size, your OS could show space "used" of around 50% of the actual physical space used.  It depends on how you measure the space - by user bytes consumed or physical blocks consumed.
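To put numbers on that 10,000-file example, here's a quick sketch (shell arithmetic, purely illustrative):

    # 10,000 files averaging 2 KiB each: user data vs. 4 KiB-per-file allocation
    echo "logical:  $((10000 * 2)) KiB"    # ~20 MiB of user bytes
    echo "physical: $((10000 * 4)) KiB"    # ~40 MiB actually allocated, i.e. about double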

 

 

ekashpureff
 


See also the good old 'df' command on the NetApp.

I hope this response has been helpful to you.

 

At your service,

 

Eugene E. Kashpureff, Sr.
Independent NetApp Consultant http://www.linkedin.com/in/eugenekashpureff
Senior NetApp Instructor, IT Learning Solutions http://sg.itls.asia/netapp
(P.S. I appreciate 'kudos' on any helpful posts.)

 


rwelshman

Can you do a "df -g" on the filer for the volume to see the usage that way?

ig-091714

Sorry about the late response - I never got a notification that anyone had responded to my post!

 

When I do a df -g, I get the following:

/vol/ddsweb/ 4300GB 3813GB 486GB 89% /vol/ddsweb/
/vol/ddsweb/.snapshot 2867GB 11GB 2856GB 0% /vol/ddsweb/.snapshot

 

The snapshot reserve, as far as I'm aware, never hits that max number. Watching it throughout the day, it grows to about 700GB, but it never gets anywhere near the 2.8T that it is allocated.

 

When a Linux admin does a du on /vol/ddsweb, it returns 2.1T - the same as I get when I select all subfolders (and files on the root) of the share/export. So for some reason, the NetApp is telling the system that it is using 3.8T, but when an inventory of the files is done, it only shows 2.1T used.

 

Interesting, bobshouseofcards, I didn't know about that. This share has 590k files - which, if all of them are small, would make up 2.3T... but I don't really know the composition of the files within, so there could be some very, very large files while most of them are tiny - not sure... Is there any way I can confirm this without looking at each file independently?
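One way to get a rough answer without opening every file, assuming the export is mounted on a Linux client (the mount point below is just an example, and GNU find/du are assumed):

    # count files smaller than one 4 KiB WAFL block
    find /mnt/ddsweb -type f -size -4096c | wc -l
    # compare logical bytes with what the client thinks is allocated
    du -sh --apparent-size /mnt/ddsweb
    du -sh /mnt/ddsweb

A small-file count in the hundreds of thousands would make the block-size explanation plausible.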

bobshouseofcards

The block size factor is a common thing not to be aware of.  In typical user file systems these days we are used to allocation units of 512 bytes, so the "wasted" space factor is generally minimal until you get beyond millions of files.

 

As you indicate, with your 590K files, your minimum starting space used is 2.3TB.  With that many files, it wouldn't take much to get to 3.8TB.  NetApp stores everything as files using an "inode" model, where file metadata, and the actual file data if it is small enough, are stored together in the first block.  So even at 4K, a file takes a minimum of 2 storage blocks.  If roughly half of your files are at least 4K, you are already using at least 3.3TB.  It doesn't take much more to get to your 3.8TB.

 

"Lots" of small files is one place where almost any enterprise storage system has to make compromises.  They could use a 512 byte block size, but that also increases file system maintenance overhead by more than 8x.  At scale that becomes hugely important.  A 4K native block size is a good compromise between overhead space and performance, but there are edge cases that expose the block size as a liability.  Lots of small files is one of them.

 

Depending on your environment, there is a way to address this particular case.  If you know you will have lots of small files, it would be more efficient, from a space point of view, to provide a LUN from the NetApp to a Linux server.  That server could create a filesystem on the LUN that uses a smaller block size internally, such as allocation units of 512 bytes.  Then the server can share it via NFS.  Of course, there are tradeoffs.  The Linux server is a single point of failure.  Because the Linux server's read/write size would typically not align with the NetApp block size, NetApp performance would decrease.  You need NetApp iSCSI/FC protocol licensing and the appropriate connections.  And there is overhead through the Linux server as well.  The whole configuration is more complex, but it maximizes usable space, if that is the overriding concern in a situation.
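For what it's worth, a minimal sketch of that layout, assuming a LUN already mapped to the Linux host as /dev/sdb (device name, mount point, and export options are illustrative; note that ext4's smallest block size is 1 KiB rather than 512 bytes):

    mkfs.ext4 -b 1024 /dev/sdb          # 1 KiB blocks, the ext4 minimum
    mkdir -p /export/smallfiles
    mount /dev/sdb /export/smallfiles
    # then share it from the Linux box, e.g. an /etc/exports entry:
    # /export/smallfiles  *(rw,sync,no_root_squash)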

ig-091714

Interesting, that is all good to know. Is there no way, then, to reduce the block size of a volume on the NetApp?

 

Additionally, in doing the calculations myself, I'm coming up with a different number. Maybe I'm not figuring things correctly; please correct my math if I'm doing it wrong:

590000 * 4 = 2360000 KB used from this block size configuration

2360000 / 1024 = 2304.6 MB

2304.6 / 1024 = 2.25 GB used

 

 

 

 

bobshouseofcards

The block size is what it is.  No bigger, no smaller.

 

And you'll likely never calculate it exactly - what with metadata that you don't see, snapshot information, etc.  Just something of which to be aware.

ig-091714

Well sure, I wasn't thinking that I would be able to get the exact amount of "wasted" space based on the block size. But our calculations are off from each other by a factor of 1024, so I guess what I'm trying to figure out is: are my 590,000 files using 2.25GB or 2.25TB based on block size alone?

bobshouseofcards

Decimal points - Arggh!
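Redone carefully, the floor from block size alone is:

    # 590,000 files at one 4 KiB block each
    echo "scale=2; 590000 * 4 / 1024 / 1024" | bc    # 2.25 -- GiB, not TiB

So block-size overhead by itself is only a couple of gigabytes and can't account for the missing space.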

ig-091714

Haha, those seemingly insignificant dots, huh? 😉

 

With that said though, I'm back to square one on where all of this storage is.
