ONTAP Discussions

NFS Volume Showing Incorrect Provisioned Space

justin_smith
19,645 Views

I have a 1TB NFS volume on a NetApp that is showing only 286GB of free space, with 739GB used... the problem is, the datastore is empty. No snaps, no dedupe, nothing.


We recently "ran out of space" on it, but it turned out it wasn't really out of space. After evacuating the VMs from the volume, we assumed it would show roughly 900GB free. But it doesn't. We have no snapshots, and VSC isn't doing anything.

I've mounted the volume to a Linux machine (since it's NFS) and here's the output:

Filesystem                                     Size  Used Avail Use% Mounted on

NFSCONTROLLER:/vol/DSNAME  1.0T  739G  286G  73% /mnt

Here's an "ls -al" on the mount point:

linux# ls -al

total 16

drwxrwxrwx  4 root root 4096 May 16 09:48 .

drwxr-xr-x 23 root root 4096 Feb 13 11:00 ..

drwxrwxrwx  2 root root 4096 Aug 23  2012 .snapshot

drwx------  3 root root 4096 May  2  2012 .vSphere-HA

The NetApp sees the same space free/used on its side as well. Not sure what else to do besides call support.
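One client-side check that makes the discrepancy explicit is comparing du against df on the mount; a minimal sketch, assuming the volume is still mounted at /mnt as in the output above:

```shell
# Compare the space attributable to visible files (du) with what
# the filesystem reports as used (df). With an empty datastore, a
# large gap points at space held on the filer side, not by files.
MOUNT="${1:-/mnt}"    # assumed mount point; adjust to your setup
du -sh "$MOUNT"
df -h "$MOUNT"
```

Here du should report next to nothing while df still shows ~739G used, which rules out hidden files on the client side.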

14 REPLIES

VKALVEMULA
19,525 Views

I guess you ran out of inodes when you moved the VMs onto the NetApp.

Paste the output of: df -i <volname>

justin_smith
19,526 Views
Filesystem                Inodes  IUsed  IFree  IUse%  Mounted on
Controller:/vol/VOLNAME      31M    122    31M      1%  /mnt

shaunjurr
19,525 Views

You probably really should look at the contents of the vSphere-HA directory.

S.

justin_smith
19,525 Views

They're empty

shaunjurr
19,525 Views

"They"?

ls -laq .vSphere-HA results in?

At worst, if you don't need to keep the volume, you can always destroy it and create a new one.

S.

justin_smith
19,525 Views

I meant "it's" empty....

Deleting it is the easy solution; I'm trying to find out why it happened.

Had to run it from the filer, but here's the output. Nothing that's 700+GB.

/vmfs/volumes/0d6ebd14-8c0f2657/.vSphere-HA/FDM-A80DCDCA-066C-4DE3-9387-599C27B75CF7-71038-18946ea-vcentername # ls -al

drwx------    1 root     root               4096 Jun  5 11:03 .

drwx------    1 root     root               4096 May  2  2012 ..

-rwxrwxr-x    1 root     root                 84 Jun  5 22:41 .lck-8239000000000000

-rw-r--r--    1 root     root            2229248 Jun  5 21:49 protectedlist

ALEX_SHELIKHOV
19,525 Views

Hello

I think you need to turn off fractional_reserve (it may be set to 100) on your NFS volume.

Show current settings:

vol options your_volume

Set options:

vol options your_volume fractional_reserve 0

justin_smith
19,525 Views

Fractional reserve is already set to 0.

Here's the details for the volume:

nosnap=on, nosnapdir=off, minra=off, no_atime_update=off, nvfail=off,

ignore_inconsistent=off, snapmirrored=off, create_ucode=on,

convert_ucode=on, maxdirsize=327680, schedsnapname=ordinal,

fs_size_fixed=off, guarantee=volume, svo_enable=off, svo_checksum=off,

svo_allow_rman=off, svo_reject_errors=off, no_i2p=off,

fractional_reserve=0, extent=off, try_first=volume_grow,

read_realloc=off, snapshot_clone_dependency=off, dlog_hole_reserve=off,

nbu_archival_snap=off

kolsur
19,525 Views

It may be that one of the files created is space-reserved (space guaranteed), in which case WAFL reserves that space for it.

Darkstar
15,470 Views

do an "aggr show_space -h" and post that here.

Could it be that the volume was once very large (say, 16TB or bigger) and has since been resized down to 1TB?

SKGANTI275
15,470 Views

Hi, I ran into the same issue; moving the VMs to a different datastore and deleting the volume freed up the space.

begasoftfiler
15,469 Views

Hi

Have the same problem. The problem is a dedupe bug: the fingerprint database fills up the space in the volume.

Solution from Netapp Support:

***************

Workaround

    A cleanup of the fingerprint database on a volume impacted by this issue is accomplished by running the following command:

    >  sis start -s <vol>

Note: Potential exists for storage efficiency to be decreased after executing this manual work-around. Consult NetApp Technical Support before executing 'sis start -s' on the affected volume. This is a very long-running process as it is building a new copy of the Fingerprint Database (FPDB) and then purging the old to reclaim the volume space.

    If the workload on the storage system imposed by running sis start -s is extremely large, a support engineer can guide the customer to use the following advanced-mode command on the impacted volume:

    >  sis check -d <vol>

Note: Dedupe operations for new data will not be performed while 'sis check -d' is running; expect additional volume space to be consumed while running this command.

Free volume space equal to the size of the fingerprint database file is needed to run the sis check command. Ensure that there is sufficient free space prior to running sis check. Additionally, the -d option should always be used to clear any existing SIS checkpoints on the volume. The presence of a checkpoint will delay the results of the workaround and could also result in an out-of-space condition.

    If possible, upgrade to a fixed release of Data ONTAP instead of executing the above commands/work-arounds. The FPDB's will be purged of stale records without having to run the above commands during the volume's next scheduled deduplication job. A maximum of 8 deduplication jobs can be executed at one time; subsequent jobs will be put in queue.

 

Solution

Users should upgrade to Data ONTAP release 8.1.2P4 or later.

After upgrading to Data ONTAP release 8.1.2P4 or later, the first time deduplication is run on each volume it will automatically remove these stale fingerprints. During this time you may experience deduplication taking longer than expected. Subsequent operations will complete at normal operating times. This process of removing the stale fingerprints will temporarily consume additional space in both the deduplication-enabled FlexVol volumes and their containing aggregates.

If your FlexVol volume or the aggregate containing the FlexVol volume is 70% full or more, it is recommended to run the "sis start -s /vol/<volname>" command for systems that are on 7-Mode, or "volume efficiency start -vserver <vservername> -volume <volname> -scan-old-data true" for systems that are running clustered Data ONTAP. This command will delete the existing fingerprint database and build a new one on the volumes and aggregates. This may be a long-running operation; however, it will not require additional space and will resolve any pre-existing issues with stale fingerprints. Deleting the deduplication metadata does not affect the savings already on disk or the access to the logical data.
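Condensed, the two workaround forms from the KB text above as ONTAP CLI fragments (the status/show commands are added here only for monitoring progress; exact syntax can vary by release, so verify against your documentation):

```
# 7-Mode: discard the stale fingerprint database and rebuild it
sis start -s /vol/<volname>
sis status /vol/<volname>

# clustered Data ONTAP equivalent
volume efficiency start -vserver <vservername> -volume <volname> -scan-old-data true
volume efficiency show -vserver <vservername> -volume <volname>
```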

*************

Darkstar
15,469 Views

I guess you didn't read the OP's post. He specifically said

justin.smith wrote:

the Datastore is empty. No snaps, no dedupe.

Also, the dedupe bug you mentioned doesn't occupy that much space in the volume (75% in this case!). But without any info on the OP's Data ONTAP version, and/or the output of "aggr show_space", we can't do any more debugging here, I think...

It could also be that the volume was once thin-provisioned and resized to a ridiculously large size (16TB or even bigger). If you do that, your metadata grows a *lot*, and the space used by the metadata is not freed when the volume is shrunk again, which can result in something like what the OP sees.

-Michael

begasoftfiler
15,470 Views

Yes, you are right; I didn't read that properly.

Regarding the dedupe bug: in my case it consumed about 900GB of a 1TB volume, so 90%. After I ran a "sis start -s" on the volume, only 100GB of the 1TB volume were in use.

Regards
