Network and Storage Protocols
Hi all,
Can you please help me interpret the available space?
n3300> aggr show_space -h
Aggregate 'aggr0'
Total space WAFL reserve Snap reserve Usable space BSR NVLOG
4138GB 413GB 186GB 3538GB 0KB
Space allocated to volumes in the aggregate
Volume Allocated Used Guarantee
vol0 873GB 202GB volume
vol2 252GB 106GB volume
Aggregate Allocated Used Avail
Total space 1126GB 309GB 2615GB
Snap reserve 186GB 40MB 186GB
WAFL reserve 413GB 42MB 413GB
--------------
n3300> lun show
/vol/vol0/qtvms1a/lun0 60g (64424509440) (r/w, online, mapped)
/vol/vol0/qtvms4a/lun0 180g (193273528320) (r/w, online, mapped)
/vol/vol0/qtvms5a/lun0 60g (64424509440) (r/w, online, mapped)
---------------
n3300> df -h
Filesystem total used avail capacity Mounted on
/vol/vol0/ 671GB 495GB 176GB 74% /vol/vol0/
/vol/vol0/.snapshot 0KB 4324KB 0KB ---% /vol/vol0/.snapshot
/vol/vol2/ 250GB 249GB 486MB 100% /vol/vol2/
/vol/vol2/.snapshot 0MB 0MB 0MB ---% /vol/vol2/.snapshot
---------------
1. Aggr0 has total space of 3.5TB. Correct?
2. How much space is available in vol0? I think it should be around 600GB, but why the df -h command shows only around 176GB?
Thank you.
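My own rough arithmetic so far, assuming the usual 10% WAFL reserve and 5% aggregate snap reserve (please correct me if this is wrong):
4138GB total - 413GB WAFL reserve - 186GB snap reserve = 3538GB usable in aggr0
vol0 per df -h: 671GB total - 495GB used = 176GB available inside the volume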
Well, some ideas
This is the vol status -b output:
n3300> vol status -b
Volume Block Size (bytes) Vol Size (blocks) FS Size (blocks)
------ ------------------ ------------------ ----------------
vol0 4096 176060424 176060424
vol1 4096 471859200 471859200
vol2 4096 65536000 65536000
Yes, LUN reservation is enabled.
So, can I still create a LUN of about 300GB on vol0? I'm not sure what else is taking up the 200GB.
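If I convert the block counts above myself (assuming the 4096-byte block size shown):
176060424 blocks x 4096 bytes ≈ 671GB for vol0
471859200 blocks x 4096 bytes ≈ 1800GB for vol1
65536000 blocks x 4096 bytes ≈ 250GB for vol2
which matches the totals that df reports.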
Hi,
1. You have put LUNs in the same volume as the system files, vol0. It is NetApp best practice and an industry standard NOT to mix vol0 with LUNs or any other way of serving data.
2. You could easily reclaim some space by setting the snap reserve to 0 (zero) on your aggregate and keeping no snapshots at the aggregate level. Unless you ever expect to overwrite your entire aggregate, there is no need for this space to be reserved. I'd recommend turning aggregate snapshots off and having no snap reserve, but it's your call; you might still find a use for them.
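Before you decide, you can check whether the aggregate snapshots are actually holding anything, for example (using your aggregate name):
snap list -A aggr0
snap delta -A aggr0
If they hold next to nothing, the 186GB reserve is doing very little for you.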
Regards,
Eric
He has a volume of 671GB (as confirmed by vol status -b). How come aggr show_space shows 873GB allocated for this volume? What consumes the extra 202GB? How is it related to either the system volume or the aggregate snap reserve?
Can you give us the "df -hr" output, which will show us the space used by the fractional reserve?
My guess is that the volume size shown by "df -h" plus the fractional reserve space equals the volume allocation in "aggr show_space". Your "df -gA" will also show a used figure that is less than the total space allocated shown in "aggr show_space". Why NetApp does it this way, I don't know.
n3300> df -hr
Filesystem total used avail reserved Mounted on
/vol/vol0/ 671GB 590GB 81GB 199GB /vol/vol0/
/vol/vol0/.snapshot 0KB 4324KB 0KB 0KB /vol/vol0/.snapshot
/vol/vol1/ 1800GB 1800GB 0GB 0GB /vol/vol1/
/vol/vol1/.snapshot 0TB 0TB 0TB 0TB /vol/vol1/.snapshot
/vol/vol2/ 250GB 249GB 486MB 0MB /vol/vol2/
/vol/vol2/.snapshot 0TB 0TB 0TB 0TB /vol/vol2/.snapshot
n3300> df -gA
Aggregate total used avail capacity
aggr0 3538GB 922GB 2615GB 26%
aggr0/.snapshot 186GB 0GB 186GB 0%
aggr1 3538GB 1802GB 1736GB 51%
aggr1/.snapshot 186GB 39GB 146GB 21%
Thanks for helping me. I'm really a newbie here.
Nice, we found the missing 200 GB in the fractional reserve space. And yes, I also don't understand why NetApp did it this way.
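For the record, the arithmetic also works out: 671GB (volume size per df) plus 199GB (reserved per df -hr) is roughly 870GB, which matches the 873GB that aggr show_space reports as allocated for vol0, give or take rounding.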
Assuming vol0 is the root volume, don't put LUNs in your root volume because it is not supported.
Hmm... a question: does NetApp reserve this space immediately, or only when a snapshot is created? I remember having read something on this matter but forgot where.
If the space is reserved immediately, the df output makes sense; the aggr show_space output still does not.
OK, answering myself. Quoting TR-3483: "Data ONTAP removes or reserves this space from the volume as soon as the first Snapshot copy is created." There are snapshots on vol0 (as indicated by the non-zero .snapshot usage), which explains the reservation.
It still does not explain aggr show_space, though.
OK, I've made a mistake by using vol0 for LUNs.
Can you please tell me the primary reason why I shouldn't do that?
Is there any remediation I can do about it? Maybe moving the LUNs to another volume? Is it risky?
So, for this mistake, how much space have I wasted? Is it the 200GB?
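From what I've read so far, one way to move a LUN might be to copy it to a new volume with ndmpcopy and then remap it, roughly like this (just my guess from the docs; the new volume name, its size, and the igroup name are placeholders, please correct me):
vol create vol_luns aggr0 200g
ndmpcopy /vol/vol0/qtvms4a /vol/vol_luns/qtvms4a
lun offline /vol/vol0/qtvms4a/lun0
lun unmap /vol/vol0/qtvms4a/lun0 my_igroup
lun online /vol/vol_luns/qtvms4a/lun0
lun map /vol/vol_luns/qtvms4a/lun0 my_igroup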
My posting was pretty clear; I was providing a fix rather than trying to guess without information from the controller.
I'll explain for you if you want: if you have no more disk space for your volumes in your aggregate, you can turn the aggregate snap reserve off and get more space for your volumes.
I reckon you know this already, though.
Eric
Hi, it's me again. I think this is still related to this question.
Today, when I tried to create a LUN on vol3, I got these errors:
n3300> igroup create -i -t vmware ig_sgvms12 iqn.1998-01.com.vmware:sgvms12
Mon Nov 23 10:01:24 GMT [lun.igroup.tmpFileWriteFailed:error]: Failed to write to the temporary igroup metafile (Initiator group change not stored on disk; unable to write new file).
igroup create: Initiator group change not stored on disk; unable to write new file
Mon Nov 23 10:50:00 GMT [wafl.vol.full:notice]: file system on volume vol0 is full
I've been reading the NetApp forums and found two possible solutions:
1. Set the fractional reserve to zero. But this is risky, because a LUN can go offline if the volume runs out of space. However, if I keep the total LUN size below the volume size, I should be safe, right? (See my sketch further below.)
2. Turn snapshots off. But I have already turned off snapshots on aggr0, haven't I?
Is there any other workaround for this situation?
Would increasing the size of vol0 do it?
Thank you.
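Here is my rough sketch for option 1, in case it helps (just my understanding from the forums; the autosize limit is only an example, please correct me):
vol options vol0 fractional_reserve 0
vol autosize vol0 -m 750g on
snap autodelete vol0 on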
This is related data:
n3300> aggr status
Aggr State Status Options
aggr1 online raid4, aggr
aggr0 online raid4, aggr root, nosnap=on
n3300> df -hr
Filesystem total used avail reserved Mounted on
/vol/vol0/ 671GB 671GB 0MB 242GB /vol/vol0/
/vol/vol0/.snapshot 0KB 4432KB 0KB 0KB /vol/vol0/.snapshot
/vol/vol1/ 1800GB 1800GB 0MB 0MB /vol/vol1/
/vol/vol1/.snapshot 0MB 0MB 0MB 0MB /vol/vol1/.snapshot
/vol/vol2/ 250GB 249GB 471MB 0MB /vol/vol2/
/vol/vol2/.snapshot 0MB 0MB 0MB 0MB /vol/vol2/.snapshot
/vol/vol3/ 500GB 499GB 17MB 0MB /vol/vol3/
/vol/vol3/.snapshot 0MB 0MB 0MB 0MB /vol/vol3/.snapshot
n3300> df -gA
Aggregate total used avail capacity
aggr0 3538GB 922GB 2615GB 26%
aggr0/.snapshot 186GB 0GB 186GB 0%
aggr1 3538GB 2302GB 1236GB 65%
aggr1/.snapshot 186GB 10GB 175GB 6%
n3300> aggr show_space -h
Aggregate 'aggr0'
Total space WAFL reserve Snap reserve Usable space BSR NVLOG
4138GB 413GB 186GB 3538GB 0KB
Space allocated to volumes in the aggregate
Volume Allocated Used Guarantee
vol0 922GB 250GB volume
vol2 252GB 107GB volume
Aggregate Allocated Used Avail
Total space 1174GB 358GB 2615GB
Snap reserve 186GB 40MB 186GB
WAFL reserve 413GB 56MB 413GB
Aggregate 'aggr1'
Total space WAFL reserve Snap reserve Usable space BSR NVLOG
4138GB 413GB 186GB 3538GB 0KB
Space allocated to volumes in the aggregate
Volume Allocated Used Guarantee
vol1 1821GB 1519GB volume
vol3 500GB 712KB volume
Aggregate Allocated Used Avail
Total space 2321GB 1519GB 1236GB
Snap reserve 186GB 10GB 175GB
WAFL reserve 413GB 206MB 413GB
n3300> vol status vol0 -v
Volume State Status Options
vol0 online raid4, flex root, diskroot, nosnap=on,
nosnapdir=off, minra=off,
no_atime_update=off, nvfail=off,
ignore_inconsistent=off,
snapmirrored=off,
create_ucode=off,
convert_ucode=off,
maxdirsize=9175,
schedsnapname=ordinal,
fs_size_fixed=off,
guarantee=volume, svo_enable=off,
svo_checksum=off,
svo_allow_rman=off,
svo_reject_errors=off,
no_i2p=off,
fractional_reserve=100,
extent=off,
try_first=volume_grow
Containing aggregate: 'aggr0'
Plex /aggr0/plex0: online, normal, active
RAID group /aggr0/plex0/rg0: normal
Hi,
From what I am seeing, your vol0 is full, and it is in a different aggregate than your vol3.
vol0 is typically the system volume, with the operating system in it; it should never get full.
Are you using it to share data?
If I were you, I would map C$ on the controller and free up space.
Go into /etc/crash and look for old core dumps; delete them if you don't need them.
What does
cifs shares
exportfs
give? Can you post the output here?
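A few things you can also check from the CLI itself to see what is eating vol0 (rough ideas, nothing more):
df -h /vol/vol0
snap list vol0
lun show -v
That should tell you whether it is LUNs, snapshots or plain files taking the space.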
Eric
Hi Eric,
This is the data:
n3300> cifs shares
Name Mount Point Description
---- ----------- -----------
ETC$ /etc Remote Administration
** priv access only **
HOME /vol/vol0/home Default Share
everyone / Full Control
C$ / Remote Administration
** priv access only **
exportfs returns nothing. I haven't used it for NFS.
I think I have a CIFS license; how do I set it up so that I can access the shares?
I use vol0 for LUNs too, which people say is a mistake. I know. But the total LUN size on vol0 is still below the volume size, so it should be OK, right?
Hi again,
Look at this:
n3300> df -gA
Aggregate total used avail capacity
aggr0 3538GB 922GB 2615GB 26%
aggr0/.snapshot 186GB 0GB 186GB 0%
You have reserved 186GB for aggregate snapshots, but none of it is used. So you could free up 186GB of space in the aggregate by setting
the aggregate snap reserve to 0%. Please post the following here:
snap sched -A vol0
snap reserve -A vol0
Furthermore, you seem to have enough space in the aggregate to increase the volume size:
vol size vol0 750g > choose your own size.
Long term, I recommend moving the LUNs. You can now see the impact of ignoring best practice.
Eric
Thanks Eric for guiding me.
n3300> snap sched -A vol0
Aggregate vol0 does not exist or is not online.
n3300> snap reserve -A vol0
usage:
snap list [-A | -V] [-n] [-b] [-l] [[-q] [<vol-name>] | -o [<qtree-path>]]
snap create [-A | -V] <vol-name> <snapshot-name>
snap delete [-A | -V] <vol-name> <snapshot-name> |
snap delete [-A | -V] -a [-f] [-q] <vol-name>
snap delta [-A | -V] [<vol-name> [<snapshot-name>] [<snapshot-name>]]
snap rename [-A | -V] <vol-name> <old-snapshot-name> <new-snapshot-name>
snap sched [-A | -V] [<vol-name> [weeks [days [hours[@<list>]]]]]
snap reclaimable <vol-name> snapshot-name ...
snap reserve [-A | -V] [<vol-name> [percent]]
snap restore [-A | -V] [-f] [-t vol | file] [-s <snapshot-name>] [-r <restore-as-path>] <vol-name> | <restore-from-path>
snap autodelete <vol-name> [on | off | show | reset | help] |
snap autodelete <vol-name> <option> <value>...
So, to reclaim that 186GB, is this what I should do?
snap reserve vol0 0
Oh, I think you meant the aggregate?
n3300> snap sched -A aggr0
Aggregate aggr0: 0 1 4@9,14,19
n3300> snap reserve -A aggr0
Aggregate aggr0: current snapshot reserve is 5% or 195272120 k-bytes.
Yes, the aggregate was what I meant. If you change the aggregate snap reserve to 0, please also delete all aggregate snapshots and
set the snap schedule to 0.
snap list -A aggr0 > delete snapshots
snap delete -A aggr0 snapshot_name
snap sched -A aggr0 0 0 0
Eric
Sorry, my mistake, it was a typo:
snap sched -A aggr0
snap reserve -A aggr0
If you decide to claim back the 186GB, you can run:
df -Ag aggr0
snap reserve -A aggr0 0
df -Ag aggr0 > see that the free space has increased.
To fix your issue you can increase the vol0 size. > vol size vol0 new_size
Eric
Thank you very much, Eric, for the workaround. I really appreciate your very fast help.
GBU.
Before setting snapshot reserve to zero:
n3300> df -Ag aggr0
Aggregate total used avail capacity
aggr0 3538GB 922GB 2615GB 26%
aggr0/.snapshot 186GB 0GB 186GB 0%
Afterwards:
n3300> df -Ag aggr0
Aggregate total used avail capacity
aggr0 3724GB 922GB 2801GB 25%
aggr0/.snapshot 0GB 0GB 0GB ---%
n3300> Mon Nov 23 12:42:02 GMT [wafl.snap.autoDelete:info]: Deleting snapshot 'hourly.0' in aggregate 'aggr0' to recover storage
But I still cannot create the igroup:
n3300> igroup create -i -t vmware ig_sgvms12 iqn.1998-01.com.vmware:sgvms12
Mon Nov 23 12:43:57 GMT [lun.igroup.tmpFileWriteFailed:error]: Failed to write to the temporary igroup metafile (Initiator group change not stored on disk; unable to write new file).
igroup create: Initiator group change not stored on disk; unable to write new file
I guess I must increase the vol0 size? Why can't I use the 186GB that has just been freed?
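Update: if I understand the earlier replies correctly, the 186GB went back to the aggregate, not into vol0 itself; vol0 is still a 671GB volume that is 100% full, so the igroup metafile cannot be written. I will try growing vol0 as Eric suggested, for example (the +100g is only an example size):
df -Ag aggr0
vol size vol0 +100g
igroup create -i -t vmware ig_sgvms12 iqn.1998-01.com.vmware:sgvms12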