ONTAP Discussions

LUN and VOLUME size difference

Dilmurat
151,388 Views

Hello,

 

I have several LUNs and volumes associated with these LUNs, but I still do not understand why the total space of a volume is different from the total space of its LUN.

I do not use snapshots or thin provisioning.

All LUNs and volumes are mapped to a virtual environment (VMware).

 

NetApp Release 8.1.2 7-Mode

 

Details with screenshot are attached.

 

Thanks in advance.

1 ACCEPTED SOLUTION

vims
21,644 Views

@Dilmurat wrote:

@vims wrote:

What I would do first is rephrase what you said below:

You have several volumes and LUNs associated with them... and then review everything from that point of view.


It is always disks -> aggregates -> volumes -> LUNs/qtrees, never LUNs -> volumes. In other words, a LUN is not a container for a volume.


just my 2 cents.


Do you mean that after creating an aggregate I must first create a volume? What are the size considerations? Should the volume size be larger than the LUN size?


Yup, you can't create a LUN directly in an aggregate. In terms of sizes, it all depends on your needs and your environment. I attached TR 3483 for your information.
Hope it helps.


11 REPLIES

JGPSHNTAP
151,256 Views

I see a 2TB LUN and a 2TB LUN presented to VMware.

 

I'm not good with GUIs.

 

Give me the output of these commands from the filer:

vol size <volumename>

df -Vg <volumename>

 

 

Dilmurat
151,206 Views

 


JGPSHNTAP wrote:

I see a 2TB LUN and a 2TB LUN presented to VMware.

 

I'm not good with GUIs.

 

Give me the output of these commands from the filer:

vol size <volumename>

df -Vg <volumename>

 

 


Hello,

 

Outputs of the commands are attached.

 

Again, why does my VMware console show less space than NetApp itself?

 

ScottMiller
151,245 Views

You presented a 2.26TB LUN via FC to your ESXi hosts, you created a 2TB partition, and you formatted it with VMFS.  From a VMware perspective, you have 2TB, and VMware will use its internal math to figure out how much space you have used in the datastore.  If you copy 1TB of data into the datastore, VMware will add up the bytes and tell you you've used 50% of the datastore, regardless of what is happening under the covers.  Storage Efficiency can dedupe that data, and consequently, you will use less storage on the back end, but no matter how efficient, ESXi will still tell you that you have used 1TB.  When you copy 2TB of data to the datastore, VMware will think your datastore is full, even if you dedupe it below 2TB.  2TB is the number you have to be concerned with, and you need to monitor from the VMware side.  Note that if you were using NFS, things would be different, because ESXi would use the filer's math.
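Scott's point, that ESXi counts logical bytes written while the filer counts post-dedup physical blocks, can be sketched in a few lines. This is a minimal illustration, not ONTAP or vSphere code; the 2 TB datastore and 40% dedup ratio are assumed figures, not taken from the thread.

```python
# Hypothetical sketch: ESXi tracks logical bytes written to the VMFS
# datastore, while the filer only charges the physical blocks that remain
# after deduplication. Sizes in GB; the dedup ratio is illustrative.

DATASTORE_GB = 2048.0      # size of the VMFS partition as ESXi sees it

def esxi_used(logical_written_gb: float) -> float:
    """ESXi's view: every logical byte written counts; dedup is invisible."""
    return logical_written_gb

def filer_used(logical_written_gb: float, dedup_savings: float) -> float:
    """Filer's view: only the post-dedup blocks consume volume space."""
    return logical_written_gb * (1.0 - dedup_savings)

logical = 2048.0                      # copy 2 TB of data into the datastore
print(esxi_used(logical))             # ESXi: the datastore is now 100% full
print(filer_used(logical, 0.40))     # filer: far fewer blocks actually stored
```

Either way, once the logical figure reaches the datastore size, ESXi reports the datastore full, no matter how small the physical figure is.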

 

For whatever reason, you provisioned your LUN as 2.26TB on the NetApp side instead of 2TB.  If you subtract 283.73GB from 2.26TB, you'll come up with 2TB, which is the size of your datastore partition.  Since you have not thin provisioned your LUN, the entire 2TB representing your datastore is earmarked as used.

 

You have deduplication (or compression, or both) enabled, and NetApp has deduped or compressed the LUN to ~1400GB, which is why you see 600GB free in your volume. 

 

I recommend that you increase your volume size so that it is at least as large as your LUN.  If you fill up your datastore, that may be just enough to run the volume out of space, which could offline your LUN and take down your datastore.

Dilmurat
151,204 Views

ScottMiller wrote:

I recommend that you increase your volume size so that it is at least as large as your LUN.  If you fill up your datastore, that may be just enough to run the volume out of space, which could offline your LUN and take down your datastore.


How can I increase the volume size if it is already larger than the LUN size?

vol = 2.26TB

LUN = 2TB

 

That's what's still confusing me!

ScottMiller
151,169 Views

I'm very sorry!  I didn't read your graphic correctly, and I got the volume and LUN backwards.  Let me start over.

 

You presented a 2TB LUN via FC to your ESXi hosts, you created a 2TB partition, and you formatted it with VMFS.  From a VMware perspective, you have 2TB, and VMware will use its internal math to figure out how much space you have used in the datastore.  Currently, VMware thinks that you have used 1850.48GB (2048GB - 197.52GB free).  NetApp has deduped that, and you have actually used 1447.9GB (2048GB - 600.1GB free).  So, NetApp is using 1447.9GB to store 1850.48GB.  This means that NetApp is being more efficient with your data, but VMware will still run out of space when it thinks you have used 2TB, regardless of how efficiently the data is stored on the back end.
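The arithmetic in the post above can be checked directly; the free-space figures are the ones quoted from the VMware console and the filer.

```python
# Recomputing the figures from the post above: VMware's view versus the
# filer's view of the same 2 TB (2048 GB) LUN. All sizes in GB.

datastore_gb = 2048.0      # LUN / VMFS partition size as ESXi sees it
vmware_free_gb = 197.52    # free space reported by the VMware console
netapp_free_gb = 600.1     # free space reported in the volume by the filer

vmware_used = datastore_gb - vmware_free_gb   # logical data the guests wrote
netapp_used = datastore_gb - netapp_free_gb   # physical blocks after dedup

# Dedup savings: the filer stores vmware_used GB of guest data in
# netapp_used GB of physical space.
savings = 1 - netapp_used / vmware_used
print(vmware_used, netapp_used, round(savings * 100, 1))
```

The two "used" numbers differ only because of back-end efficiency; VMware's number is the one that determines when the datastore fills.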

 

This is the important point about block-based storage: the initiating OS is in charge.  If you want to know when you will run out of space, you have to check with the initiating OS.  

 

As for the volume, the size is probably adequate because the volume IS larger than the LUN.

 

 

siddarajugc
21,049 Views

Hi,

 

I have a volume named dhcpserver_vol_Bang which is mounted on VMware ESXi;

the available space and usable space do not match.

 

/vol/dhcpserver_vol_Bang/     1126GB       59GB       24GB      98%  /vol/dhcpserver_vol_Bang/
snap reserve               0TB        0TB        0TB     ---%  /vol/dhcpserver_vol_Bang/..

 

Total space: 1126GB

Used space: 59GB

Available space: 24GB

 

 

I want to reclaim the unused space on this volume; please give your valuable suggestions.

 

 

regards

Siddaraju

 

vims
151,154 Views

What I would do first is rephrase what you said below:

You have several volumes and LUNs associated with them... and then review everything from that point of view.


It is always disks -> aggregates -> volumes -> LUNs/qtrees, never LUNs -> volumes. In other words, a LUN is not a container for a volume.


just my 2 cents.


@Dilmurat wrote:

Hello,

 

I have several LUNs and volumes associated with these LUNs, but I still do not understand why the total space of a volume is different from the total space of its LUN.

I do not use snapshots or thin provisioning.

All LUNs and volumes are mapped to a virtual environment (VMware).

 

NetApp Release 8.1.2 7-Mode

 

Details with screenshot are attached.

 

Thanks in advance.


 

ScottMiller
151,112 Views

Aye.

 

I would add to that a recommendation that you look into thin-provisioning your LUNs.  You're burning compute cycles to dedupe your data, but you're not taking advantage of the savings.  It's kind of like thick-provisioning your VMDKs... you're not making the most of what you have.
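Scott's remark can be made concrete with a small accounting sketch: a space-reserved (thick) LUN charges the volume its full size up front, so whatever dedup saves is never returned as usable free space, while a thin (non-reserved) LUN only charges the blocks actually stored. This is an illustration under those assumptions, not ONTAP code, and all sizes are made up.

```python
# Hypothetical sketch of thick vs. thin LUN space accounting. With a
# space-reserved (thick) LUN the volume is charged the full LUN size
# regardless of dedup; with a thin LUN only post-dedup blocks are charged.
# Sizes in GB and purely illustrative.

def volume_space_charged(lun_size_gb: int, physical_data_gb: int,
                         space_reserved: bool) -> int:
    """Space the containing volume charges for this LUN."""
    if space_reserved:
        return lun_size_gb       # reservation pins the whole LUN size
    return physical_data_gb      # only blocks actually stored count

lun_size = 2048
physical_after_dedup = 1448      # dedup shrank the data on the back end

print(volume_space_charged(lun_size, physical_after_dedup, True))   # thick
print(volume_space_charged(lun_size, physical_after_dedup, False))  # thin
```

With the thick LUN the 600 GB that dedup saved is locked inside the reservation; with the thin LUN it would show up as free space in the volume.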

Dilmurat
151,092 Views

@ScottMiller wrote:

Aye.

 

I would add to that a recommendation that you look into thin-provisioning your LUNs.  You're burning compute cycles to dedupe your data, but you're not taking advantage of the savings.  It's kind of like thick-provisioning your VMDKs... you're not making the most of what you have.


Can you go into a little more detail and explain how I should work with LUN/volume usage?

 

What do you mean by "you're not making the most of what you have"?

 

 

Dilmurat
151,094 Views

@vims wrote:

What I would do first is rephrase what you said below:

You have several volumes and LUNs associated with them... and then review everything from that point of view.


It is always disks -> aggregates -> volumes -> LUNs/qtrees, never LUNs -> volumes. In other words, a LUN is not a container for a volume.


just my 2 cents.


Do you mean that after creating an aggregate I must first create a volume? What are the size considerations? Should the volume size be larger than the LUN size?

vims
21,645 Views

@Dilmurat wrote:

@vims wrote:

What I would do first is rephrase what you said below:

You have several volumes and LUNs associated with them... and then review everything from that point of view.


It is always disks -> aggregates -> volumes -> LUNs/qtrees, never LUNs -> volumes. In other words, a LUN is not a container for a volume.


just my 2 cents.


Do you mean that after creating an aggregate I must first create a volume? What are the size considerations? Should the volume size be larger than the LUN size?


Yup, you can't create a LUN directly in an aggregate. In terms of sizes, it all depends on your needs and your environment. I attached TR 3483 for your information.
Hope it helps.
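The containment rule in this answer (disks -> aggregates -> volumes -> LUNs/qtrees) can be sketched as a tiny data model. The class and object names here are illustrative only, not ONTAP API calls, and the enforced "LUN must fit in its volume" check is a simplification of the real sizing rules.

```python
# Toy model of the ONTAP 7-Mode containment hierarchy described above:
# disks -> aggregate -> volume -> LUN. A LUN can only be created inside a
# volume, never directly in an aggregate. Names are hypothetical.

class Aggregate:
    def __init__(self, name, disks):
        self.name, self.disks, self.volumes = name, list(disks), []

    def create_volume(self, name, size_gb):
        vol = Volume(name, size_gb)
        self.volumes.append(vol)
        return vol

class Volume:
    def __init__(self, name, size_gb):
        self.name, self.size_gb, self.luns = name, size_gb, []

    def create_lun(self, name, size_gb):
        # Simplified rule: the LUN must fit inside its containing volume.
        if size_gb > self.size_gb:
            raise ValueError("LUN cannot be larger than its containing volume")
        lun = {"name": name, "size_gb": size_gb}
        self.luns.append(lun)
        return lun

aggr = Aggregate("aggr0", disks=["d1", "d2", "d3"])
vol = aggr.create_volume("vol_datastore1", size_gb=2304)   # ~2.25 TB volume
lun = vol.create_lun("lun0", size_gb=2048)                 # 2 TB LUN inside it
print(lun["name"], lun["size_gb"])
```

Note there is no `create_lun` on `Aggregate` at all: the only path to a LUN runs through a volume, which mirrors why the volume must be sized with the LUN (plus any overhead) in mind.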