ONTAP Discussions

0B Available on 3 LUNs - 8.1.1RC1 7-Mode FAS2240-2

FERRITERJ1
4,766 Views

Yesterday three of our LUNs reported 0B available, which is causing some of our VMs to become inaccessible. A call with VMware showed that the LUNs aren't able to connect to our ESXi hosts at all, likely because of this.

 

All LUNs are from the same vol and aggregate. 

 

Is there a way to reclaim some of the unused space in order to regain connectivity, or some alternative way to access the data on the disks? My last resort would be to take one of the LUNs offline and delete it, but would that free up space and allow the others to expand? And would that even be worth it?

 

I'm very new to NetApp so sorry if these questions are beginner level!

 


12 REPLIES

TMACMD
4,759 Views

Well...sounds like thin provisioning is in place.

 

Check the snapshots for each volume hosting a LUN. You may be able to delete snapshots on the volumes with LUNs to free space.

Look at the volumes on the aggregate:

 

vol show -aggregate aggrx -field size,avail,used,space-guarantee

 

Do any of the volumes show a space-guarantee of "volume"? Those are thick.

You could change those to thin:

vol modify -volume xxx -space-guarantee none

 

You will also likely need to bring the LUNs back online. They may have gone offline if they ran out of space.
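
If they did go offline, something like this should bring them back once there is free space again (clustered ONTAP syntax to match the commands above; the vserver and path here are just placeholders):

lun online -vserver vs1 -path /vol/volx/lun1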

FERRITERJ1
4,751 Views

Thanks for the reply!

 

The command "vol" is not an available command.

This might sound stupid, but should I enter a command to enter a different area of the NetApp?

 

And it looks like LUN 1 is Thick Provisioned and LUN 2 and 3 are Thin.

TMACMD
4,738 Views

Oh man...so sorry. I am used to questions being about clustered ONTAP, not 7-Mode. That command did not work because your system is running 7-Mode!

 

Still look for volume snapshots. They may be eating space in the aggregate that you could use.

Can you make the volume bigger? Does the aggregate have any free space?
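
In 7-Mode, something along these lines should show what you have to work with (the volume and aggregate names here are placeholders):

snap list volx
df -h volx
df -A aggrx

The first lists any snapshots on the volume, and the df output shows used/available space for the volume and the aggregate.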

Mjizzini
4,718 Views

Delete snapshots of the volume if there are any.

To delete all Snapshot copies on a volume, use the -a parameter.

snap delete -a vol_name

 

If lun1 is not physically full, you can modify it to be thin provisioned.
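
In 7-Mode that means turning off the LUN's space reservation, roughly like this (the LUN path is a placeholder; check the real path with lun show first):

lun show -v
lun set reservation /vol/volx/lun1 disable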

 

If you have space available in the aggregate, or can free some up, you can grow the volume.

vol size volx +200g

 

FERRITERJ1
4,709 Views

Thank you for the reply!

 

Looks like our Aggregate and Volume are showing 0B available as well.

We don't have any snapshots.

 

Even though it would mean losing the data and VMs stored on it, would it be possible to delete a LUN to regain space, expand the aggregate and volume with the freed-up space, and regain access to the other two LUNs? My thinking is to expand the size of the LUN that has the most important VMs stored on it.

 

Or is this just crazy thinking? It's my last resort.

paul_stejskal
4,705 Views

Oh jeez, that's not good! I had this happen one time, and we made sure SCSI UNMAP (TRIM) was enabled and did a VMware space reclamation. Follow this: https://kb.vmware.com/s/article/2057513

 

When you get to the part about running vmkfstools -y, don't include --reclaimblocks; just do -y 100 to reclaim 100%.
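
For reference, on ESXi 5.0/5.1 the run from that KB looks roughly like this (the datastore name and device ID are placeholders; on ESXi 5.5+ the KB uses esxcli storage vmfs unmap instead):

esxcli storage core device vaai status get -d naa.xxxxxxxxxxxxxxxx
cd /vmfs/volumes/datastore1
vmkfstools -y 100

The vaai status output should show Delete Status: supported before UNMAP/reclaim will do anything.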

 

Also make sure you don't have any aggr snapshots:

snap status -A should do it if memory serves.
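
If any turn up, the 7-Mode aggregate-level commands are roughly (the aggregate name is a placeholder):

snap list -A aggrx
snap delete -A -a aggrx
snap reserve -A aggrx 0

That lists the aggregate snapshots, deletes them all, and sets the aggregate snapshot reserve to 0%.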

FERRITERJ1
4,659 Views

Thank you for the reply!

 

 

I tried reclaiming with vmkfstools but had no luck at all.

And checking the snapshots, we have none on the volume in question.

 

I think my last resort will be to delete one of the three LUNs to regain space.

Is that possible or a viable solution at all? I've identified a LUN that houses VMs that we can live without if it means being able to expand space on the other two LUNs and gain access to them again.

 

 

Thank you for all of the help so far!

paul_stejskal
4,651 Views

You did confirm that the LUN is set to not reserve space (thin provisioned basically), and the volume has no space guarantee? Also the snapshot reserve is at zero on everything?
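
A quick way to check all three in 7-Mode (the volume and LUN names are placeholders):

lun show -v /vol/volx/lun1
vol options volx
snap reserve volx

The lun show output should say space reservation is disabled, vol options should list guarantee=none, and snap reserve should report 0%.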

 

If this system is sending AutoSupports, what is the serial number? I'll review the configuration to see if I can find anything else. Even if it isn't under active entitlements, we should still be able to view AutoSupports.

FERRITERJ1
4,647 Views

Yes, the LUN that I would delete is thin provisioned with no space reservation at all. The snapshot reserve is at zero as well.

This system is also not sending AutoSupports. I took it over last year, and, for whatever reason, the old network admin had that set to decline.

 

 

I know it really seems like we were set up for failure with how everything was configured, and go figure, it's happening!

paul_stejskal
4,642 Views

Dang. Ok. I'm surprised the space reclamation didn't even work. Do you have enough space free on the VMware side (VMFS file system)?
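
A quick way to check that from the ESXi shell (each VMFS datastore is listed with its size and free space):

esxcli storage filesystem list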

 

I vaguely remember a bug on 8.1.1 where space reclamation was an issue/didn't work. It's likely, given the ONTAP version.

 

 

FERRITERJ1
4,622 Views

Yeah, on the hosts themselves we have free space.

So odd. Thank you for the help! We're just trying all of our options!

paul_stejskal
4,613 Views

Did you confirm whether SCSI UNMAP is available on ESXi? One thing you could do is vMotion the VMs to another LUN if you have enough space, then eventually destroy a LUN.
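
If you do end up going that route, the 7-Mode side would look roughly like this once the LUN is empty (the path is a placeholder, and this is irreversible, so double-check you have the right LUN first):

lun offline /vol/volx/lun3
lun destroy /vol/volx/lun3
df -h volx

The freed blocks go back to the volume, which the remaining LUNs (and any volume growth) can then use.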
