2012-05-21 06:18 AM
I'm experiencing a weird issue with a client. They have a FAS2020 with 12x1TB drives and volumes totaling about 2.8TB. The aggregate's usable capacity is 6.22TB (RAID-DP + 1 spare), but it shows 5.3TB used. I can't for the life of me figure out where the extra 3TB has gone. This 2020 is a snapmirror destination for another 2020 with 12x600GB SAS drives. Has anyone encountered anything like this? I know that at some point the client deleted all the volumes on the DR box and recreated the snapmirror relationships. Is it possible that ONTAP did not finish reclaiming the free space after the original volumes were deleted?
Any help would be appreciated.
Solved!
2012-05-22 05:43 AM
Strange. I ran "aggr show_space" and it revealed a volume that was 2TB+; viewed in OnCommand System Manager, however, it was 600GB. I increased the volume's size from 600GB to 620GB, and now "aggr show_space" shows the correct size. Thanks for the pointer. I'm still not sure why the CLI and GUI were reporting different sizes.
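For anyone else hitting this, the resize itself was a single command at the CLI (the volume name below is made up):

```
filer> vol size dstvol 620g
```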
2012-05-22 02:11 PM
Are you using thin volumes? And are you sure you correctly identified which side was misreporting? That is, did you verify the size of the volume on the filer with "vol size <volname>" from the command line?
If a volume is thin provisioned, aggr show_space won't tell you the size of the volume. It will only tell you what has been allocated. If you have provisioned a 1TB thin volume and are using 100MB of data, aggr show_space will tell you it is 100MB in size.
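A quick illustration of what that looks like (the volume name and numbers are made up, and the output is only approximated from 7-Mode):

```
filer> vol size thinvol
vol size: Flexible volume 'thinvol' has size 1t.

filer> aggr show_space aggr0
...
Volume                    Allocated            Used           Guarantee
thinvol                       100MB           100MB                none
```

So "aggr show_space" reports the ~100MB actually allocated, not the 1TB provisioned size.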
In your case it sounds like it is backward from this, but while I have certainly encountered issues where the command line and GUI reported different values, I don't believe I have ever found the command line to be the one that was wrong. Just make sure you didn't resize a 2TB volume to 620GB!
2012-05-22 11:42 PM
The volume was thick provisioned, and yes, I verified the volume size from the CLI before the resize. You were right: the CLI displayed the correct size of 2TB+ while the GUI was displaying 600GB. Keep in mind that the snapmirror source for this volume is also 600GB.
I have verified the contents of the volume and no data has been lost, but as I mentioned before, I don't understand why there was such a large discrepancy in size.
2012-05-23 12:31 AM
Keeping in mind that the snapmirror source for this volume is also 600GB.
There is volume size and there is filesystem size. They are not necessarily the same, especially for a SnapMirror destination. Check the fs_size_fixed volume option and the "vol status -b" output.
OCSM may default to showing filesystem size (as this is what clients actually see).
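A hypothetical "vol status -b" for a volume like the one described here (numbers are illustrative: a ~2TB volume containing a ~600GB filesystem at a 4KB block size) would look something like:

```
filer> vol status -b dstvol
Volume          Block Size (bytes)   Vol Size (blocks)   FS Size (blocks)
------          ------------------   -----------------   ----------------
dstvol          4096                 536870912            157286400

filer> vol options dstvol
... fs_size_fixed=on, ...
```

If Vol Size and FS Size differ, the CLI (volume size) and OCSM (filesystem size) will disagree exactly as described above.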
2012-05-23 06:40 AM
I think aborzenkov is right. Looking at my setup, system manager is showing Total Space on snapmirror destinations as the file system size, not the volume size.
Is autogrow enabled on the source volume? My setup is slightly different due to thin provisioning, but I'd imagine you ended up in the same situation: source volume sized to X, autogrows up to Y, destination volume sized to Y (because snapmirror destinations can't autogrow).
I wouldn't be surprised to see autogrow on your source volume set to a 2TB maximum. If that's the case, you need to change your destination back, because as soon as the source grows past the destination size, snapmirror updates will fail.
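To check, run something like the following on the source filer (volume names and sizes are hypothetical, and the output wording is approximate):

```
srcfiler> vol autosize srcvol
Volume autosize is currently ON for volume 'srcvol'.
The volume is set to grow to a maximum of 2t, in increments of 20g.
```

If the maximum really is 2TB, size the destination back up to at least 2TB (e.g. "vol size dstvol 2t") so snapmirror updates keep working as the source grows.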