ONTAP Discussions

Creation of metafile failed

gesturgis
4,593 Views

Hey folks,

Having a problem with A-SIS.  I created an aggregate, volume, and LUN.  The volume is 1000GB and the LUN is 900GB.  So I put some data on the LUN, and then enabled sis and did a sis start -s <vol>.  The operation finished successfully.

Then I put a lot more data on the LUN and attempted to start sis again with sis start <vol>, but I'm getting this error:

Creation of metafile failed: /vol/<vol>

I notice that when I look at df, the volume appears 100% full, so I'm guessing the system can't write the metadata:

Filesystem               total       used      avail capacity  Mounted on
/vol/<vol>/          1001GB     1001GB        0GB     100%  /vol/<vol>/
snap reserve               0GB        0GB        0GB     ---%  /vol/<vol>/..

But the volume is 1000GB and the LUN is only 900GB - what about that other 100GB?

Is there any workaround to get sis to run through this vol without recreating it and moving all my data again?

Thanks,

Grant

---------------

5 REPLIES

vreddypalli

Hi,

The metafile creation failed because of a lack of free space. Refer to article kb45962 on the NOW site.
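If the containing aggregate still has free space, one possible fix (a sketch, assuming Data ONTAP 7-Mode syntax; the +50g amount is only illustrative) is to grow the volume so the sis metafiles have room, then retry the scan:

    vol size <vol> +50g
    sis start -s <vol>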

Also, to actually see the space savings from deduplication on LUNs you need thin provisioning. In my experience, deduplication on thin-provisioned VMware LUNs is the most beneficial.
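For example, an existing LUN can be thin-provisioned in 7-Mode by disabling its space reservation (a sketch; the path is a placeholder for your own LUN path):

    lun set reservation /vol/<vol>/<lun> disable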

Darkstar

You certainly don't *need* thin provisioning for A-SIS on LUNs. If the VMDKs inside your VMFS are correctly aligned, the A-SIS savings are exactly the same, especially once your VMFS datastore has been in use for some time (blocks getting allocated and freed again, but not zeroed).

-Michael

vreddypalli

Hi Michael,

I have a query: if we don't enable thin provisioning on LUNs, how are we going to see the space savings after deduplication?

Darkstar

try

df -sg

(or "priv set diag; sis stat; priv set")

You could also check "aggr show_space -g", which shows the total used blocks in the aggregate (after deduplication).
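For context, the "df -s" output reports the per-volume savings in columns roughly like this (the numbers below are purely illustrative, not from Grant's system):

    Filesystem                used      saved    %saved
    /vol/<vol>/              800GB      200GB       20%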

-Michael

vreddypalli

Hi Michael,

Thank you very much for the clarification.
