Hi all,
I've tried searching but haven't (as yet) found an answer to this; apologies if I've missed it (in which case a link to the answer would be much appreciated).
Problem:
NetApp itself reports a committed value of 50.7TB on an aggregate, i.e. 167% committed (the aggregate is 31TB in total).
If I add up all the 'total' byte values revealed via SNMP for all the volumes in the aggregate* I get 46.7TB, which implies 147% committed.
My belief is that the difference relates to the snapshot reserve allocation, but this appears not to be revealed via SNMP, nor via any NetApp API counter. All the .snapshot instances report zero total bytes in an snmpwalk of the relevant OID table (.1.3.6.1.4.1.789.1.5.4).
I need to be able to acquire this data programmatically per aggregate, i.e. via any combination of SNMP and/or API. Summing per-volume values would do, as would a per-aggregate counter or counters if one exists (I can't see anything obvious, though).
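To illustrate the API side, below is the shape of what I was hoping would work, as a sketch against the Manageability SDK's NaServer. The snapshot-get-reserve call and its percent-reserved output field are my guess at where the reserve might surface, and I'm assuming a volume's reported size-total excludes the reserve; I haven't been able to confirm either, hence this question. Host and credentials are placeholders.

```python
# Sketch only: 7-Mode ZAPI via the NetApp Manageability SDK.
# Unverified assumptions: snapshot-get-reserve exposes the reserve as
# 'percent-reserved', and volume-list-info's 'size-total' excludes it.
from NaServer import NaServer

s = NaServer('filer.example.com', 1, 15)   # placeholder host / ZAPI version
s.set_style('LOGIN')
s.set_admin_user('admin', 'password')      # placeholder credentials
s.set_transport_type('HTTPS')

committed_bytes = 0
vols = s.invoke('volume-list-info')
if vols.results_status() == 'failed':
    raise RuntimeError(vols.results_reason())
for vol in vols.child_get('volumes').children_get():
    name = vol.child_get_string('name')
    size_total = int(vol.child_get_string('size-total'))  # bytes
    res = s.invoke('snapshot-get-reserve', 'volume', name)
    if res.results_status() == 'failed':
        raise RuntimeError(res.results_reason())
    pct = int(res.child_get_string('percent-reserved'))
    # If size-total excludes the reserve, the full committed size of the
    # volume is size-total / (1 - pct/100); guard against pct == 100.
    committed_bytes += size_total / (1 - pct / 100.0) if pct < 100 else size_total

print('Committed bytes including snapshot reserve: %d' % committed_bytes)
```

If something like this already exists as a single per-aggregate counter, so much the better.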
Any suggestions? Many thanks in advance.
*Combining the HighTotalKBytes and LowTotalKBytes OIDs at .1.3.6.1.4.1.789.1.5.4.1.14.x and .1.3.6.1.4.1.789.1.5.4.1.15.x, i.e. (high * 2^32) + low KBytes per volume.
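For reference, this is roughly how I derive the per-volume totals today, as a minimal pysnmp sketch (the host and community string are placeholders; the column names in the comments are as described above):

```python
# Minimal sketch: walk the two 32-bit counter columns in lockstep and
# reassemble each volume's 64-bit total, in bytes. Placeholder host
# and SNMPv2c community; error handling kept to the bare minimum.
from pysnmp.hlapi import (
    SnmpEngine, CommunityData, UdpTransportTarget, ContextData,
    ObjectType, ObjectIdentity, nextCmd,
)

HIGH_TOTAL_KB = '1.3.6.1.4.1.789.1.5.4.1.14'  # HighTotalKBytes column
LOW_TOTAL_KB = '1.3.6.1.4.1.789.1.5.4.1.15'   # LowTotalKBytes column

def sum_volume_total_bytes(host, community='public'):
    total_bytes = 0
    for err_ind, err_stat, err_idx, var_binds in nextCmd(
            SnmpEngine(),
            CommunityData(community),
            UdpTransportTarget((host, 161)),
            ContextData(),
            ObjectType(ObjectIdentity(HIGH_TOTAL_KB)),
            ObjectType(ObjectIdentity(LOW_TOTAL_KB)),
            lexicographicMode=False):
        if err_ind or err_stat:
            raise RuntimeError(err_ind or err_stat.prettyPrint())
        high, low = (int(vb[1]) for vb in var_binds)
        # The values are Integer32, so mask to unsigned before combining:
        # total KB = (high << 32) | low, then * 1024 for bytes.
        kbytes = ((high & 0xFFFFFFFF) << 32) | (low & 0xFFFFFFFF)
        total_bytes += kbytes * 1024
    return total_bytes

print(sum_volume_total_bytes('filer.example.com'))
```

As noted above, though, the .snapshot rows in this table come back as zero, so this sum misses the reserve entirely.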