Ask The Experts
Hello, sorry if this is in the wrong discussion section. I have an issue with some of my read-only volumes sometimes not having space attributes.
for cluster in cluster_list:
    server = netapp.netapp_login(cluster)
    # ZAPI parameter values should be strings; pass '10000', not the int 10000
    request = server.invoke('aggr-get-iter', 'max-records', '10000').child_get('attributes-list')
    if request is None:
        print "No aggregate attributes were found for cluster:", cluster
        continue
    for a in request.children_get():
        aggr_name = a.child_get_string('aggregate-name')
        space_attrs = a.child_get('aggr-space-attributes')
        if space_attrs is None:
            print "No space attributes were found for:", aggr_name, cluster
            continue
        size = space_attrs.child_get_string('size-total')
        used = space_attrs.child_get_string('size-used')
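For what it's worth, `aggr-get-iter` is a paged API: when more records exist than `max-records`, the reply carries a `next-tag` that must be passed back as `tag` on the next call. With only a handful of aggregates per cluster that is unlikely to be the issue here, but for completeness, here is a minimal sketch of the paging loop. The dict-returning `invoke` callable is a stand-in assumption for a wrapper around the real `server.invoke(...)`, not part of the NetApp SDK:

```python
def iter_all(invoke, api="aggr-get-iter", max_records="100"):
    """Collect every record from a paged *-get-iter ZAPI call.

    invoke -- callable(api, **params) returning a dict with keys
              'records' (a list) and, when more pages remain,
              'next-tag'; in a real script this would wrap
              server.invoke(...) and unpack the NaElement reply.
    """
    records, tag = [], None
    while True:
        params = {"max-records": max_records}
        if tag:
            # Resume the iteration where the previous page stopped
            params["tag"] = tag
        reply = invoke(api, **params)
        records.extend(reply.get("records", []))
        tag = reply.get("next-tag")
        if not tag:
            return records
```

With a wrapper that maps the reply to that dict shape, the loop over `attributes-list` above would then run over `iter_all(...)` instead of a single invoke.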
The error messages look like this:
No space attributes for: edadisk002_ro_6a torcfs01
No space attributes for: edagroup_ro_5b torcfs01
No space attributes for: edalicense_ro_5b torcfs01
And they aren't consistent. This script runs every five minutes but I'll get this notification for different volume(s) each time and only a couple times a day (or none at all). Any idea why this could be happening?
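Since the gaps are transient, one way to cut down on false alerts while the root cause is investigated is to retry the lookup before reporting it missing. A minimal sketch of that pattern, assuming a zero-argument `fetch` callable that re-runs the query (the helper name, retry count, and delay are hypothetical, not NetApp SDK calls):

```python
import time

def get_with_retry(fetch, retries=3, delay=1.0):
    """Call fetch() until it returns a non-None value or retries run out.

    fetch   -- zero-argument callable that re-issues the ZAPI query,
               e.g. a small function that re-invokes aggr-get-iter and
               returns the 'aggr-space-attributes' element (or None)
    retries -- total number of attempts before giving up
    delay   -- seconds to sleep between attempts
    """
    for attempt in range(retries):
        value = fetch()
        if value is not None:
            return value
        if attempt < retries - 1:
            time.sleep(delay)
    return None
```

The alert would then fire only when `get_with_retry(...)` still returns `None` after all attempts, filtering out one-off hiccups.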
Please provide the output of the commands below, run from an SSH session to the cluster (PuTTY or similar), so we can see the ONTAP version, the aggregate details, and the ZAPI connection limits:
::*> version
::*> aggr show-view
::*> ontapi limits show
> version
NetApp Release 9.1P1: Tue Feb 14 13:14:46 UTC 2017
> aggr show-view
Error: "show-view" is not a recognized command
> aggr show
Aggregate Size Available Used% State #Vols Nodes RAID Status
--------- -------- --------- ----- ------- ------ ---------------- ------------
aggr1_torcfs01n05a_H
25.00TB 22.85TB 9% online 26 torcfs01n05a raid_dp,
normal
aggr1_torcfs01n05b_H
25.00TB 18.41TB 26% online 34 torcfs01n05b raid_dp,
normal
aggr1_torcfs01n06a_H
25.00TB 20.72TB 17% online 28 torcfs01n06a raid_dp,
normal
aggr1_torcfs01n06b_H
25.00TB 21.48TB 14% online 27 torcfs01n06b raid_dp,
normal
rootaggr_torcfs01n05a
1.47TB 1003GB 33% online 1 torcfs01n05a raid_dp,
normal
rootaggr_torcfs01n05b
1.47TB 1003GB 33% online 1 torcfs01n05b raid_dp,
normal
rootaggr_torcfs01n06a
1.47TB 144.6GB 90% online 1 torcfs01n06a raid_dp,
normal
rootaggr_torcfs01n06b
1.47TB 144.6GB 90% online 1 torcfs01n06b raid_dp,
normal
8 entries were displayed.
> ontapi limits show
Error: "ontapi" is not a recognized command
Just a note, this issue has been seen on almost all clusters in our environment.
Hi Anee, I'm not receiving email updates even though I am subscribed to this thread. I've sent you a message with my email included. Thank you