We are testing some functionality in the new REST API, and I have been having issues with updating volume comments on a couple filers. Filers I have tested on are running:

NetApp Release 9.7P11D1: Wed Jan 27 17:40:19 UTC 2021
NetApp Release 9.7P8: Thu Oct 15 04:11:57 UTC 2020
NetApp Release 9.8P4: Mon May 03 09:22:00 UTC 2021
I suspect there is some issue with the job control, but I have not been able to verify it. I have poked around in logs via the systemshell and checked 'job history show' for any clues. I submitted a case to Support and they suggested I ask here.
The program I am using to do the updates:

import netapp_ontap
from netapp_ontap import config, HostConnection, NetAppRestError, utils
from netapp_ontap.resources import Svm, Volume, Aggregate, VolumeMetrics, Cluster, Node, CLI
from pprint import pprint as pp
import logging

logger = logging.getLogger()
logger.setLevel(logging.DEBUG)
ch = logging.StreamHandler()
ch.setLevel(logging.DEBUG)
logger.addHandler(ch)
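For reference, under the hood a comment update is a PATCH against /api/storage/volumes/{uuid}. Here is a minimal stdlib-only sketch that builds (but does not send) such a request; the cluster name, UUID, and comment are placeholder values, not from my environment:

```python
import json
import urllib.request

def build_comment_patch(cluster: str, volume_uuid: str, comment: str) -> urllib.request.Request:
    """Build (but do not send) the PATCH request that updates a volume comment.

    All three arguments are placeholders; the path follows the documented
    /api/storage/volumes/{uuid} endpoint. Authentication headers are omitted.
    """
    url = f"https://{cluster}/api/storage/volumes/{volume_uuid}"
    body = json.dumps({"comment": comment}).encode()
    req = urllib.request.Request(url, data=body, method="PATCH")
    req.add_header("Content-Type", "application/json")
    return req

# Inspect the request that would be sent.
req = build_comment_patch("cluster.example.com", "abc-123", "updated via REST")
print(req.get_method(), req.full_url)
```

Sending it with urllib.request.urlopen(req) (plus basic-auth headers) should behave the same as the library call.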
Actually, a better place to ask REST API questions is on Slack in the NetApp API channel. Go to netapp.io and click the Slack icon at the top right. Post your question in the #api channel and you should get a response fairly quickly.
The issue is that a volume has two UUIDs: uuid and instance_uuid. The uuid was initially accepted, but the job reported a failure as described above. When I switched to the instance_uuid, the call succeeded.
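If you fetch volumes with both fields (e.g. GET /api/storage/volumes?fields=uuid,instance_uuid), a small helper can apply this workaround when picking the identifier for follow-up calls. The record values below are made up for illustration:

```python
def pick_rest_uuid(volume_record: dict) -> str:
    """Prefer instance_uuid (per the workaround above) and
    fall back to uuid if the record does not carry one."""
    return volume_record.get("instance_uuid") or volume_record["uuid"]

# Hypothetical record as returned by a fields=uuid,instance_uuid query.
rec = {
    "name": "vol_test",
    "uuid": "11111111-0000-0000-0000-000000000001",
    "instance_uuid": "22222222-0000-0000-0000-000000000002",
}
print(pick_rest_uuid(rec))
```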
^^^This^^^ is a hot tip! Basically, anywhere the REST API asks for a volume UUID, it wants the instance_uuid. In my case it was the storage/file/clone API. Spent a solid 2 hours beating on this, getting