Simple announcement: http://dev.tynsoe.org/ybizeul/netapp-scripts/tree/master

Feel free to comment if you are interested in the initiative.

Purpose

The NetApp Manageability SDK (NMSDK) is provided by NetApp to let third-party developments and scripts interact with NetApp storage components (cDOT clusters, 7-mode controllers, OnCommand Unified Manager, DFM). The SDK is essentially a wrapper around the XML protocol used to exchange information over HTTP(S). It means you have to build a structure of NaElements (in essence representing XML nodes) implementing the required elements and children documented for each call. This leads to code that is hard to read and maintain, where most of the effort is spent building NaElements and sticking them together instead of performing straightforward API calls.

NetAppObject.py is a module that simplifies your code and lets you develop more efficiently with the NetApp APIs. For example, to get the name of a NetApp cluster, the traditional way is the following code ("s" is a previously created NaServer object):

cluster_identity_get = NaElement("cluster-identity-get")
desired_attributes = NaElement("desired-attributes")
cluster_name = NaElement("cluster-name")
cluster_identity_get.child_add(desired_attributes)
desired_attributes.child_add(cluster_name)
result = s.invoke_elem(cluster_identity_get)
cluster_name = result.child_get("attributes").child_get("cluster-identity-info").child_get_string("cluster-name")
print cluster_name

Using NetAppObject:

result = NetAppObject.invoke(s, {"cluster-identity-get": {"desired-attributes": {"cluster-name"}}})
cluster_name = result["results"]["attributes"]["cluster-identity-info"]["cluster-name"]
print cluster_name

Or even easier:

result = Cluster(s)
print result.cluster_name
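For the curious, here is a minimal sketch of how such a dictionary-to-NaElement translation can work. This is illustrative only, not the actual NetAppObject.py implementation, and build_element is a hypothetical helper name:

from NaElement import NaElement

def build_element(name, value=None):
    # Recursively turn nested dicts/sets into an NaElement tree.
    element = NaElement(name)
    if isinstance(value, dict):
        # {"parent": {"child": ...}} becomes <parent><child>...</child></parent>
        for child_name, child_value in value.items():
            element.child_add(build_element(child_name, child_value))
    elif isinstance(value, (set, list)):
        # {"cluster-name"} becomes an empty <cluster-name/> child
        for child_name in value:
            element.child_add(NaElement(child_name))
    elif value is not None:
        # Leaf values become element content
        element.set_content(str(value))
    return element

# invoke_elem then sends the whole tree in one call:
# result = s.invoke_elem(build_element("cluster-identity-get",
#                                      {"desired-attributes": {"cluster-name"}}))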
Hi ANDRIANAIVO,

This is a known bug; it has been fixed in X64. You can easily fix it yourself by editing /opt/graphite/conf/graphite.wsgi so that it contains only the following content:

import sys
sys.path.append('/opt/graphite/webapp')
from graphite.wsgi import application
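If Apache is not already pointing at that file, the graphite vhost normally references it with a mod_wsgi directive like the following (assuming the default /opt/graphite layout):

WSGIScriptAlias / /opt/graphite/conf/graphite.wsgi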
Hi Stuart,

FYI and for the record, the documentation is actually embedded in the SDK download, in the zedi editor. Yes, it took me a while to find out, and I'm not entirely sure it isn't embedded somewhere else as well, but at least it is there if you need it. I'm on a Mac, so I just run it with java -jar zexplore.jar in the terminal and dismiss the warning it gives; I'm sure the .exe on Windows is easier. Once it has loaded, you choose the SDK version in the top-left drop-down, like "Ontapi 1.21 Cluster Mode", and you get a nice hierarchy of calls with attributes and return values.
What version of Data ONTAP are you running, and in what mode? Is this Clustered Data ONTAP? I will assume it is not. Can you provide the output of:

cifs access show

Thanks
Shot in the dark: are you running 32-bit Perl on a 64-bit install? That may require 32-bit versions of the SSLeay or OpenSSL libraries. I think I remember something similar while playing around with the Perl SDK on a 64-bit CentOS installation.
That reminds me of a similar issue. It might just be related to SMB2 now being enabled by default. If you still have something to test, you could temporarily disable SMB2 and see if it "solves" the issue.
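On a 7-mode controller, if I remember the option name correctly, that would be:

options cifs.smb2.enable off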
I think there is some confusion here. System Manager is the configuration tool; it connects only to 7-mode filers or cDOT clusters, not to OCUM. OCUM is a management tool that mostly monitors your NetApp filers; you don't use it to do configuration tasks. System Manager handles both 7-mode and cDOT filers/clusters, but OCUM has to be installed for one flavor or the other: if your infrastructure has both 7-mode and cDOT systems, you should have two distinct installations of OCUM.
If you're talking about Flash Cache, yes, you can disable it using:

options flexscale.enable off

Now, is that really what you want to do? A cache hit is actually a good thing; you might need to adapt your test to what you really want to know. From my experience, testing pure disk performance with no cache is a waste of time: you will not get any meaningful number that makes sense in a production context. Even if you want to design a multi-vendor test, you don't want to strip out features to achieve a common ground for all the arrays you're testing. Implement best practices for all arrays, then run your test.

How? Well, the best test, as you certainly may have heard, is the test that uses your production data. You really should try to implement that, but I know that most of the time you don't get the data, you can't line up the right resources to perform the software installs, you don't have the capacity, etc. So, in that case, try to qualify the workload as well as you can: what is the data set (total amount of data for the workload), what is the working set (the actual amount of data used throughout the day), what is the read ratio, and what is the nature of the I/Os (sequential or random)? Once you have this information, or a rough estimate, you can use stress-test tools to get numbers that make sense. You can also use simple tools like SIO (available in the Tool Chest) that let you run a workload of a certain nature.

Also, trying to overload lots of SAS drives from a single server with a 1Gb connection (for NAS) is not going to work. Make sure you have the gear to actually push the controllers.

As for your hit rate, it depends how you run the test... reading the same data over and over, so that it fits into your Flash Cache, will never let you go to the disks. Use a tool like SIO to work on a test file that is at least 4 times your Flash Cache size (for example, with 512 GB of Flash Cache, a test file of at least 2 TB) and do a lot of random I/Os; to my knowledge, that would be the "worst case scenario" for Flash Cache, as sketched below.

Oh, and consider deduplication... if your file is full of zeroes and you deduplicate, everything is going to fit in cache 🙂 My 2¢
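To illustrate (argument order from memory, so double-check against the usage text that ships with SIO before running it; the file path is just an example): a read-heavy, fully random run against a 2 TB file with 16 threads for 10 minutes would look something like:

sio 80 100 4k 2000g 600 16 /mnt/test/testfile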
You cannot access the /etc directory from the network. In some cases you might be asked by support to turn on the "diag" account and log into the "systemshell", which is basically the low-level BSD shell for a node; from there you can access or copy various files out, but you can't get in from the outside.
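For the record, getting there from the clustershell looks roughly like this (the diag account must be unlocked and given a password first; do this only under support guidance):

security login unlock -username diag
security login password -username diag
set -privilege diag
systemshell -node <nodename>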
Compression and deduplication are kept when you move a volume to a different aggregate; this is a block-level copy, like SnapMirror. The volume will be the same size at the destination.
Did some basic tests. The first thing that hit me is that, as in a lot of situations, you need to:

options licensed_feature.multistore.enable on

Did you do that? Otherwise the delete volume command never finds the volume. After that it almost looked like it was working, but I had one situation where, indeed, it looked like the volume created by the clone wasn't found. I need to start over now that I have enabled multistore and do some more tests.
I would try:

login password -username admin -vserver <cluster name>

while logged in as admin. The impact is that you would have to change any reference to that password in external apps (System Manager, WFA, etc.).
Just to be clear: if the "ITS Clone volume on Last Snapshot" command does not have a congruence test, you will have reservation issues. Now, you said it is happening with certified commands as well? Can you provide a workflow that demonstrates the issue, using only certified commands?
Ok, I got it... What is missing with this "hack" is the "congruence_test"; I guess that one is used in the middle to check the cache for an object. Full DAR file attached. For the record, here is the test I implemented:

SELECT e.id
FROM cm_storage.export_policy e
JOIN cm_storage.vserver vs
    ON e.vserver_id = vs.id
    AND vs.name = '${VserverName}'
JOIN cm_storage.cluster c
    ON (c.primary_address = '${Cluster}' OR c.name = '${Cluster}')
    AND vs.cluster_id = c.id
WHERE e.name = '${PolicyName}';
Yep, I finally figured it out, thanks for the answer! So, I have one workflow that creates a qtree and an export policy (empty at first), then another workflow that adds rules to the export policy.

Here is what happens in the reservations when I run the first workflow: the cache is not updated. Good, I expect that; my understanding is that "NO" means "I didn't get that one from OCUM yet".

Then I run "Acquire now" on my OCUM data source in WFA, and here is how the reservations change: the export policy is now marked as cache updated... but not the qtree. That does not make sense, because OCUM has not discovered that export policy yet. So now my second workflow fails, saying "No results were found. The following filters have returned empty results:". It looks like there is an inconsistency between the process that refreshes the cache and the one that populates the database: i.e. the reservations say I got the export policy from OCUM, but the export_policy table does not list it.

If I re-discover in OCUM and then run acquisition from WFA, everything is back to normal: I can reference my export policy again, and both entries are marked as cache updated.

Does that make sense? Would that be a problem with the "hack" or with the SQL query defined in the reservation section?
Well, actually it is even weirder... Qtree creation is not populated in the qtree table either, but that command is supposed to use reservations... really odd.
I actually used yours: I copied and pasted the reservation block, but kept my own copy of the Export Policy Create command. It looked consistent, even if I did not understand how the variable substitution was done.
Great tip François. When I did the same thing, it seemed to work until I ran an acquisition in WFA. I don't know why yet, but what I got, looking at the reservations in the WFA web UI, was "Cache Updated" YES for an export policy that had not been refreshed in OCUM yet (it was "NO" before acquisition). The volume reservation had the correct status of "NO" (i.e. waiting for OCUM to report it). I might have done something wrong; I need to do some research.