Hello, I have an SVM with two volumes assigned to its default export policy, and I need to give another server access to just one of the two volumes. The environment is Red Hat 8 running against ONTAP 9.15.1P7. I am thinking I can grant access to just one of the volumes by adding a rule to the existing export policy, but I am not sure of the command to run. Is this possible? Or do I have to add the new server to the default export policy and somehow restrict it from mounting both volumes?
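For what it's worth, a rule added to the default policy would apply to every volume using that policy, so a common pattern is to leave the default policy alone and attach a dedicated export policy to the one volume. A minimal sketch in ONTAP clustershell, assuming hypothetical names (`svm1`, `vol1`) and a placeholder client IP:

```
# Assumptions: SVM "svm1", volume "vol1", new server at 10.0.0.50 --
# all hypothetical; substitute your own names.

# Create a dedicated export policy for just this volume
vserver export-policy create -vserver svm1 -policyname vol1_only

# Allow the new server over NFS, read-write, root squashed
vserver export-policy rule create -vserver svm1 -policyname vol1_only \
  -clientmatch 10.0.0.50 -rorule sys -rwrule sys -superuser none -protocol nfs

# Attach the policy to the one volume; the other volume keeps "default"
volume modify -vserver svm1 -volume vol1 -policy vol1_only
```

Note that once `vol1` switches policies, any clients that previously reached it through the default policy need a matching rule in `vol1_only` as well, or they lose access to that volume.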
In ONTAP 9.13.1, under Trusted Certificate Authorities, one of the entries is named "admin." I vaguely understand this to be a built-in certificate, but it's expired. Its scope is at the cluster level, so I'm wondering what the implications are. Just doing a CSR for a CA-signed cert titled "admin" doesn't seem like best practice, but I was also led to believe that this principal may be tied to some critical components of the NetApp. That may be a misnomer, given that "admin" is also the name of the local account. I could use some clarity on this; I'm a bit new to engineering NetApp. NOTE: Our NetApp is part of an air-gapped network.
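Before deciding anything, it can help to see exactly what that entry is and whether anything actively references it. A sketch of the inspection commands (read-only, safe to run; output fields vary by release):

```
# Show full details of the expired CA entry named "admin"
security certificate show -common-name admin -instance

# See which certificates are actually in use for SSL on the cluster and SVMs;
# a server-auth certificate in use here matters much more than a stale CA entry
security ssl show
```

If nothing in `security ssl show` points at it and it is only a client/server CA trust entry, the risk profile is very different from an expired management-interface certificate.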
Hi Experts, could you help confirm how we can configure claim rules in CloudGate to enable SAML authentication when logging in to ONTAP System Manager? What are the prerequisites? Thanks, Polar.
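The claim-rule side lives in CloudGate itself, but on the ONTAP side the general shape is: register the IdP, then make sure the username the IdP asserts maps to an ONTAP login entry. A rough sketch, with placeholder URI and hostnames (not a definitive procedure; check the command syntax against your ONTAP release):

```
# Register the IdP metadata with ONTAP (placeholder URI and SP host)
security saml-sp create -idp-uri https://idp.example.com/metadata -sp-host cluster1.example.com
security saml-sp show

# The NameID asserted by the IdP must match an ONTAP login configured for SAML
# ("saml_admin" here is a hypothetical account name)
security login create -user-or-group-name saml_admin -application http \
  -authentication-method saml -role admin
```

The prerequisite that most often trips people up is the mapping: whatever value the CloudGate claim rule emits as the NameID has to exactly match a `security login` entry with `-authentication-method saml`.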
Hello NetApp world, I need your collective help with a root volume recovery process. We have a fairly old FAS8200 running 9.3P1 that was powered off for a while. It was recently powered on, but neither of the two nodes in the HA pair can boot. After replacing node1, we had to fix a system ID and version mismatch, and after a netboot, node1 managed to boot normally. Node2 still remains down, apparently with a missing root volume. I have already tried a netboot install (option 7 from the boot menu) and tried restoring from backup (option 6) via an HTTP server after the netboot install; at the boot menu the node reports "boot device has been changed, Normal Boot is prohibited". I am stuck at this point, since volume options are not available from the maintenance menu, so I cannot create the root volume while the node is down, and so far I have strictly avoided option 4 (wipe disk data/config). I was wondering if I could remove node2 from the cluster by breaking the HA pair and removing the HA interconnect, then try a clean install on node2 again. However, I am concerned about the data disks and about causing another system ID and disk ownership mismatch.
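Whatever path is taken, the disk ownership question above can be checked before anything destructive. From maintenance mode (boot menu option 5) on node2, a sketch of the non-destructive checks (the system IDs below are placeholders, not values from the post):

```
# Maintenance mode, node2: list every disk with its owner and system ID
disk show -a

# If disks still reference the old controller's system ID after a head swap,
# ownership can be moved to the new ID without touching data:
disk reassign -s <old_sysid> -d <new_sysid>
```

Recording the `disk show -a` output first gives a safe baseline, so that if ownership does get tangled again there is a record of which disk belonged to which system ID.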
Hi All, We have a NetApp cluster: OLDNode-01, OLDNode-02, OLDNode-03, OLDNode-04, NEWNode-05, NEWNode-06. NEWNode-05 and -06 are our new controllers, and I believe I have finished migrating everything from OLDNode-01 through -04 over to them. We next want to remove nodes 01 to 04 from the cluster completely. Is there a nice easy way to do this? Is it non-disruptive? Are there any final checks I need to do? Is ONTAP smart enough to know if things could go wrong and tell us beforehand? I just wondered what your experiences were, and whether there is anything we need to know to make this run as smoothly as possible with minimal (or no) disruption.
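As a sketch of the usual shape of this (not a complete runbook; verify each step against the docs for your ONTAP release, since older releases use `cluster unjoin` instead of `cluster remove-node`):

```
# Pre-checks per retiring node: nothing should still live on it
network interface show -home-node OLDNode-01
storage aggregate show -node OLDNode-01
volume show -node OLDNode-01

# If a retiring node holds epsilon, move it to a surviving node first
cluster show
cluster modify -node OLDNode-01 -epsilon false
cluster modify -node NEWNode-05 -epsilon true

# Disable storage failover for the retiring HA pair, then remove the node
storage failover modify -node OLDNode-01 -enabled false
set -privilege advanced
cluster remove-node -node OLDNode-01
```

ONTAP does refuse the removal if the node still owns data aggregates or hosts LIFs, which is a useful safety net, but it is still worth doing the pre-checks yourself so the refusal never happens mid-change.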