Hello, I am trying to create a custom role to limit the rights of a domain-based service account that we use exclusively to run PowerShell scripts. The role resides in the main cluster SVM, and I have only given it rights to change the replication throttle setting, as shown below. I assigned the role to the service account with the applications ssh and ontapi. When testing, it immediately generated this error: "Insufficient privileges: user '<username>' does not have read access to this resource". Apparently I need to give at least read-only access to a certain command to allow it to log on in the first place. Does anyone know what that would be?

Role Name: script
Command / Directory: vserver options
Access Level: all
Query: -option-name replication.throttle.outgoing.max_kbs
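For reference, the commands I used to set this up were roughly the following (the account name is a placeholder; since it is a domain account I used -authentication-method domain):

cluster1::> security login role create -role script -cmddirname "vserver options" -access all -query "-option-name replication.throttle.outgoing.max_kbs"
cluster1::> security login create -user-or-group-name "DOMAIN\svc_scripts" -application ssh -authentication-method domain -role script
cluster1::> security login create -user-or-group-name "DOMAIN\svc_scripts" -application ontapi -authentication-method domain -role script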
The default tiering-minimum-cooling-days for a volume is 31 days with the "auto" tiering policy; I'm aware of that. I'm also aware that the number of days can be adjusted from 2 to 183 days for a given volume at the advanced privilege level. However, I've now been asked whether the default of 31 days can be changed for all new volumes, perhaps even per SVM. Seems a bit of a nitpick, but here we are. I assume the answer is "no", and it would be easy to build into a workflow when creating the volume anyway (see the sketch below). Anyone have a hint?
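For example, setting the value per volume at creation time would look something like this (advanced privilege level; the SVM, volume, aggregate, size, and day count are placeholders):

cluster1::> set -privilege advanced
cluster1::*> volume create -vserver svm1 -volume vol_new -aggregate aggr1 -size 100g -tiering-policy auto -tiering-minimum-cooling-days 10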
I am trying to check for unprotected volumes from the command-line interface, and also get a count of those volumes. The reason is that we want to prevent data loss; we often find volumes that were created without being protected. Kindly provide a solution. Many thanks.
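The kind of comparison I have in mind is something like the following (cluster name is a placeholder): list every read/write volume, list every volume that is a replication source, and treat anything in the first list but not the second as unprotected.

cluster1::> volume show -type RW -fields volume,vserver
cluster1::> snapmirror list-destinations -fields source-path

The entry count printed at the end of each command gives the totals, but I would rather not diff the two lists by hand.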
Hello, for increasing inodes I use the following command:

volume modify <volume_name> -files <number_of_files>

On a data protection volume (SnapVault destination) I get the following error when using this command:

Error: command failed: Modification of the following fields: files not allowed for volumes of the type "Flexible Volume - DP read-only volume".

Is there another way to increase inodes on these volumes? Regards, Christian
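For context, I can see the current limit and usage on the destination with something like this (SVM and volume names are placeholders):

cluster1::> volume show -vserver svm_dst -volume vol_dst -fields files,files-used

The same volume modify command works fine on the read/write source volume; it is only the DP destination that rejects it.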
I added two new nodes (FAS8300) to an existing cluster (FAS8060) in order to decommission the FAS8060. I moved all LUNs and volumes to the FAS8300, and I successfully migrated the cluster management LIF (previously on FAS8060 node 1, port e0M) to FAS8300 node 1, port e0e. But since then I cannot access System Manager using the cluster management LIF IP. Can anyone help with this one? Thanks.
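The checks I have run so far look like this (node and port names are placeholders for my environment):

cluster1::> network interface show -lif cluster_mgmt -fields home-node,home-port,curr-node,curr-port,status-oper,service-policy
cluster1::> network port show -node node1-fas8300 -port e0e -fields link,broadcast-domain

Is there anything else I should verify, for example whether e0e is in the right broadcast domain, or whether the LIF's service policy allows management-https for System Manager?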