A NetApp cluster or 7-Mode array allows multiple login users with various roles and permissions for Role Based Access Control (RBAC). This is very much required in a distributed environment where multiple users share resources. New or customized roles can be created, and login accounts can be created for those new roles.
WFA 3.0 allows only one set of credentials to be saved per cluster. So whenever any WFA user executes any workflow, the same login credentials for the cluster are used every single time.
WFA can restrict workflow availability for Operators based on categories, but it can't restrict workflow execution to specific roles on the cluster, and Operators can't have their own credentials defined. I'm providing a solution where multiple, user-specific credentials can be saved by WFA Admins/Architects for a single cluster or 7-Mode array. WFA users, typically Operators, will now use the specific credentials defined for them by the Admins. Optionally, a default credential for the cluster, usable by all WFA users, can also be defined.
See the following example.
I have a cluster with IP 10.226.162.45. I have defined 2 login accounts on my cluster with different roles.
So in my WFA I want that when user1 executes a workflow, he can only use the credentials of his account for any workflow execution. Same for user2.
For that I need the following:
1. Have 2 sets of credentials for the same cluster IP in WFA.
2. The WFA users user1 and user2 can only execute workflows using the credentials created for them, and not each other's credentials.
3. Attempting to connect via someone else's credentials should result in a failure.
4. Optionally, I may want to define non-user-specific default credentials for the same cluster, usable by all WFA users. This is what is available as of now.
What did I do? The Logic.
I defined a new way of adding credentials for a cluster and changed the logic of how WFA connects to it. Take the following example.
The new way is this: save the cluster credentials under the name username@Cluster_IP instead of just the cluster IP. I've modified the code that connects to the cluster to handle this credential-naming mechanism. The modification also ensures that users can only connect to the cluster using their own defined credentials and not anyone else's.
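The lookup logic described above can be sketched roughly as follows. This is a minimal, illustrative sketch, not the actual attached WFAWrapper.psm1 code; it assumes the stock WFA functions `Get-WfaCredentials` and `Get-WfaRestParameter` and the NetApp PowerShell Toolkit cmdlet `Connect-NcController` are available, and the exact way the executing user's name is retrieved may differ in the real module:

```powershell
# Illustrative sketch only -- not the actual modified WFAWrapper.psm1.
function Connect-WfaCluster {
    param(
        [parameter(Mandatory = $true)]
        [string]$Node
    )
    # Name of the WFA user currently executing the workflow
    # (illustrative lookup; the real module may obtain this differently).
    $executingUser = Get-WfaRestParameter "userName"

    # First look for a user-specific credential saved as <user>@<cluster IP>.
    $credentials = Get-WfaCredentials -Host ("{0}@{1}" -f $executingUser, $Node)

    if (!$credentials) {
        # Fall back to the optional default credential saved under the bare IP.
        $credentials = Get-WfaCredentials -Host $Node
    }
    if (!$credentials) {
        throw "No credentials found for user '$executingUser' and no default credentials for cluster '$Node'."
    }
    # Connect with whichever credential was found.
    Connect-NcController -Name $Node -Credential $credentials -ErrorAction Stop
}
```

The key design point is the fallback order: a user-specific credential always wins, and the shared default is used only when no per-user entry exists.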
So now I can add the 2 user credentials, for user1 saved as user1@10.226.162.45:
And for user2, saved as user2@10.226.162.45:
As you can see, I can do Test Connectivity and it succeeds too.
That's all that is needed. For the POC I have attached workflows with the following commands.
How to use it?
Download the WFAWrapper.txt attached here. Change its extension from .txt to .psm1 so that now it becomes WFAWrapper.psm1
Go to the location <WFA_Installation>\WFA\PoSH\Modules\WFAWrapper
Rename the original file WFAWrapper.psm1 to something like WFAWrapper_orig.psm1. Copy the new WFAWrapper.psm1 into that location.
Done. No need to restart any WFA services.
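The steps above can also be done from a PowerShell prompt, for example (the path below assumes a default install location; adjust it to your actual `<WFA_Installation>` directory, and run with the new WFAWrapper.psm1 in your current directory):

```powershell
# Back up the original module, then drop in the replacement.
# Adjust this path to your actual WFA installation directory.
$wfaModules = "C:\Program Files\NetApp\WFA\PoSH\Modules\WFAWrapper"

Rename-Item -Path (Join-Path $wfaModules "WFAWrapper.psm1") -NewName "WFAWrapper_orig.psm1"
Copy-Item -Path ".\WFAWrapper.psm1" -Destination $wfaModules
```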
Import the attached workflow 1 and workflow 2 to help you understand how to use it.
Note: even if you copy the new WFAWrapper.psm1 but don't use this solution, your WFA will continue to work as before. There is no regression impact.
I have a sample workflow available for the above 2 users user1 and user2.
Workflow 1: Connect to our given cluster and get the count of all qtrees in a given Vserver and volume.
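The command behind workflow 1 might look roughly like this. This is a sketch, not the attached command code; it assumes the NetApp PowerShell Toolkit cmdlet `Get-NcQtree` and the WFA functions `Connect-WfaCluster` and `Get-WfaLogger`, and the parameter names are illustrative:

```powershell
# Illustrative sketch of a "count qtrees" command, not the attached POC code.
param(
    [parameter(Mandatory = $true)][string]$Cluster,      # e.g. 10.226.162.45
    [parameter(Mandatory = $true)][string]$VserverName,
    [parameter(Mandatory = $true)][string]$VolumeName
)

# With the modified wrapper, this resolves the executing user's own
# credentials (or the default ones, if defined) for the given cluster.
Connect-WfaCluster -Node $Cluster

# Count the qtrees in the given Vserver and volume.
$qtrees = Get-NcQtree -Volume $VolumeName -VserverContext $VserverName
Get-WfaLogger -Info -Message ("Qtree count: {0}" -f @($qtrees).Count)
```

Note that `$Cluster` is a plain user input here, not a parameter mapped to the cluster's primary_address; the reason for that is explained in the limitations below.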
I have 2 sets of user credentials created for both of them in my cluster, with different roles. So per the user roles, user1 can execute both workflows using his own credentials. User2 can only use the credentials assigned to him, or the default cluster credentials if they are defined. User2 should not be able to use user1's credentials in any workflow execution and proceed.
Using the above method the WFA admin has saved the User Credentials for the above 2 users in WFA.
User1 can execute the workflow using the credentials assigned to him. See images.
If User1 attempts to use the credentials of User2, the following error is thrown.
However, both User1 and User2 are allowed to execute workflow 1 using the default credentials, if those are defined in WFA.
This Solution has some limitations.
1. Your command parameter mapping must have no reference to the Cluster/Array. This is because in the WFA DB the primary_address of the cluster will always be a single IP, and due to parameter mapping it gets automatically passed as a command parameter argument, which is not what we want here. Is this a problem? Not really, but a lot of WFA certified commands have this mapping, so use this solution with custom commands or cloned commands with this modification. Example: see the commands in workflow 1 and workflow 2 for their parameter mappings.
2. If you use a User-Input of type SQL, its value should not be locked. It should remain modifiable so that Operators can change the value for the cluster IP.
Extending the solution:
The logic used here can be applied not only to clusters or arrays but to other credentials too. You can use it in your custom command code as well.
If this post resolved your issue, help others by selecting ACCEPT AS SOLUTION or adding a KUDO.