2015-10-15 03:52 PM
So I am deploying a 4 node cluster with about 10 aggregates per node.
Each of these aggregates will have about 100 volumes each.
I can iterate to create 10 aggregates per node (once per node).
The question is how do I iterate within the aggr creation to create 100 volumes in each aggr.
With the available options, I would have to iterate for every aggregate, which would be about 40 individual iterations.
Is there a better way to achieve this?
2015-10-15 10:02 PM
I have a very good solution for Nested Looping in WFA. I'll post it today. I'll also explain the logic I've used.
I'm preparing the write-up, so kindly wait.
2015-10-16 04:16 AM - edited 2015-10-19 03:06 AM
WFA doesn't provide nested looping. But with some innovative and intelligent workflow design you can very well achieve it.
Let's see how.
I'm taking an example where I need to
Create 'l' aggregates.
On each of those 'l' aggrs, I need to create 'm' volumes.
And on each of those 'm' volumes, I need to create 'n' qtrees.
It's a 3-level nested structure, but this design can handle absolutely any level of nesting: 4, 5, 6... anything.
You first need to plan. Plan the naming pattern for the objects at each loop level; it will be important. You need a way to keep the names of the created objects unique. Using a time-stamp is very useful here.
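To illustrate the naming idea, here is a small Python sketch (the `vol` prefix, the `__` separator, and the timestamp format are my own choices for illustration; WFA itself builds such names from user inputs and functions inside the workflow):

```python
from datetime import datetime, timezone

def unique_names(prefix, count):
    """Build names like vol__20151016T041600__1 so that a later
    'name like' filter matches exactly this batch of objects and
    nothing that existed before."""
    stamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%S")
    return [f"{prefix}__{stamp}__{i}" for i in range(1, count + 1)]

names = unique_names("vol", 3)
# e.g. ['vol__20151016T041600__1', 'vol__20151016T041600__2', ...]
```

The timestamp makes each execution's batch distinguishable, which is exactly what the inner-loop filters will rely on.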
The outermost level, i.e. the outermost loop, will be a command that needs to be looped 'l' times. Straightforward; no issues here.
Now I need some filters.
For every inner loop, you need to create filters that will be able to return ONLY the objects created by the immediate outer loop. One such filter for every inner loop.
In our case the 2nd inner loop is to create volumes, so I need to create a filter that will return only the aggrs that got created in the outer loop.
Similarly, for the 3rd inner loop, i.e. create qtrees, I need a filter that will return ONLY the volumes created in the previous loop.
The best way to do this is to have "Filter with name like". So have your objects created in a naming pattern that doesn't exist previously.
My cluster already has aggr0 and aggr1. Now I create aggrs with names like aggregate___1, aggregate___2, aggregate___3, etc., and volumes like vol__<time_stamp>__1, vol__<time_stamp>__2, etc.
So now I can have a filter to query aggregates with name like %aggregate___%, which will return only the aggrs just created. Create one finder, containing just that single filter, for each of these filters.
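The filter logic itself is just a SQL-style LIKE match on the object name. A small Python sketch of what the %aggregate___% filter effectively does (the aggregate names here are the example ones from this thread; the `like` helper is my own illustration, not a WFA API):

```python
import re

def like(name, pattern):
    # Translate a simple SQL LIKE pattern ('%' wildcards only)
    # into a regular-expression match.
    regex = "^" + ".*".join(re.escape(p) for p in pattern.split("%")) + "$"
    return re.match(regex, name) is not None

aggrs = ["aggr0", "aggr1", "aggregate___1", "aggregate___2", "aggregate___3"]
created = [a for a in aggrs if like(a, "%aggregate___%")]
# → ['aggregate___1', 'aggregate___2', 'aggregate___3']
```

Because the pre-existing aggr0 and aggr1 do not match the pattern, the filter returns only the batch created by the outer loop, which is the whole trick.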
For every inner loop, you need a child workflow in the next row. The child-workflow row will be looped 'm' times. Inside the child workflow you have only your command, looped for each member in a group. Now
you can use your finder to return each of the objects created in the immediate outer loop.
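The whole design can be sketched in plain Python (a sketch only; the names, separators, and the `inventory` list standing in for the WFA cache database are all invented for illustration). The key point is that each inner level re-discovers its parents via a prefix filter, instead of receiving them as loop variables:

```python
inventory = []  # stands in for the WFA cache database

def create(name):
    inventory.append(name)

def find(prefix):
    # plays the role of the WFA finder + "name like" filter
    return [obj for obj in inventory if obj.startswith(prefix)]

l, m, n = 2, 2, 2  # aggrs, volumes per aggr, qtrees per volume

for i in range(1, l + 1):                  # outer loop: main workflow row
    create(f"aggregate___{i}")

for aggr in find("aggregate___"):          # child workflow 1: volumes
    for j in range(1, m + 1):
        create(f"{aggr}/vol___{j}")

for vol in [o for o in inventory if "/vol___" in o]:  # child workflow 2: qtrees
    for k in range(1, n + 1):
        create(f"{vol}/qtree___{k}")

print(len(inventory))  # 2 aggrs + 4 volumes + 8 qtrees = 14
```

No loop variable ever crosses a workflow boundary; the naming convention and the filters carry that information instead, which is why this pattern scales to any nesting depth.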
It's much easier to understand by looking at the workflow itself, so let's do it.
Import the given workflow Workflow_to_Create_l_aggrs__m_volumes_per_aggr__an
Provide inputs as in the image below and preview it. Provide a small number of objects first to understand the flow, then scale up as you wish. Watch the preview to see what's happening.
2015-10-19 03:14 AM
I've updated my last post with a new .dar file. The changes are as follows:
1. The .dar version is for WFA 3.0P2 and above, so it can be imported on WFA 3.0P2 or WFA 3.1 (Windows or Linux).
2. Fixed some bugs in filters.
3. Removed the dependency of a function to return the UTC time.
Note: Remember the logic used in this workflow to achieve Nested Looping. So every single time you execute this workflow, you need to provide new prefixes for aggregates, volumes and qtrees.
Execution 1 has object-name prefixes such as: aggr_netapp__01_
The next execution should use a different prefix pattern, for example: aggr_netapp__02_, vol_netapp__02_, qtree_netapp__02_, etc.
2015-10-19 09:52 AM
It makes sense now. I was trying to achieve this within a single workflow.
I had already created a workflow that creates one volume per execution, and I was hoping to extend that same workflow.
I think WFA should support nested loops; it would seem this is a basic requirement for any automation tool.
I will follow up with Netapp on this.