While this is possible, I'm not sure it would net you the desired results. Any form of post-process data tiering moves the data too late: if you move data based on old access patterns, you are assuming the access pattern will remain consistent. It probably makes more sense to put your performance-sensitive data on your best-performing tier, monitor access using OnCommand Unified Manager, and then do one-time vol moves to place the data where it needs to be if you see a historical trend of low- or high-performing I/O. By leveraging some of our flash technology (FlashCache, FlashPool, FlashAccel), even data on SATA will be accelerated on demand at the time of access. There isn't a need to mess with complicated data-movement policies with NetApp.
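The "monitor, then do a one-time vol move" idea above can be sketched as a simple decision rule. This is a hedged illustration only: the thresholds, tier names, and IOPS figures are hypothetical, and real numbers would come from Unified Manager / Performance Advisor counters.

```python
# Hypothetical sketch: suggest a one-time vol move from historical performance
# data, rather than continuous post-process tiering. Thresholds are made up.

def suggest_move(history_iops, hot_threshold=5000, cold_threshold=500):
    """Return a suggested tier for a volume given its daily average IOPS history."""
    if not history_iops:
        return "keep"  # no data -- don't move on guesswork
    avg = sum(history_iops) / len(history_iops)
    if avg >= hot_threshold:
        return "ssd"   # sustained high I/O: promote to the fast tier
    if avg <= cold_threshold:
        return "sata"  # sustained low I/O: demote to capacity disks
    return "keep"      # middle ground: leave it where it is

print(suggest_move([8000, 9500, 7200]))  # sustained hot workload
print(suggest_move([120, 90, 300]))      # sustained cold workload
```

The point is that the move is a deliberate, one-time action driven by a historical trend, not an automatic policy reacting to stale access patterns.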
TL;DR: This is possible but requires some additional components.
It is possible to do this, and it's not a bad idea. The first challenge is to identify the LUN performance criteria and determine the thresholds. Since WFA can use Performance Advisor as a data source, you can gather this information from it; that gives you the data needed to make the determinations. The next step is to create appropriate Finders that determine which LUNs fall into the right grouping. These Finders will be used in the workflows to generate a repeat row for the moves.
Now, create three individual workflows (technically this could be one, but you may want to move some data more frequently depending on needs and type). The idea is to use a repeat-row condition to find a condition and a list of LUNs, then determine whether the volume containing each LUN matches the group list of correct aggregates. That grouping could be based on a Unified Manager resource group, on naming, or on disk type (the latter is cached, so it may be the easiest). Use a No-Op cluster command to look up this value, and set it to disable the command if the value is not found. Then use a second No-Op cluster command to find a suitable destination aggregate based on the correct Finder.
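The two No-Op checks above amount to "is the volume already in the right group?" and "if not, which aggregate should receive it?". A hedged sketch in plain Python, with hypothetical aggregate names and groupings (in WFA these would be Finders against the cached inventory, not a hard-coded dict):

```python
# Hypothetical sketch of the two No-Op checks. Aggregate names, groups, and
# free-space figures are placeholders for what the Finders would return.

AGGR_GROUPS = {
    "ssd":  ["aggr_ssd_01", "aggr_ssd_02"],
    "sas":  ["aggr_sas_01"],
    "sata": ["aggr_sata_01", "aggr_sata_02"],
}

def needs_move(current_aggr, target_group):
    """First No-Op: is the volume already on an aggregate of the right group?"""
    return current_aggr not in AGGR_GROUPS[target_group]

def pick_destination(target_group, free_space_mb):
    """Second No-Op: find a suitable aggregate in the group (most free space)."""
    candidates = [a for a in AGGR_GROUPS[target_group] if a in free_space_mb]
    if not candidates:
        return None  # nothing suitable found -- disable the move command
    return max(candidates, key=lambda a: free_space_mb[a])

# A volume sitting on SATA that the workflow classified as belonging on SSD:
if needs_move("aggr_sata_01", "ssd"):
    dest = pick_destination("ssd", {"aggr_ssd_01": 2000, "aggr_ssd_02": 5000})
    print(dest)  # the SSD aggregate with the most free space
```

Returning `None` mirrors disabling the downstream command when no suitable aggregate is found, which is what keeps the workflow from attempting an errant move.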
The next step is to perform the volume move. Set up the command using the information found so far. Finally, configure the Advanced tab of the second No-Op and of the Volume Move command to execute only if the volume is not already on the right aggregate (i.e., if the first No-Op was not found). This step will help prevent errant moves.
Rinse and repeat based on your criteria; you should end up with three workflows, one per disk type. Once this is done, you will want to execute these workflows remotely on a schedule. Take a look at the WFA REST API guide for guidance on execution, but in a nutshell you would create a PowerShell script that runs as a scheduled task, and that script would execute the workflows.
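The remote-execution step boils down to one REST call per workflow. The answer suggests PowerShell; the same idea is sketched here in Python. The `/rest/workflows/{uuid}/jobs` endpoint and the XML input body are my reading of the WFA REST API guide, so verify them against your WFA version; the host, UUID, and user input below are placeholders.

```python
# Hedged sketch: starting a WFA workflow over REST from a scheduled task.
# Endpoint shape and XML body are assumptions based on the WFA REST API guide;
# host, workflow UUID, and user inputs are placeholders.
from xml.sax.saxutils import quoteattr

def job_url(base, workflow_uuid):
    """URL to POST to in order to start a new job for the given workflow."""
    return f"{base}/rest/workflows/{workflow_uuid}/jobs"

def job_body(user_inputs):
    """XML body carrying the workflow's user-input values."""
    entries = "".join(
        f"<userInputEntry key={quoteattr(k)} value={quoteattr(v)}/>"
        for k, v in user_inputs.items()
    )
    return f"<workflowInput><userInputValues>{entries}</userInputValues></workflowInput>"

url = job_url("https://wfa.example.com", "a1b2c3d4")  # placeholder host and UUID
body = job_body({"DiskType": "SATA"})                 # placeholder user input
print(url)
print(body)
# To actually submit the job, POST `body` to `url` with content type
# application/xml and HTTP basic auth (e.g. via urllib.request or requests).
```

Run one such call per workflow from the scheduled task, and the three tiering workflows fire on whatever cadence the task scheduler dictates.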
Theoretically this will work, though I have not built it on my end.