Workflow for SLC/SLO prototype

by aankur Former NetApp Employee on ‎2012-08-22 03:48 PM

The workflow below performs the following steps:

1. Create the primary volume with the proper aggregate selection (aggregate selection is based on diskType, raidGroupSize, HA partner, and raidType); see the ZAPI sketch after this list.

2. Set the volume attributes based on the space guarantee (thick provisioning, thin provisioning, or overwrites succeed).

3. Create the QoS workload for the primary volume.

4. Create the mirror volume with the proper aggregate selection (again based on diskType, raidGroupSize, HA partner, and raidType).

5. The mirror volume is of type "DP".

6. Create the QoS workload for the mirror volume.

7. Create the SnapMirror relationship between the primary and mirror volumes.
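
For steps 1, 4/5, and 7, here is a minimal Perl sketch of the underlying ZAPI calls, assuming the NetApp Manageability SDK bindings (NaServer/NaElement); the cluster address, credentials, vserver, aggregate, and volume names are all placeholders, and the real WFA commands wrap equivalent logic:

    # Minimal sketch only: placeholder host/credentials/names, no WFA plumbing.
    use strict;
    use warnings;
    use NaServer;

    my $server = NaServer->new('cluster-mgmt.example.com', 1, 15);
    $server->set_style('LOGIN');
    $server->set_admin_user('admin', 'password');
    $server->set_transport_type('HTTPS');
    $server->set_vserver('svm1');   # tunnel calls to the owning vserver

    # Steps 1-2: primary volume on the selected aggregate; the space
    # guarantee maps to 'volume' (thick) or 'none' (thin).
    my $out = $server->invoke('volume-create',
        'volume'               => 'vol_primary',
        'containing-aggr-name' => 'aggr_primary',
        'size'                 => '100g',
        'space-reserve'        => 'volume');
    die $out->results_reason() if $out->results_status() eq 'failed';

    # Steps 4-5: mirror destination volume, created as type DP.
    $out = $server->invoke('volume-create',
        'volume'               => 'vol_mirror',
        'containing-aggr-name' => 'aggr_mirror',
        'size'                 => '100g',
        'volume-type'          => 'dp');
    die $out->results_reason() if $out->results_status() eq 'failed';

    # Step 7: SnapMirror relationship between the two volumes.
    $out = $server->invoke('snapmirror-create',
        'source-location'      => 'svm1:vol_primary',
        'destination-location' => 'svm1:vol_mirror');
    die $out->results_reason() if $out->results_status() eq 'failed';

The QoS steps are omitted from this sketch because, as discussed in the comments below, the QoS ZAPI surface in ONTAP 8.1.1 was limited.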

Comments
Frequent Contributor

Hi Ankur,

Few questions on this workflow:

1. Why did you choose Perl over PoSH for the commands? And all but Create VSM are in Perl? Why the mix of languages?

2. In the create volume command, you query DFM for disk info, but for raid_size, raid_status, ha_partner, etc. you talk to the cluster directly. Any particular reason?

3. Which version of DFM are you using? I hope ONTAP is 8.1.1, for QoS support? If DFM is 5.1, you might as well write new cache tables to gather data like raid_size and raid_type, and write corresponding filters and finders to make the solution more robust. Basically using find charts. Btw, more C-mode support is coming in WFA 2.0, in terms of more objects from DFM and the corresponding cache, filters, finders, etc.

Thanks

Tanmoy

aankur Former NetApp Employee

Hi Tanmoy,

Sorry for the late response.

1) Why did you choose Perl over PoSH for the commands? And all but Create VSM are in Perl? Why the mix of languages?

- For my internship project I wrote all of my command code in Perl because I was familiar with that language. The certified Create VSM command is in PoSH, but the version I needed for my workflow was a bit different, so I made a copy of it and changed the object type of the volume.

2) In the create volume command, you query DFM for disk info, but for raid_size, raid_status, ha_partner, etc. you talk to the cluster directly. Any particular reason?

- The reason is that DFM does not monitor the raid_size, raid_status, ha_partner, etc. information I needed for my aggregate selection. That is why I used separate ZAPI calls to get that data, and then found the common aggregates between the DFM results and the ZAPI results.
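
As an illustration of that approach, here is a minimal Perl sketch, again assuming the NetApp Manageability SDK bindings and placeholder host, credentials, and RAID criteria; it pulls RAID attributes via the aggr-get-iter ZAPI and intersects the result with @dfm_aggrs, a placeholder list standing in for the aggregate names returned by the DFM query:

    # Minimal sketch only: placeholder host/credentials; pagination via
    # max-records/next-tag is omitted for brevity.
    use strict;
    use warnings;
    use NaServer;

    my $server = NaServer->new('cluster-mgmt.example.com', 1, 15);
    $server->set_style('LOGIN');
    $server->set_admin_user('admin', 'password');
    $server->set_transport_type('HTTPS');

    # aggr-get-iter exposes the RAID attributes that DFM was not monitoring.
    my $out = $server->invoke('aggr-get-iter');
    die $out->results_reason() if $out->results_status() eq 'failed';

    my @zapi_aggrs;
    my $attrs = $out->child_get('attributes-list');
    if (defined $attrs) {
        for my $aggr ($attrs->children_get()) {
            my $raid = $aggr->child_get('aggr-raid-attributes');
            next unless defined $raid;
            next unless $raid->child_get_string('raid-type') eq 'raid_dp';
            next unless $raid->child_get_int('raid-size') == 16;
            push @zapi_aggrs, $aggr->child_get_string('aggregate-name');
        }
    }

    # Keep only the aggregates that both DFM and the ZAPI call agree on.
    my @dfm_aggrs  = ('aggr1', 'aggr2');   # placeholder for the DFM result
    my %seen       = map { $_ => 1 } @zapi_aggrs;
    my @candidates = grep { $seen{$_} } @dfm_aggrs;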

3) Which version of DFM are you using? I hope ONTAP is 8.1.1, for QoS support? If DFM is 5.1, you might as well write new cache tables to gather data like raid_size and raid_type, and write corresponding filters and finders to make the solution more robust. Basically using find charts. Btw, more C-mode support is coming in WFA 2.0, in terms of more objects from DFM and the corresponding cache, filters, finders, etc.

- My DFM version is 5.1 (cluster mode). ONTAP 8.1.1 has some QoS support, but the fields I wanted to set were not present, so I just created the QoS workload for the volume. I would have preferred to extend the schema as you suggested, but as I mentioned earlier, DFM does not monitor that data, so I was not able to extend my schema to gather the information.

I hope this information is useful for you.

Regards,

Ankur
