ONTAP Discussions

Adding Custom Metadata to Volumes

TMADOCTHOMAS

Is there a way to add custom metadata to volumes in either 7-mode or cluster mode?  I need to assign departments to volumes so I can easily track how much each department is using.  I thought about using resource pools, but in cluster mode you can only assign aggregates to a resource pool, not volumes.


JoelEdstrom

7-mode - Off the top of my head I'm not aware of any blank-slate/metadata options in 7-mode.  You do have some options if you use DFM/OCUM with the CLI 'dfm comment' commands.  If memory serves, you create a 'comment field' and then apply that comment field with contents onto particular DFM IDs.  I don't have access to my lab at this exact moment but can post some examples later on if you'd like.

 

Cluster mode - you can add comments to volumes right in the CLI with the -comment argument on a volume.

 

cluster01::> volume modify -vserver test_vserver -volume test_volume -comment "this is a test comment"

cluster01::> volume show -vserver test_vserver -volume test_volume -fields vserver,volume,aggregate,comment
vserver      volume      aggregate      comment
------------ ----------- -------------- ----------------------
test_vserver test_volume test_aggregate this is a test comment

 

An alternative to both these methods is to use a common naming convention, if possible.  Many companies use some sort of unique ID system for departments, systems, applications, billing, etc.  If you have some unique identifier for the departments you could utilize that in the naming convention somehow, and then key off of that for reporting with a script to track usage.
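
For example (a rough sketch only - the dept-code prefix here is a made-up convention, not anything ONTAP enforces), a reporting script could pull the department ID straight out of the volume name:

import re

# Hypothetical convention: <dept-code>_<app>_<purpose>, e.g. "fin012_payroll_data"
volumes = ["fin012_payroll_data", "hr045_onboarding_logs"]

for name in volumes:
    match = re.match(r"^([a-z]+\d+)_", name)   # leading token = department ID
    if match:
        print(name, "->", match.group(1))      # key usage reports off this ID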

bobshouseofcards

Two possibilities come to mind to track the information you want.  I've used both in our cDot environment.

 

1.  Tracking quotas.  You could add a tree-based quota for a dummy user, with one user per department, then track the size used by that user for each volume you create.  The downside is this increases load on the controllers, and perhaps it isn't easy for you to maintain a dummy user list.  Quotas are supported across both 7-mode and cDot though, so it makes for a common solution on both platforms.

 

2.  Comments.  Downside: cDot only, but convenient.  As part of our standard volume creation we add a comment which identifies key operational details, like owning application, generation (whether source or DR copy, etc.), class of DR, and so on.  We use a JSON syntax that is easily extensible and easily convertible in scripts of any language to a data structure for processing as needed.  We have automated usage collectors across SVMs and clusters for an app as needed, automated DR setup and processing via SnapMirror with external schedulers, etc.  The comment field isn't unlimited, so we keep keys and values to minimal sizes, but it works quite nicely for both PowerShell- and Perl-based scripts.
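
As a rough illustration (the keys and the volume/SVM names here are made up, not our actual schema), the comment can hold a small JSON document that any script can parse:

import json

# Hypothetical comment set with something like:
#   volume modify -vserver test_vserver -volume test_volume
#       -comment '{"app": "payroll", "dept": "FIN", "gen": "src", "dr": "classA"}'
comment = '{"app": "payroll", "dept": "FIN", "gen": "src", "dr": "classA"}'

meta = json.loads(comment)   # comment string -> dict
print(meta["dept"])          # e.g. roll usage up by department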

 

 

Bob Greenwald

 

TMADOCTHOMAS

Joel and Bob,

 

Very helpful comments, thank you very much!  I was not aware of the comment feature.  By the way, I am mostly focused on cDot, as we are moving all prod volumes to cDot (we will leave test/dev on 7-mode for now).

 

One important follow-up question: if we make use of the comment feature, is there an easy way to import this information into a spreadsheet alongside volume names, sizes, etc.?  I'm checking in OCUM, but I don't see a check box to include the 'comment' field.  I will check and see if it shows up if I add a comment to a volume.  Would love to hear any additional thoughts.

bobshouseofcards

Command line dump is not exactly automatic but it is doable...

 

Consider:

 

cluster::> rows 0

cluster::> volume show -fields size,used,comment

 

Of course you can customize the field list per your needs - just hit the tab key after the "-fields" parameter to see what you can choose from.  The "rows 0" command turns off pagination and just dumps everything.  Combine both commands in one SSH call, redirect the output to a text file (you can separate multiple commands with a semicolon, as in "rows 0; volume show -fields size,used,comment"), and then import it into a spreadsheet.

 

Downside is that you may need some preprocessing depending on your field list, as the comments are not necessarily well delimited.  Or you could run one version with just the comment field and another for sizes, etc.
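
If you script it, the same idea looks roughly like this (a sketch only - the cluster address and key-based SSH auth are assumptions, and it relies on the comment being the last column so embedded spaces don't break the split):

import csv
import subprocess

# Assumes key-based SSH to a hypothetical cluster management LIF.
CMD = ["ssh", "admin@cluster01", "rows 0; volume show -fields size,used,comment"]
raw = subprocess.run(CMD, capture_output=True, text=True, check=True).stdout

with open("volumes.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["vserver", "volume", "size", "used", "comment"])
    for line in raw.splitlines():
        # skip blank lines, the header, the dashed separator, and the trailing count line
        if not line.strip() or line.startswith(("vserver", "-")) or line.strip().endswith("were displayed."):
            continue
        parts = line.split(None, 4)      # first four columns never contain spaces
        if len(parts) >= 4:
            writer.writerow(parts + [""] * (5 - len(parts)))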

 

If you want to get fancy, you can grab the NMSDK and turn the request into a series of API calls that return XML.  Or use the PS toolkit to make the same query from a PowerShell script.  If you are still using the old 5.x series OCUM - I too have found that cluster-specific fields are pretty much not present in the reporting tools and have resorted to custom scripts to update data.

 

Bob

 

TMADOCTHOMAS

Thank you Bob, great ideas.  I have a colleague who has done a lot with the PowerShell toolkit, so I may see if he can come up with something.  Here's what I generated in response to the earlier comments.  I see that size and total show the "GB" and "MB" designations.  I assume we would have to strip those out in the code so the numbers are treated as numbers and not text.  Any ideas here?  We can use the LEFT/RIGHT/MID Excel functions if needed, but wanted to see if anyone had a better idea.

 

Thanks again, all, for the COMMENTS field idea ... very helpful.  I think this is the right direction for us.  I may check on the dfm comment option for our test/dev volumes in 7-mode as well.

 

vserver      volume            size comment      total
------------ ----------------- ---- ------------ -------
cithqvffs01t cithqvffs01t_root 1GB  Test Comment 972.8MB
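
One option is to normalize the unit suffixes in whatever script builds the sheet rather than in Excel; a rough sketch (the suffix list is an assumption about what the CLI emits):

# Convert ONTAP-style size strings such as "1GB" or "972.8MB" to bytes
# so the spreadsheet treats them as numbers instead of text.
UNITS = {"B": 1, "KB": 1024, "MB": 1024**2, "GB": 1024**3, "TB": 1024**4}

def to_bytes(size_text):
    size_text = size_text.strip().upper()
    for suffix in ("TB", "GB", "MB", "KB", "B"):
        if size_text.endswith(suffix):
            return float(size_text[:-len(suffix)]) * UNITS[suffix]
    return float(size_text)   # already a bare number

print(to_bytes("972.8MB"))   # 1020054732.8
print(to_bytes("1GB"))       # 1073741824.0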

JoelEdstrom

I'd think Bob had it right in suggesting the NMSDK or PS toolkit if you want to start getting data from multiple command sets at once.  It's pretty easy to get specific pieces of data using their API.

 

This is an example (cut down) of the XML returned by a Python API call using the NMSDK that would get you the comments and size info all at once.

 

(everything measured in bytes by default, if memory serves)

 

...
			<volume-attributes>
				<volume-id-attributes>
					<comment>this is a test comment</comment>
					<name>test_volume</name>
					<owning-vserver-name>test_vserver</owning-vserver-name>
				</volume-id-attributes>
				<volume-space-attributes>
					<filesystem-size>53687091200</filesystem-size>
					<size>53687091200</size>
					<size-available>50999468032</size-available>
					<size-available-for-snapshots>53674270720</size-available-for-snapshots>
					<size-total>51002736640</size-total>
					<size-used>3268608</size-used>
				</volume-space-attributes>
			</volume-attributes>
...
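
For reference, the call that produces output like the above looks roughly like this (a sketch only - the hostname, credentials, and ONTAPI version are placeholders, and it ignores paging via next-tag for brevity):

from NaServer import NaServer

# Placeholder cluster address, credentials, and ONTAPI version.
srv = NaServer("cluster01", 1, 21)
srv.set_transport_type("HTTPS")
srv.set_style("LOGIN")
srv.set_admin_user("admin", "password")

result = srv.invoke("volume-get-iter", "max-records", "500")
if result.results_status() != "passed":
    raise RuntimeError(result.results_reason())

for vol in result.child_get("attributes-list").children_get():
    id_attrs = vol.child_get("volume-id-attributes")
    space = vol.child_get("volume-space-attributes")
    print(id_attrs.child_get_string("owning-vserver-name"),
          id_attrs.child_get_string("name"),
          id_attrs.child_get_string("comment"),
          space.child_get_string("size-used"))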

 

Otherwise I'm sure you could take the output from multiple CLI commands and parse it in PowerShell pretty easily too.  Whatever you or your colleague would be most comfortable with, honestly.

ostiguy

You may want to look at OnCommand Insight (OCI).

 

I have a federal customer:

 

Discovering all their ONTAP estate and VMware estate in OCI - OCI knows what VM sits where

Sourcing Host + VM application and business unit metadata from their CMDB, and annotating hosts and VMs with that data (annotations in OCI can be done manually, or via API, or via a CLI utility sourcing .csv files)

Annotating flexvols with application and business unit metadata

Using tier rules to annotate capacity with different tier values

 

All this flows into the OCI data warehouse, where reports can be written from the amalgamation of storage interrelationships as well as metadata.  This customer is programmatically executing SQL queries against this data to answer questions like:

 

"How many VMs did BU X have, which applications are on them, and which tier of storage do they use, and how much capacity has been allocated to them?"

 

OCI is the backbone of their storage and VM chargeback solution.  Some customers do billing directly out of OCI; this one is doing a wide-ranging IT chargeback elsewhere, but OCI covers the storage and VM use cases, allowing for cost recovery.

 

Matt

TMADOCTHOMAS

Joel, thank you for your comments, great advice!

 

ostiguy, I hadn't thought about Insight, thanks for the tip.  I will look at it more closely and mention it to management.

gilsmithjr

I have Linux bash scripts that probe each filer daily and collect volume and other configuration information, including the comment, into a .csv file.

 

The .csv file is then converted into an HTML table.

 

The table has a very small amount of free jQuery code which makes the "volume table" sortable by each column, plus a search-as-you-type box to filter the table down if you're looking for something specific (kind of like grep across all the table columns).  I also put in an "Export to CSV" button in case someone wants to pull any table/list over into Excel.
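
The CSV-to-HTML step itself is tiny; here's a bare-bones sketch in Python (my version is bash/sed/awk; file names here are placeholders, and the jQuery sorting/filtering layer wraps the generated table separately):

import csv
import html

# Placeholder file names: read the daily volume report, emit a plain HTML table.
with open("volumes.csv", newline="") as f:
    rows = list(csv.reader(f))

header, data = rows[0], rows[1:]
parts = ['<table id="volume-table">', "<thead><tr>"]
parts += ["<th>%s</th>" % html.escape(col) for col in header]
parts.append("</tr></thead><tbody>")
for row in data:
    parts.append("<tr>" + "".join("<td>%s</td>" % html.escape(c) for c in row) + "</tr>")
parts.append("</tbody></table>")

with open("volumes.html", "w") as f:
    f.write("\n".join(parts))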

 

Now, what I thought would be a really easy way to retrofit a comment onto 7-mode volumes is this:

 

filer> wrfile -a /vol/vol33/.comment "Test Comment"

 

filer> rdfile /vol/vol33/.comment
Test Comment
filer>

 

 

My thought was then for the HTML/admin interface to present a "volume" table across both platforms with an editable inline column for the comments.  That way they could be created, edited, and updated in the admin part of the interface.

 

I have not yet explored how to do this via the PowerShell toolkit and PowerShell, but I am sure it can be done.  Currently, all my files are bash scripts with lots of sed/awk, simply running CLI commands, saving, and parsing.  But I do plan to devote a good part of 2017 to learning the NetApp toolkit and PowerShell to see if I can convert my system over.

 

Mainly, I see the volume comment as a documentation tool, and none of the NetApp GUI interfaces that I know of allow you to view or edit it.  I don't see the DFM or OCUM comments as serving this purpose, and I like that the comment follows the object - in this case, tied to the volume.

 

--Gil
