Data Backup and Recovery

PowerShell script runs fine on NetApp 7.3.6P1 filer, but not on 8.x filers (all 7-Mode)

JOSH_BRANDON

Hello,

<disclaimer> I am new to PowerShell scripting </disclaimer>

A colleague and I cobbled together a script that attempts to log into one or more NetApp filers, collect information on the sizing of each aggregate, and output the results into a formatted Excel spreadsheet (can be new or existing). We were planning on using this as a way to automate our very high-level capacity reporting, which management has requested be done at the aggregate level. (Let's not get into technical discussions about why this isn't granular and volume/qtree levels would be more appropriate, etc.  That's for another day...)

We have the script running successfully on a test filer that is currently running Data ONTAP 7.3.6P1.  When we attempted to run the script against our "non-test" backup and production filers (production running 8.1.2P4; backup running 8.0.2P6, to be upgraded to 8.1.2P4 next week), we hit a series of errors (I will provide the script and the error output as attachments to this message). We tried to work through the errors without success, which led me to believe there may be a fundamental difference in the way the NetApp PowerShell Toolkit handles running scripts against 7.3 filers versus 8.x filers. I did some light research on it, but the answers I found were scattered and didn't confirm or rule out that suspicion.
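For context, the core of what the script does boils down to something like the sketch below. This is a trimmed-down illustration, not the actual attachment: the filer names are hypothetical, and it emits CSV lines to the console instead of driving Excel, just to keep it short. It assumes the Data ONTAP PowerShell Toolkit is installed.

```powershell
# Sketch only: hypothetical filer names, CSV output instead of Excel.
# Requires the NetApp Data ONTAP PowerShell Toolkit.
Import-Module DataONTAP

$cred = Get-Credential
foreach ($filerName in @("filer1", "filer2")) {
    # connect to the controller for this iteration
    Connect-NaController $filerName -Credential $cred | Out-Null

    foreach ($aggr in Get-NaAggr) {
        # one line per aggregate: date, filer, aggregate, sizes in GB
        "{0},{1},{2},{3},{4},{5}" -f (Get-Date -Format "yyyy-MM-dd"),
            $filerName,
            $aggr.Name,
            [math]::Round($aggr.SizeTotal / 1gb, 2),
            [math]::Round($aggr.SizeAvailable / 1gb, 2),
            [math]::Round($aggr.SizeUsed / 1gb, 2)
    }
}
```

The real script writes the same per-aggregate values into an Excel COM object rather than emitting CSV, but the toolkit calls are the same.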

I have configured a test user account on the test filer with the capabilities listed below.  I kept running the script until it errored out, at which point I added the new capability in and saved/re-ran the script.  I did this until the script completed successfully and the output was as I expected.

Name:    testrole

Info:

Allowed Capabilities: api-aggr-list-info,api-system-get-info,login-http-admin,api-system-get-ontapi-version,api-system-get-version,api-license-list-info,api-volume-get-root-name,api-file-get-file-info,api-file-truncate-file,api-file-read-file,api-file-write-file

(The user "testuser" is part of a group (testgroup) that has the testrole associated to it. The account only exists on the 7.3.6P1 filer - after creating it I discovered that I could execute the scripts without specifying credentials -- are they cached? I'm not seeing any console messages about access denied on the 8.x filer WITHOUT having them configured...)
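(For anyone recreating this, the role/group/user wiring described above can be set up from the 7-Mode console with commands along these lines. Syntax is from memory, so verify it against your ONTAP release; the capability list is abbreviated here to the first few entries from the full list shown earlier.)

```
filer> useradmin role add testrole -a api-aggr-list-info,api-system-get-info,login-http-admin
filer> useradmin group add testgroup -r testrole
filer> useradmin user add testuser -g testgroup
```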

My question: Could someone review the script/permissions and see if there are cmdlets that aren't compatible with 8.x filers? ...or point me in the right direction to information that can assist me in doing so?

Explanation of Attachments:

- Successful_test_filer_output.xlsx: this is the expected output that is produced when I run the script against my test filer (running Data ONTAP 7.3.6P1 )

- aggregate_automation.script.ps1: this is the PowerShell script that fails when I attempt to execute it against the 8.x filers. It works just fine against the 7.3.6P1 test filer.

- error_output_against_8.x_script: this is a copy-paste of the errors that my colleague and I see when we run the script against the 8.x filers.

Thanks in advance to any/all who are willing to help.

Josh

Replaced changed_automation_script with a version that has the authentication piece removed (it wasn't being used anyway).


5 REPLIES

markweber

i would guess that your 7.3.6P1 filer only has 1 aggregate per controller and your 8.x filers have more than one.

instead of using the $aggr object from your foreach ($aggr in Get-NaAggr) loop, you are calling Get-NaAggr again (once per property) to populate variables, which negates the loop - when you hit a system with more than one aggregate, each of those extra calls returns an object with multiple entries in it, hence the error about not being able to divide an object.

try changing the loop to this and see if it works:

foreach ($aggr in Get-NaAggr)
{
    # enter info into the spreadsheet
    $workSheet.cells.Item($lastRow,1) = $date
    $workSheet.cells.Item($lastRow,2) = $filerName
    $workSheet.cells.Item($lastRow,3) = $aggr.name
    $workSheet.cells.Item($lastRow,4) = [math]::Round($aggr.sizetotal / 1gb, 2)
    $workSheet.cells.Item($lastRow,5) = [math]::Round($aggr.sizeavailable / 1gb, 2)
    $workSheet.cells.Item($lastRow,6) = [math]::Round($aggr.sizeused / 1gb, 2)

    # keep track of the rows
    $lastRow++
}

(FYI - you would probably get a faster response in the powershell group: https://communities.netapp.com/community/products_and_solutions/microsoft/powershell)

mark

JOSH_BRANDON

Mark,

Thanks for the reply! You were absolutely correct in that the test filer had a single aggregate and our production filers have multiple aggregates. I edited the for loop as you requested and the results are now properly being output into the spreadsheet.

$aggrName = Get-NaAggr | Select -ExpandProperty "Name"

Since that line of code was removed, the aggregate names no longer display in the spreadsheet (go figure).  What is the proper way to rework that part of the script so the aggregate names show up correctly? I uploaded a copy of the new output to this message (Changed_Script_Excel.xlsx) and I will add a new copy of the source code as well (changed_automation_script.ps1).

markweber (ACCEPTED SOLUTION)

looks like you missed the change on the line that sets the aggregate name -

in your new version, you still have:

$workSheet.cells.Item($lastRow,3) = "$aggrName"

try changing it as above to:

$workSheet.cells.Item($lastRow,3) = $aggr.name

and see if it works then

mark

JOSH_BRANDON

Worked like a charm. Sorry for missing such an obvious fix!

I wanted to pass thanks from one University to another - you just helped us save about an hour each month of manual number crunching and data-gathering!!

Josh

markweber

glad to help!

we push capacity data into MSSQL from our storage platforms for billing reconciliation and so we can report on trending, etc.

On the NetApp side, we use powershell to do all of that - it works really well.

I also just finished a powershell script to update a sharepoint list with volume level details (name, controller, provisioned size, used size, snap used, etc) so our help desk has better visibility into what is going on.

i'll probably post some of the underlying scripts to the powershell group so others can use them (or make them better)

mark
