2013-08-06 01:27 PM
We have a 2240 with 2 shelves and 2 controllers. In VSC 4.1 in vSphere, under Monitoring and Host Configuration, I see both controllers, but when I go to provision a new NFS datastore through VSC it only gives me one controller to choose from, and it is not the controller (vserver, I believe, is the proper terminology) where I want to create the NFS datastore. We are running ESXi 5.0.0 build 1117897, and the NetApp is on Data ONTAP 8.1.3 7-Mode.
Thanks for any troubleshooting advice :-)
Solved!
2013-08-06 05:03 PM
Depending on whether you're running cDOT (clustered Data ONTAP) or 7-Mode on the controllers, it is possible that the version of VSC you're running isn't able to enumerate the interfaces/volumes/aggregates, or that the vserver itself doesn't have any aggregates assigned.
If you're running 7-Mode 8.1.3, then I'd recommend updating VSC to 4.2 and retesting.
If you're running cDOT 8.1.3, I'd recommend verifying that the vserver itself has aggregates assigned under "list of aggregates assigned" in the properties of the vserver on the controller. Further details on how to find this information and add aggregates can be found in this KB: https://kb.netapp.com/support/index?page=content&id=2017574
2013-08-06 05:10 PM
My apologies, I just realized that the KB posted is actually internal - while we work on remedying this and publishing it publicly, I wanted to ensure that you've got the directions on modifying the vserver properties:
To show the existing list of available aggregates: vserver show -instance -vserver <vservername>
To set the list of available aggregates on the vserver: vserver modify -vserver <vservername> -aggr-list <aggregate list>
Note that -aggr-list takes the complete list of aggregates, so when adding one, include the existing aggregates along with the new one rather than just the aggregate being added.
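To make that concrete, here is what it might look like for a hypothetical vserver named vs1 that already has aggr1 assigned and should also be able to use aggr2 (the vserver and aggregate names here are just examples, not from your environment):

vserver show -instance -vserver vs1
vserver modify -vserver vs1 -aggr-list aggr1,aggr2

After the modify, re-run the show command and confirm both aggregates appear in the list; VSC should then be able to provision onto either one.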
2013-08-08 11:29 AM
Thanks for the reply. I was thinking of upgrading to VSC 4.2, but before I do anything I just want to make sure it won't screw up our production environment. We already have 2 NFS mount points mounted by the ESXi servers from the controller that is not appearing in the "Provisioning and Cloning" section of VSC.
I am not 100% sure how the interfaces are configured on the NetApp; there are a lot of IPs involved. What I find interesting is that under "Monitoring and Host Configuration" -> "Overview" the controller in question is listed as using 192.168.200.44, but when I go to the "Configuration" tab for an ESXi server and then "Storage", I see one NFS mount point is connected to 192.168.200.44 and the second is connected to 192.168.200.39. I remember our consultant asking us for a second IP address per controller when setting up the NetApp (something about one ending in an even number and the other in an odd number so it could spread the load across different NICs). Could it be that I should have that controller configured under "Monitoring and Host Configuration" -> "Overview" using that second IP, 192.168.200.39, instead?
I know at one point when our consultant set this up last month it worked, I was able to provision an NFS datastore from that shelf using the VSC.
Thanks for any help that you might be able to provide :-)
2013-08-08 11:37 AM
Upgrading to VSC 4.2 won't impact the production environment; in addition to the bug fixes, it has new RBAC features, but these won't break anything (especially if you test with, and keep using, the Administrator and root accounts for vCenter and the controller, respectively).
With the version you're running, the likely answer for why you sometimes can't see the interfaces/volumes is a known issue that was first patched in 4.1P1 and is fully fixed and QA'd in VSC 4.2, so 4.2 is the recommended upgrade path.
When the controller is added to Monitoring and Host Configuration, regardless of which of its IPs appears on that screen, VSC will enumerate all IPs from that storage controller; in 4.1 it may fail to do so because of the aforementioned bug.
2013-08-08 12:30 PM
Cool, thanks! We are going to be moving our VSC to a different server tomorrow so we will install VSC 4.2 there and register it with our vCenter server and see if it helps the issue.