2016-03-08 02:43 PM
We are building the network around our NetApp cluster, for which we have created 4 ifgrps of 2 ports each (1x 10Gb and 1x 1Gb) per controller (x2). We have also created several VLANs to separate our management traffic from our data-access traffic. We'd like our management traffic on a public-facing VLAN in order to configure NetApp AutoSupport, but we're currently unable to ping in or out of the management network. On initial setup, cluster management was on a private VLAN; it has since been modified to go through the public-facing VLAN, but the private VLAN's gateway remains the default gateway in the routing table for all traffic, i.e. destination 0.0.0.0/0 -> [private vlan gateway]. If we add a static route for the public VLAN, we're still not able to ping in or out through the network on the public VLAN.
Can someone point me in the right direction so we can ping and route traffic on all VLANs assigned to the NetApp cluster/nodes? I have read that IPspaces may need configuring first, with routes/gateways assigned to the IPspace, but I'm not sure how to go about this...
Thanks in advance
2016-03-08 04:26 PM
I don't think you need an IPspace. Every SVM, including the admin SVM (the non-data-serving cluster SVM), has its own routing table. Run net route show, then net route delete and create (or modify) as needed to point the default route at the public side.
Also note you can have more than one management interface to different networks if needed.
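Something like this from the clustershell (a sketch only; "cluster1" and the gateway addresses are placeholders for your admin SVM name, the old private gateway, and the new public gateway):

```
::> network route show

# Remove the default route that still points at the private VLAN gateway
::> network route delete -vserver cluster1 -destination 0.0.0.0/0 -gateway 10.0.0.1

# Add a default route via the public-facing VLAN gateway
::> network route create -vserver cluster1 -destination 0.0.0.0/0 -gateway 203.0.113.1

::> network route show -vserver cluster1
```

Routes are owned per SVM, not per node, so once the admin SVM's default route points at the public gateway, the cluster management LIF should be reachable regardless of which node it's currently homed on.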
2016-03-09 01:13 AM
We've not started creating SVMs just yet; we're just checking that our network config is working. Would I need to add the static routes on each controller node?