ONTAP Discussions
Hello,
We will be installing two 10 GbE NICs (one per controller) in our FAS3210 later this afternoon, and I wanted to post the question here to see if there is anything we should be looking out for. If someone has steps for us to follow, that would be even better. I tried navigating the NOW site, but there is a huge delay (upwards of 5 minutes) between clicking a link and the page finally loading, so I gave up.
In addition, once the 10 GbE NICs have been installed, we are looking to begin migrating from our traditional Fibre Channel SAN to NFS connected directly to our Cisco UCS topology. I'm curious whether anyone has any horror stories or input on this process as well.
Thanks,
Mark
Hi there, let's start with the first part of your question: the installation of the two 10GbE NICs.
We need more information before we can give you an accurate answer.
1. What version of Data ONTAP are you running?
2. What slots do you have available in your 3210?
3. What cards are currently installed in the system, and in what slots?
4. What is the part number of the 10Gb NIC?
With this information we can go to the system configuration guides (Hardware Universe 2.0) and determine which slot is preferred for installation.
http://support.netapp.com/knowledge/docs/hardware/NetApp/syscfg/index.shtml
We can check the NIC specifications, which include details about the LEDs and pointers to further information.
http://support.netapp.com/documentation/productlibrary/index.html?productID=61123
We can then use the Data ONTAP 8.0 7-Mode Network Management Guide to configure the interface once installed.
https://library.netapp.com/ecm/ecm_get_file/ECMM1277792
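To give you a rough idea ahead of time, the basic 7-Mode interface setup for one of the new ports looks something like the sketch below. The interface name (e1a assumes slot 1), IP address, netmask, and MTU are placeholders; substitute your own values per the Network Management Guide.

    # configure the first 10GbE port on this controller
    ifconfig e1a 192.168.10.10 netmask 255.255.255.0 mtusize 9000 partner e1a

    # append the same line to /etc/rc so the config survives a reboot
    wrfile -a /etc/rc ifconfig e1a 192.168.10.10 netmask 255.255.255.0 mtusize 9000 partner e1a

The partner option will matter for your later NFS plans: it tells this interface which interface on the other controller assumes its identity during a takeover. Jumbo frames (mtusize 9000) only help if every hop in the path is configured for them; otherwise leave the MTU at the default.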
Hello, we will be upgrading to 8.0.4 just before the install of the new NIC. There are currently no other cards installed, and the part number of the 10GbE NIC is X1107A-R6.
Thanks,
Mark
Ok cool. The install should be pretty straightforward then: slot 1 or 2 is preferred for the NIC. Since you are upgrading ONTAP prior to the event, I would suggest making sure the upgrade completes cleanly before introducing any new hardware. One thing at a time, ya know.
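Once the upgrade is done and the card is seated, it's worth confirming the system actually sees it before touching any network configuration. A quick check from the console (the slot number and output below are just illustrative; yours will differ):

    sysconfig -a

    # the X1107A-R6 should show up as a dual-port 10GbE adapter, e.g.
    #   slot 1: Dual 10G Ethernet Controller ...

If the card doesn't appear in the expected slot, power down and reseat it before troubleshooting anything else.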
I'm checking the Net2 tool (http://net2.netapp.com/netapp/clientDownload.jsp) for detailed installation instructions. Will respond with additional info in just a few.
Thank you. Once the card is installed, we will be moving down the path of NFS storage for our ESXi environment. We are mostly concerned with failover behavior in the NFS configuration. Should one of the FAS3210 controllers fail, how long would it take for full failover to the second controller?
"Should one of the FAS3210 controllers fail, how long would it take for full failover to the second controller?"
Here is the textbook answer.
A cluster consists of two systems in an active-active configuration. Unlike alternative active-passive clustering schemes, both systems in the cluster actively serve data during normal operation. Both systems in a NetApp cluster are connected to the same networks and disks. In normal operation, each system is responsible for data service from a subset of the disks. Should one system fail, the other assumes its identity and takes over its workload. Failover occurs automatically. Network File System (NFS) users notice a slight pause in data service, while applications see only a minimal delay (and are not shut down and restarted). Common Internet File System (CIFS) users, depending on the application, may have to reconnect to the filer upon failover completion. Failover can also be initiated manually for administrative purposes, allowing one system to be taken offline for maintenance or hardware upgrade while maintaining continuous data service, further reducing planned downtime.
The exceptional availability of NetApp clusters results in large part from a high-speed, low-latency interconnect that joins the two cluster members and allows all NVRAM file system journal data to be mirrored. Mirroring the file system data stored in NVRAM ensures that one appliance can take over from the other seamlessly and immediately, with no chance of data loss. The WAFL file system uses NVRAM to log all file system data and metadata, and since all NVRAM data is mirrored, data loss is eliminated. Other clustered servers typically cache data and metadata in system memory, where it is subject to loss if a failure occurs; recovering lost metadata can significantly delay the time needed to bring data back online.
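The practical way to answer the "how long" question for your environment is to run a controlled takeover/giveback during a maintenance window and time the NFS pause yourself before any production cutover. A minimal sketch of the 7-Mode console commands:

    # confirm the HA pair is healthy before testing
    cf status

    # take over the partner's identity (run on the node that will survive)
    cf takeover

    # ...watch that the ESXi NFS datastores stay mounted, then return service
    cf giveback

Because NFS runs over TCP and the surviving node assumes the partner's IP addresses, ESXi typically rides through a clean takeover as a brief pause rather than a disconnect, but the number you measure in your own setup is the only one that counts.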
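As for the migration itself, the filer side of presenting NFS to UCS/ESXi boils down to enabling NFS and exporting a volume to your VMkernel subnet. A hedged sketch; the license code, volume name, aggregate, size, and subnet are all invented placeholders:

    # license and start NFS (skip if already enabled)
    license add <nfs-license-code>
    nfs on

    # create a volume for the datastore and export it read-write with root access
    vol create nfs_ds1 aggr1 500g
    exportfs -p rw=192.168.10.0/24,root=192.168.10.0/24 /vol/nfs_ds1

On the ESXi side you would then mount 192.168.10.10:/vol/nfs_ds1 as an NFS datastore. Restricting rw and root to the VMkernel subnet keeps the export from being writable by everything else on the network.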