ONTAP Hardware
I have a new FAS2720. Previously I had a FAS2220 running 7-Mode. This FAS2720 is my first experience with clustered mode and the "new" (to me, anyway) version of ONTAP. The SVM concept is definitely new to me, and the way you configure interfaces is somewhat different from 7-Mode.
I have created an SVM with both NFS and iSCSI enabled for the purpose of attaching hosts directly to the NetApp (i.e., VMware and iSCSI directly from servers).
When I create an interface and choose the proper SVM, I am unable to choose both NFS and iSCSI; if I choose one, it greys out the other. Apparently it only wants one of these protocols on a single interface at a time; it doesn't appear to want iSCSI and NFS sharing an interface.
However, it will let me create two separate interfaces, one for NFS and one for iSCSI, that both have the SAME IP address. In 7-Mode I had one IP address as the connection point for all services, which was simple and clean in my mind.
Is it acceptable to use a single IP? Or should I be using separate IP addresses for the NFS and iSCSI interfaces?
NAS and SAN LIFs have different failover policies, so you cannot really combine them in a single interface. Use two different LIFs.
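For illustration, creating the two LIFs from the ONTAP CLI could look something like this (the SVM, node, port, and address values here are placeholders for your environment):

    network interface create -vserver svm1 -lif lif_nfs1 -role data -data-protocol nfs -home-node node1 -home-port e0c -address 10.10.10.10 -netmask 255.255.255.0
    network interface create -vserver svm1 -lif lif_iscsi1 -role data -data-protocol iscsi -home-node node1 -home-port e0d -address 10.10.10.11 -netmask 255.255.255.0

ONTAP then applies the appropriate failover policy to each LIF based on its protocol.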
@aborzenkov wrote:
NAS and SAN LIFs have different failover policies, so you cannot really combine them in a single interface. Use two different LIFs.
Yes, it will not allow me to combine them both on a single LIF, so right now I have two separate LIFs, one for NFS and one for iSCSI, precisely as you state.
However, and this is my question: even though I have two separate LIFs, the NetApp does allow me to assign the SAME IP address to both of these separate LIFs. Is assigning the same IP address OK to do?
Example:
LIF-1 for iSCSI - IP is 10.10.10.10
LIF-2 for NFS - IP is 10.10.10.10
Both the same IP, but two separate LIFs. Is this acceptable?
Nope, it'll say duplicate IP. You also probably want a second IP for iSCSI anyway.
Why do you need to configure it like this?
@LSOTECHUSER wrote:
Is this acceptable?
No. Even if it may appear to work on the same node (which I doubt), the NFS LIF may migrate to another node at any time, which will result in a duplicate IP. Also, to avoid disruption during a node failure (or planned reboot), for iSCSI you need at least a second LIF on the partner node, which must have a different IP anyway.
So at the minimum you need 3 IPs.
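A rough sketch of that minimum (all names and addresses are placeholders): one NAS LIF that migrates on failover, plus one iSCSI LIF per node so the host always has a surviving path:

    lif_nfs1    NFS    home node1, migrates to node2 on failover   10.10.10.10
    lif_iscsi1  iSCSI  node1, does not migrate                     10.10.10.11
    lif_iscsi2  iSCSI  node2, does not migrate                     10.10.10.12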
Hello. This is one of the things about moving to CDOT that may take a bit of getting used to: the added complexity. There is a bit more of a learning curve. For iSCSI it is recommended, and almost required, to have LIFs on every node (assuming an HA pair). The benefit of the extra complexity is that you get more features, much better logging should you have problems, and far more ability to customize the system to meet your needs.
The only way you can have the same IP is if you have different broadcast domains (really, different networks). This exists so customers who provide storage hosting can run multi-tenant workloads on the same physical storage. That is a completely different use case from what you are likely doing, so I would suggest just configuring multiple LIFs, and mounting so that a volume on node 1 is reached through node 1's LIF by the client/DNS. SAN should auto-discover the best path.
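If you want to see how your cluster's networks are segmented, these standard show commands list the IPspaces and broadcast domains:

    network ipspace show
    network port broadcast-domain show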
@paul_stejskal wrote:
The only way you can have the same IP is if you have different broadcast domains
You probably mean IPspaces.
You're right. Thanks for the correction.
Thanks for the clarification.
Interestingly, it does indeed allow me to create the iSCSI LIF using the same IP address as the NFS LIF. However, clients are not able to connect this way, and iSCSI discovery times out. Changing the iSCSI LIF to a different IP address, as required, fixes the issue and clients can connect.
Also, the reason I wanted to pursue a single IP is merely for the sake of simplicity. It is easier to have a "one-stop shop" IP address that handles all protocols; you only need to know a single IP per node. This was how our setup was in 7-Mode, which is why I tried it in the first place. No big deal though, as we are only dealing with a handful of addresses anyway.
What version of ONTAP are you running? I just tried several different scenarios in my lab to get two LIFs to share the same IP and was always met with "Error: command failed: Duplicate ip address 192.168.100.102".
So back to the why: in ONTAP, some types of LIFs will move during a failover (CIFS, NFS, cluster mgmt) and some won't (iSCSI, FC, node mgmt). Different types of LIFs get different failover policies applied.
During a failover (unplanned or planned), the NAS LIFs (NFS/CIFS/cluster mgmt) will move over to the partner node. The iSCSI LIFs, however, will simply go offline, and it's up to the host to move data access to the surviving path.
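You can see this per-LIF behavior directly from the CLI. For example (assuming an SVM named svm1), this lists each LIF's protocol alongside its failover policy - SAN LIFs show a policy of disabled, while NAS LIFs show a migrating policy such as broadcast-domain-wide:

    network interface show -vserver svm1 -fields data-protocol,failover-policy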
On your 7-Mode setup:
Do you just have a single IP for each controller?
How are your current iSCSI target(s) configured?
Do you currently have multipathing?
@SpindleNinja wrote:
What version of ONTAP are you running? I just tried several different scenarios in my lab to get two LIFs to share the same IP and was always met with "Error: command failed: Duplicate ip address 192.168.100.102".
So back to the why: in ONTAP, some types of LIFs will move during a failover (CIFS, NFS, cluster mgmt) and some won't (iSCSI, FC, node mgmt). Different types of LIFs get different failover policies applied.
During a failover (unplanned or planned), the NAS LIFs (NFS/CIFS/cluster mgmt) will move over to the partner node. The iSCSI LIFs, however, will simply go offline, and it's up to the host to move data access to the surviving path.
On your 7-Mode setup:
Do you just have a single IP for each controller?
How are your current iSCSI target(s) configured?
Do you currently have multipathing?
Running 9.5P3, which is what shipped with the unit.
I was not aware that clustered ONTAP does not allow iSCSI LIF failover as 7-Mode did. The 7-Mode failover implementation for iSCSI was quite convenient in our previous/current setup. We do not use multipathing; instead we configure our hosts to use active/passive bonded interfaces on 10 Gb links, which provide more than enough bandwidth and sufficient redundancy for our needs. If a NetApp controller was lost, for example during a takeover/giveback for ONTAP upgrades, the secondary controller would simply take over the interface and corresponding IP address and continue serving the iSCSI connection to the host. No special host setup was needed.
Now, knowing that clustered ONTAP does not support failover the way 7-Mode did, I'm not sure how I would implement HA for iSCSI on my hosts. Can you give a high-level view of how HA is now configured for Linux or Windows hosts? If the host is supposed to handle the failover, is a NetApp component required on the host? Is multipathing now required just to obtain simple failover capability? Surely it is not...
The networking is (very) different from how 7-Mode operated; most people find it much simpler once they get the hang of it.
Here are some iSCSI express guides for you to review:
https://library.netapp.com/ecm/ecm_download_file/ECMLP2496244
https://library.netapp.com/ecm/ecm_download_file/ECMLP2496229
https://library.netapp.com/ecm/ecm_download_file/ECMLP2496230
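On the Linux side, the key piece is dm-multipath with ALUA. Recent multipath-tools releases ship built-in defaults for ONTAP LUNs, but as a rough sketch (double-check against the host utilities guide for your distribution), an explicit /etc/multipath.conf stanza along these lines is commonly used:

    devices {
        device {
            # matches LUNs presented by ONTAP
            vendor "NETAPP"
            product "LUN.*"
            # group paths by ALUA priority so optimized paths are preferred
            path_grouping_policy "group_by_prio"
            prio "alua"
            path_checker "tur"
            failback "immediate"
            # queue I/O briefly instead of failing it during takeover/giveback
            no_path_retry "queue"
        }
    }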
Do you have a direct connection from host to FAS without switches?
Thanks for the guides, this is helpful. I was also able to find a KB that details setup for Windows servers using strictly Windows MPIO without the ONTAP DSM (https://kb.netapp.com/app/answers/answer_view/a_id/1030968/~/how-to-set-up-iscsi-mpio-on-windows-2008%2C-windows-2008-r2-and-windows-2012). I was able to get a Windows machine working, and the connection persists perfectly through failover tests.
Now I am off to test the same on a SUSE Linux server. Any chance you have come across any SUSE-specific guides? I'm guessing it shouldn't be too hard and will be similar to Red Hat.
@aborzenkov wrote:
Do you have a direct connection from host to FAS without switches?
There is a storage switch in between. Hosts (VMware, a few standalone servers) connect to the storage switch, and the FAS connects to the storage switch.
Hi there!
Lots of discussion here - I see we don't have a single post that can be marked as the answer yet, so I'll summarise this thread for anyone finding it in the future.
- ONTAP, unlike 7-Mode, requires a LIF on each node and an ALUA setup so the client can fail its pathing over between them; 7-Mode simply moved the IP address between nodes.
- You can't use a single LIF for both iSCSI and NFS.
- While you can, with some difficulty, use IPspaces to put the same IP address on two different LIFs across two different ports for both protocols... you REALLY shouldn't.
- ALUA client configuration for SUSE is covered by the Unified Host Utilities, as detailed at https://library.netapp.com/ecm/ecm_download_file/ECMLP2496244 (a quick sketch follows).
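For the SUSE piece, once dm-multipath is configured per that guide, the basic open-iscsi flow looks roughly like this (the portal address is a placeholder for one of your iSCSI LIFs):

    # discover targets via one of the SVM's iSCSI LIFs
    iscsiadm -m discovery -t sendtargets -p 10.10.10.11
    # log in to all discovered portals (one per iSCSI LIF)
    iscsiadm -m node -L all
    # confirm that multipath sees a path per LIF, grouped by ALUA priority
    multipath -ll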
Hope this helps someone in the future!