Can someone tell me what the AFF8060 and AFF8080EX CPU specs are? When I look in Hardware Universe it says: Processor: 8020 = 2 x 64-bit 6-core 2.00 GHz | 8040 = 2 x 64-bit 8-core 2.10 GHz | 8060 = 100 to 120, 200 to 240[3] | 8080EX = 100 to 120, 200 to 240[3]. I am completely baffled by the reference to "100 to 120, 200 to 240[3]" (see the attachment). The attachment came right out of Hardware Universe and was just downloaded as a PDF. I read somewhere in a tech rag that Intel was developing a Haswell E5-2600 v3 for the likes of NetApp and EMC. Is that what's in there? If so, how many sockets and cores? Thanks, everyone.
It is well documented in 7-Mode that if you want to use NSE drives, the entire "system" (which I interpret to mean an HA pair) needs to have all NSE drives. My questions are: Now that we have cDOT, does that rule still apply? Meaning, can two nodes (one HA pair) have all NSE drives while the other nodes in the cluster have non-NSE drives, or do all disks in the cluster have to be NSE drives? If it's the former, can you "vol move" between nodes in a cluster that don't have the same disk type, for example where one HA pair has NSE drives and the others don't? If it's the latter, can you SnapMirror between two systems that don't have the same disk type, for example where one cluster has NSE drives and the other has non-NSE drives? Thanks, everyone.
Yep, I get the importance of keeping that cluster interconnect cable live for NFS. I guess to get that same resiliency out of NFS you would really need pNFS, which I don't believe VMware supports today (even in 5.5).
Got it. So, in summary: there is no reason this design couldn't work for NFS or iSCSI with VMware, as long as you make the necessary provisions, i.e., set up your failover groups correctly for NFS and create LIFs on both nodes to access the LUN in an iSCSI setup. To bring this all home and circle back to what you were originally trying to communicate: the real danger with this design is that if you lose the single cluster interconnect connection between the nodes, you could see unpredictable instability. Thanks for your time, Scott, and for helping me think through the entire design and its implications.
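For anyone who finds this thread later, here is a rough sketch of what I mean by those provisions, in clustered ONTAP 8.2-era CLI syntax (the vserver, node, port, and address names are made up for illustration, and failover-group syntax changed in later releases, so check the docs for your version):

# NFS: a failover group that spans a data port on each node, assigned to the NFS LIF
network interface failover-groups create -failover-group fg_nfs -node cluster1-01 -port e0c
network interface failover-groups create -failover-group fg_nfs -node cluster1-02 -port e0c
network interface modify -vserver vs1 -lif nfs_lif1 -failover-group fg_nfs -failover-policy nextavail

# iSCSI: one LIF per node so the host always has a path to a surviving node (iSCSI LIFs do not migrate)
network interface create -vserver vs1 -lif iscsi_lif_n1 -role data -data-protocol iscsi -home-node cluster1-01 -home-port e0c -address 10.0.0.11 -netmask 255.255.255.0
network interface create -vserver vs1 -lif iscsi_lif_n2 -role data -data-protocol iscsi -home-node cluster1-02 -home-port e0c -address 10.0.0.12 -netmask 255.255.255.0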
I think I get you... so (setting aside single cable = single point of failure, etc.), what you are saying is that with NFS, if the data-facing port goes down, the LIF will migrate to the other node and serve data across the cluster interconnect (no cf failover needed). However, with iSCSI, if you only had a single path (a single target IP from the host) and the data-facing port on the node went down, that LIF wouldn't migrate and you are dead in the water. Correct? I guess my disconnect is: couldn't you just set up another LIF on the other node to access the LUN from, and configure multiple IP targets for the LUN in VMware? Then let the VMware PSP (Path Selection Policy) move the traffic to the surviving node once it senses its primary path is dead, and the data will be served across the cluster interconnect just like NFS? Thanks for your time, Scott.
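On the ESXi side, I'm picturing something like this to give the PSP both paths to fail over between (the adapter name and LIF addresses are placeholders from my lab, so adjust to taste):

# Add both LIF addresses as send targets on the software iSCSI adapter (find yours with: esxcli iscsi adapter list)
esxcli iscsi adapter discovery sendtarget add -A vmhba33 -a 10.0.0.11:3260
esxcli iscsi adapter discovery sendtarget add -A vmhba33 -a 10.0.0.12:3260
# Rescan and confirm both paths are visible to the PSP
esxcli storage core adapter rescan -A vmhba33
esxcli storage core path list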
Thanks for your reply, Scott. I was thinking that if you can set cf takeover on network interface failure (specifically on the cluster interface) at either the cluster shell or the node shell, then you could mitigate or reduce your risk, e.g., something like this: "cf.takeover.on_network_interface_failure" = on, "cf.takeover.on_network_interface_failure.policy" = any_nic, and set the /etc/rc file to include the "nfo" flag. Or am I confusing 7-Mode and cDOT capabilities? Lastly, I was originally thinking of this design for VMware NFS, but is there any reason it wouldn't work for iSCSI too, since that's IP based? I understand that with FC, LIFs don't migrate; hosts just use ALUA to get to an alternate path. But again, iSCSI is IP based, so is there any reason that wouldn't work on takeover? Thanks!
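For the record, the 7-Mode version of what I had in mind looks roughly like this (a sketch only; the options and the nfo flag are 7-Mode constructs, the interface name and addresses are made up, and as discussed they don't map one-to-one onto cDOT):

# Nodeshell options: take over when a negotiated-failover interface fails
options cf.takeover.on_network_interface_failure on
options cf.takeover.on_network_interface_failure.policy any_nic

# /etc/rc: tag the interface for negotiated failover with the nfo flag
ifconfig e1a 192.168.1.10 netmask 255.255.255.0 partner e1a nfo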
Well, to answer part of my own question for everyone out there: this is supported and it will work. See this document: https://library.netapp.com/ecm/ecm_download_file/ECMP1157168 "If you have an existing two-node cluster that uses cluster network switches, you can apply the switchless-cluster networking option and replace the switches with direct, back-to-back connections between the nodes. This is a non-disruptive operation. The procedure you use depends on whether you have two dedicated cluster-network ports on each controller (as required on most systems) or a single cluster port on each controller (a supported option on FAS22xx storage systems)" "Each storage system must be using a single dedicated 10-GbE cluster port providing the cluster-network connection. This is a supported configuration option for FAS22xx systems only." Now I'm just wondering what the implications are if the cluster port on either controller fails...
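For anyone looking for the knob itself, the switchless-cluster option is toggled at advanced privilege, something along these lines (exact syntax and availability vary by Data ONTAP release, so follow the procedure in the document above rather than treating this as gospel):

set -privilege advanced
network options switchless-cluster show
network options switchless-cluster modify -enabled true
set -privilege admin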
Hi Everyone, I have a quick design/technical question regarding running cDOT on a FAS2240 with a single 10Gb cluster interconnect/switchless-cluster cable. I understand this is not best practice, so please don't read me the riot act. However, will it technically work, and what are the downsides, knowing that if a port goes bad on either side we will have two nodes that can't communicate with each other in the cluster?

My thinking is that if you only have four 10GbE ports (two on each controller), you could use one from each controller for the cluster interconnect/switchless cluster and one on each controller to go up to the data network. That way you can get I/O into the controllers at 10Gb speed from the network on both sides, and you can "vol move" over the cluster interconnect, as well as carry the heartbeat over the interconnect for disk/controller takeover. I'm just trying to figure out how to make cDOT on a FAS2240 a practical option, because to me burning both 10Gb ports on each controller just for the cluster interconnect/switchless cluster is just that... a waste. If someone can validate that the design will work, with whatever caveats apply, that's what I'm looking for. Thanks.

It seems that when you go through the "cluster setup" command it says you need "at least one" cluster port; in my mind that means it should be possible. See the wizard output below (I originally bolded the relevant line, but this is what the message says in sequence):

login: admin
(cluster setup)
Welcome to the cluster setup wizard.
You can enter the following commands at any time:
"help" or "?" - if you want to have a question clarified,
"back" - if you want to change previously answered questions, and
"exit" or "quit" - if you want to quit the cluster setup wizard. Any changes you made before quitting will be saved.
You can return to cluster setup at any time by typing "cluster setup". To accept a default or omit a question, do not enter a value.
Do you want to create a new cluster or join an existing cluster? {create, join}: create
Step 1 of 5: Create a Cluster
You can type "back", "exit", or "help" at any question.
List the private cluster network ports:
At least one cluster port must be specified.
The cluster network ports are the physical ports on the controller that connect it to the private cluster network. The controller routes cluster network traffic over these ports by using associated cluster logical interfaces (LIFs). Examples of cluster network ports in Data ONTAP are "e0a" and "e0b".
You can type "back", "exit", or "help" at any question.
List the private cluster network ports:
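In case it helps anyone picture it, here's roughly how I imagine answering the wizard and then using the remaining 10GbE port on each controller for data (the port names e1a/e1b and the vserver/LIF/address details are my assumptions for a FAS2240, not something I've validated):

# Give the wizard only the single 10GbE port per node for the cluster network
List the private cluster network ports: e1a

# Later, once the cluster and data vserver exist, put the other 10GbE port on the data network
network interface create -vserver vs1 -lif nfs_lif1 -role data -data-protocol nfs -home-node cluster1-01 -home-port e1b -address 192.168.10.21 -netmask 255.255.255.0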
Hi Scott, I tried the config and it seems to work with no problem, with the exception that the FC igroup would not allow me to use ALUA. That's understandable, since the iSCSI igroup doesn't use ALUA. Do you see any issues with this configuration? Lastly, you mentioned you wouldn't want to do this for multiple servers attaching to that LUN unless you were running a clustered file system. I assume VMware/ESXi would be fine, since vCenter controls the entire cluster's access to the datastores, correct or no? Thanks!
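For anyone following along, the shape of what I tested is roughly this, in 7-Mode syntax (the initiator names, LUN path, and LUN IDs are made up; I'm only illustrating the two igroups mapped to the same LUN):

# FC igroup (ALUA left off in my case) and iSCSI igroup, both mapped to the same LUN
igroup create -f -t vmware esx_fc 50:0a:09:81:86:57:ac:7d
igroup create -i -t vmware esx_iscsi iqn.1998-01.com.vmware:esx01-12345678
igroup set esx_fc alua no
lun map /vol/vmware_vol/esx_lun esx_fc 0
lun map /vol/vmware_vol/esx_lun esx_iscsi 1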
Hi Everyone, I have a question regarding connecting a V3220 to a Clariion CX4-120. The IMT says that direct connect is no longer supported. That much is understood. However, when I was originally researching this configuration, the only document I could even find from NetApp about V-Series connecting to EMC is the "V-Series Systems Implementation Guide for EMC Clariion Storage", which was written back in 2009. My question is: is there any reason direct connect technically wouldn't work with the latest Data ONTAP software (8.1.x/8.2.x)? In that document, on pp. 18-19, it clearly lays out how to direct-connect a V-Series to a Clariion, so I know at one time this was supported.

To bolster support for my question:
- The FLARE version I'm using (04.30.000.5.524) is currently supported in the IMT.
- On p. 4 of the "V-Series Systems Implementation Guide for EMC Clariion Storage" document, it says the CX4-120 is supported. So even though the document is older, direct attach was supported with this EMC model at the time it was written.
- The solution is a straight 4Gb FC (NetApp initiator) to 4Gb FC (EMC target) connection, meaning there will not be a speed-mismatch issue.
- How would the V-Series even know it wasn't attached to an FC switch, and why would it care, as long as it can see its targets?
- Lastly, the CX4 will never be used for anything other than back-end disk for the V-Series, because we will have migrated all the data off to another FAS. Then we will rebuild the RAID groups and LUNs (per NetApp best practices) just for the V-Series to be used as a DR/SnapMirror target.

When I was architecting the solution I took my cues from this document (as it was the only thing I could find), and now I'm somewhat in a bind. My dilemma is that we don't currently have an FC switch. The CX4 is running on iSCSI today, but we bought EMC FC cards to insert into the CX4 after the data migrations happen, so we could then rebuild the RAID groups/LUNs and present them over FC to the V-Series. I say "I'm somewhat in a bind" because I have a last-resort option, but I don't want to go there unless I have exhausted all other options. Any help would be greatly appreciated. Thanks.
No, I didn't. I wasn't sure if it was a VMware issue (SRM) or a NetApp one (SRA). Do you know if IPv6 is even supported by the automation flow of SRM 5.1? None of the documentation I read referred to IPv6, and neither did any quick internet searches. The only place the installation and configuration guide for 5.1 refers to IPv6 is in chapter 6, on page 58, where it covers the public certificate for vSphere Replication. However, I know it does support IPv6 when customizing the IPs of the virtual machines themselves; that is explained clearly in the administration guide. Regards.
Hi Everyone, I just thought I would share the results of my hard work in hopes it will make others' lives easier. I've been working on creating a VMware SRM 5.1 with NetApp demo environment for over four weeks now. Most of my time has been spent learning SRM and some of the basics of VMware clustering, NFS mounting, etc. I finally got to the point where I had:

- 3x VMware ESXi 5.1 hosts installed
- 2x vCenter Server 5.1 instances created, with the NetApp VSC plug-in installed
- ESXi hosts clustered, with vMotion working through vCenter
- NFS volumes created and NFS export permissions configured on the NetApps
- NFS exports mounted as ESXi datastores
- Win 2008 DC, Exchange 2007, and OCS 2007 VMs installed, plus 2 Win7 client VMs, to prove it was a fully functional demo environment
- NetApp NFS volumes SnapMirrored
- SRM installed (ODBC connections to SQL, etc.)
- SRM configured

Moment of truth... I ran the recovery test plan and it failed with this error:

Failed to create snapshots of replica devices. Failed to create snapshot of replica device /vol/NetApp_Datastore1. SRA command 'testFailoverStart' failed for device '/vol/NetApp_Datastore1'. Unable to export the NAS device. Ensure that the correct export rules are specified in the ontap_config.txt file.

(See the SRM Failures.doc in the attachment.) I was at this point on Monday. I scoured the internet for this error and found very limited information. I also obsessed over the ontap_config.txt file on the vCenter/SRM server, to no avail. What I did find is that when configuring NFS, every volume/export must be explicitly configured with read/write and root privileges; RO must be removed, along with anonymous access. The export should look like the "Permissions.jpg" in the attachment. From the command line the permissions should look like this:

ONTAP-SRM-2> exportfs
/vol/vol0/home -sec=sys,rw,nosuid
/vol/vol0 -sec=sys,rw,anon=0,nosuid
/vol/SRM_Placeholder -sec=sys,rw=10.18.202.31,root=10.18.202.31
/vol/NetApp_Datastore1 -sec=sys,rw=10.18.201.31,root=10.18.201.31
/vol/NetApp_Datastore2 -sec=sys,rw=10.18.202.31,root=10.18.202.31

That being said, how SRM works when you execute the recovery test plan is that it creates a FlexClone of the SnapMirrored volume. When I executed the test plan and it failed, I started looking at the NetApp at the recovery site. I noticed it created the FlexClone, but it obviously didn't mount it as a datastore and start up the VMs. Funny thing is, when I manually mounted the FlexClone I could start up the VMs. Thinking that NFS is all about permissions...
look what I found in the permissions of the FlexClone when I ran the "exportfs" command on the recovery-side NetApp:

ONTAP-SRM-2> exportfs
/vol/testfailoverClone_nss_v10745371_NetApp_Datastore1 -sec=sys,rw=10.18.18.31:10.18.201.31:10.18.202.31:10.18.203.31:fe80::20c:29ff:fea5:ac48:fe80::250:56ff:fe63:7152:fe80::250:56ff:fe66:28d7:fe80::250:56ff:fe66:fd86,root=10.18.18.31:10.18.201.31:10.18.202.31:10.18.203.31:fe80::20c:29ff:fea5:ac48:fe80::250:56ff:fe63:7152:fe80::250:56ff:fe66:28d7:fe80::250:56ff:fe66:fd86
/vol/testfailoverClone_nss_v10745371_NetApp_Datastore2 -sec=sys,rw=10.18.18.31:10.18.201.31:10.18.202.31:10.18.203.31:fe80::20c:29ff:fea5:ac48:fe80::250:56ff:fe63:7152:fe80::250:56ff:fe66:28d7:fe80::250:56ff:fe66:fd86,root=10.18.18.31:10.18.201.31:10.18.202.31:10.18.203.31:fe80::20c:29ff:fea5:ac48:fe80::250:56ff:fe63:7152:fe80::250:56ff:fe66:28d7:fe80::250:56ff:fe66:fd86
/vol/vol0/home -sec=sys,rw,nosuid
/vol/vol0 -sec=sys,rw,anon=0,nosuid
/vol/SRM_Placeholder -sec=sys,rw=10.18.202.31,root=10.18.202.31
/vol/NetApp_Datastore1 -sec=sys,rw=10.18.201.31,root=10.18.201.31
/vol/NetApp_Datastore2 -sec=sys,rw=10.18.202.31,root=10.18.202.31

Look at all of that junk in there. For anyone who doesn't recognize it, those are IPv6 addresses, and I'm pretty sure SRM 5.1 doesn't work with IPv6.

SOLUTION (on the ESXi hosts):

To view whether IPv6 is currently enabled, run the following ESXCLI command:

esxcli system module parameters list -m tcpip3

You will see the ipv6 property set to 1, which means it is enabled. To disable IPv6, you just need to set the property to 0 with the following ESXCLI command:

esxcli system module parameters set -m tcpip3 -p ipv6=0

You can now reconfirm by re-running the list operation to ensure the change was made successfully. All that is left is to perform a system reboot; you can either type "reboot" or use the new ESXCLI 5.1 command:

esxcli system shutdown reboot -d 60 -r "making IPv6 config changes"

After the ESXi host reboots, check the FlexClone permissions:

ONTAP-SRM-2> exportfs
/vol/testfailoverClone_nss_v10745371_NetApp_Datastore1 -sec=sys,rw=10.18.18.31:10.18.201.31:10.18.202.31:10.18.203.31,root=10.18.18.31:10.18.201.31:10.18.202.31:10.18.203.31
/vol/testfailoverClone_nss_v10745371_NetApp_Datastore2 -sec=sys,rw=10.18.18.31:10.18.201.31:10.18.202.31:10.18.203.31,root=10.18.18.31:10.18.201.31:10.18.202.31:10.18.203.31
/vol/vol0/home -sec=sys,rw,nosuid
/vol/vol0 -sec=sys,rw,anon=0,nosuid
/vol/SRM_Placeholder -sec=sys,rw=10.18.202.31,root=10.18.202.31
/vol/NetApp_Datastore1 -sec=sys,rw=10.18.201.31,root=10.18.201.31
/vol/NetApp_Datastore2 -sec=sys,rw=10.18.202.31,root=10.18.202.31
ONTAP-SRM-2>

Note the IPv6 permissions are now gone = Success! The SRM test recovery plan works! Enjoy.
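One side note: if you hand-set the export rules from the CLI like I did, something along these lines should also write them to /etc/exports so they survive a reboot (the paths and IPs are just the ones from my lab):

# exportfs -p applies the rule and makes it persistent in /etc/exports
exportfs -p rw=10.18.201.31,root=10.18.201.31 /vol/NetApp_Datastore1
exportfs -p rw=10.18.202.31,root=10.18.202.31 /vol/NetApp_Datastore2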
Everyone, I have the fix. It's a VMware issue. By default, ESXi 5.1 does not load the vmkernel multiextent module. See the KB below: http://kb.vmware.com/selfservice/search.do?cmd=displayKC&docType=kc&externalId=2036572&sliceId=1&docTypeID=DT_KB_1_1&dialogID=448358018&stateId=1%200%20448356307 The fix: all you have to do is SSH into your host and run this command from the CLI: # vmkload_mod multiextent The other stuff in this article didn't apply to me. All I had to do was run this command and my ONTAP sims booted like usual, no issues. Unfortunately, after an ESXi host reboot the multiextent module unloads and you have to do this again to get your sims to boot. If anyone can find a way to make this persistent with a switch, or write a script that makes it load on host reboot, I'm sure everyone would be very thankful.
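One approach that should do it (a sketch I haven't beaten on extensively; /etc/rc.local.d/local.sh is the standard ESXi 5.x local boot script) is to load the module from local.sh at startup:

# On the ESXi 5.1 host (via SSH), edit the local boot script and add this line ABOVE the final "exit 0":
#     /sbin/vmkload_mod multiextent
vi /etc/rc.local.d/local.sh
# Then save the running configuration to the boot bank so the edit survives a reboot:
/sbin/auto-backup.sh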
...yeah, I usually go back and forth between the command line, editing files under /etc, and using System Manager, so it was just something that I noticed when I looked in the GUI. On a side note, I was at a customer who was using the NetApp plug-in for vCenter, and they had a SnapMirror schedule that they configured from vCenter; System Manager said the schedule wasn't compatible with it, so it couldn't show it through the GUI. I just thought that was interesting. In any case, I did end up opening a ticket, and apparently you can configure multiple schedules (see the link below): https://kb.netapp.com/support/index?page=content&id=1011405&actp=search&viewlocale=en_US&searchid=1344446328627 Not much info here, so I guess it's just trial and error.
Is there a way to create multiple SnapMirror schedules for the same mirrored volume? I need a schedule for Mon-Fri, 8AM-5PM, every 5 minutes at 1024KB of bandwidth, and for Sat-Sun (for the same volume) every 5 minutes, all day long, at 2048KB of bandwidth. Is this possible, or can you only have one schedule per volume? I noticed that if you use System Manager it only shows one schedule per volume, even if I put multiple schedules in the /etc/snapmirror.conf file. Something like this:

FILER-1:NAS-VOL FILER-2:NAS-VOL_SnapMirror kbs=1024 00,05,10,15,20,25,30,35,40,45,50,55 8,9,10,11,12,13,14,15,16,17 * 1,2,3,4,5
FILER-1:NAS-VOL FILER-2:NAS-VOL_SnapMirror kbs=2048 00,05,10,15,20,25,30,35,40,45,50,55 0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23 * 0,6

Thanks, everyone.
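For anyone puzzling over the format: the trailing four fields in each /etc/snapmirror.conf entry are cron-style, i.e., minute, hour, day-of-month, day-of-week (0 = Sunday). I can't vouch for whether two entries for the same destination are honored (the KB I linked in my follow-up suggests they can be), but here is the weekday/weekend pair I'm after, with comment lines added (snapmirror.conf treats lines starting with # as comments):

# Fields after the throttle: minute, hour, day-of-month, day-of-week (0 = Sunday)
# Mon-Fri (1-5), 8AM-5PM, every 5 minutes, throttled to 1024KB/s:
FILER-1:NAS-VOL FILER-2:NAS-VOL_SnapMirror kbs=1024 00,05,10,15,20,25,30,35,40,45,50,55 8,9,10,11,12,13,14,15,16,17 * 1,2,3,4,5
# Sat-Sun (0,6), all day, every 5 minutes, throttled to 2048KB/s:
FILER-1:NAS-VOL FILER-2:NAS-VOL_SnapMirror kbs=2048 00,05,10,15,20,25,30,35,40,45,50,55 0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23 * 0,6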
Yeah, I see what you are saying now. Page 31 explains it. I think the thing that tripped me up was that they only show a two-stack example. Thanks very much for your time.
Hmm... I think I see what you are saying. So if I refer to p. 29 of the Universal SAS and ACP Cabling Guide (see the image), you are saying to connect the top controller's e0P port to the top shelf's square port, and connect the bottom shelf's circle port to the bottom controller's e0P port? Thanks for clarifying.
Hi Everyone, I was looking for some clear documentation on how to cable the SAS and ACP connections on a DS4243 to a FAS2040 when I came upon the Universal SAS and ACP Cabling Guide. For the most part this guide was helpful, except when it came to the ACP cabling instructions. The FAS2040 only has one SAS connection per controller, so the guide was good enough to call that model out and make special reference to it. But when it came to the ACP cabling, I found it difficult to follow.

I understand that there are different cabling procedures depending on whether you have one controller or two, attached to one disk shelf or multiple shelves; and the 2040 itself is unique in that it only has one SAS port per controller, so you can only daisy-chain the controllers to the shelves, but you can't create a complete loop because of this SAS port limitation. However, the Universal SAS and ACP Cabling Guide is unclear on how to connect the ACP. It shows that you daisy-chain the shelves (and make an inter-stack connection if you have multiple stacks), but it doesn't show the controller connections to the shelves. I guess it just assumes you know what to do there? It's definitely not explicit.

Can anyone point me in the right direction? What I have is an HA (two-controller) FAS2040 with two DS4243s in one stack. When I was done, the only thing I could figure out was to daisy-chain the ACP ports in the exact same pattern as the SAS ports, which in the end showed two active ACPs but only partial connectivity. I'm not convinced this is correct. Thanks.