What ONTAP version is installed? Even/odd disk ownership is not the best practice for AFF systems. Best practice is disks 0-11 to Node A and disks 12-23 to Node B. If you're on ONTAP 9.2, you can reinitialize to best practices using the new Option 9 boot menu. If ONTAP 9.0 or later, use this KB to reinitialize with root-data-data partitioning. Step 9 shows the disk assignment details. https://kb.netapp.com/support/s/article/How-to-convert-or-initialize-a-system-for-Root-Data-Data-Partitioning?t=1507373282342
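If you end up correcting ownership of spares by hand instead of reinitializing, the cluster shell commands look roughly like this - a sketch only, with hypothetical disk and node names (your shelf IDs, bay numbers and node names will differ):
storage disk removeowner -disk 1.0.12
storage disk assign -disk 1.0.12 -owner nodeB
storage disk show -fields owner
Reassigning disks that are already partitioned or in use is more involved than this, so the reinitialization paths above are usually the cleaner route.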
A few points...
1. You should have several management IP addresses: cluster_mgmt, node1_mgmt and node2_mgmt. Q: What are the IPs for node1_mgmt and node2_mgmt?
2. HW Assist requires a communication path between each node's SP and the partner's node_mgmt IP address, i.e.:
node1 SP <-> node2 node_mgmt LIF
node2 SP <-> node1 node_mgmt LIF
NOTE: This means the service processors' 10.x.x.x subnet and configured default gateway must be able to reach the 134.x.x.x subnet where your node_mgmt LIFs are... ping the 10.x.x.x SP addresses to help confirm this.
3. The storage failover modify commands need to specify -node so that each node's partner node_mgmt LIF is configured correctly. These are cluster shell commands (i.e. nas::>):
storage failover modify -node nas-01 -hwassist-partner-ip <node-02's node_mgmt IP>
storage failover modify -node nas-02 -hwassist-partner-ip <node-01's node_mgmt IP>
4. "system node run -node nas-01" places you into the node's "node shell" from the cluster shell. That's not necessary for this configuration exercise.
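Once both modify commands are in place, hwassist can be checked from the same cluster shell - a quick sketch using the nas-01/nas-02 names above:
storage failover hwassist show
storage failover hwassist test -node nas-01
storage failover hwassist test -node nas-02
The show output should list the partner hwassist IP you configured and whether monitoring is active; the test command sends a test alert to the partner.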
I believe group creation and maintenance is done by opening a non-technical case with the group membership details provided to the SCA team (assuming you are authorized to request them).
To recap...
1. IOM3 never has and never will support in-band ACP. This is due to a hardware limitation.
2. IOM3 support (i.e. monitoring/managing storage with IOM3/DS4243) was removed in ONTAP 9.2. It is restored in 9.2P1 and will be in 9.3.
3. With mixed IOM3 and IOM6 storage, your only option for full ACP features is to use out-of-band ACP (and cable up the ACP ports).
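To confirm which ACP mode each node ends up using after cabling, the cluster shell has a status command - a sketch only, and the exact columns vary a bit by ONTAP release:
storage shelf acp show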
+1 on the bootarg. This bootarg is documented in the "Adding a second chassis to an existing HA pair" doc. You would just be configuring things in reverse if going the other way. https://library.netapp.com/ecm/ecm_get_file/ECMM1280401
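For reference, bootargs like that one are set from the LOADER prompt; this is a generic sketch only - the actual bootarg name and value come from the doc above, so the placeholders here are purely illustrative:
LOADER> printenv
LOADER> setenv <bootarg-name> <value>
LOADER> saveenv
LOADER> boot_ontap
Do this on both controllers if the doc calls for it, and double-check with printenv before booting.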
There's a non-public Feature Request (bug) that allows ADP to include external shelves on FAS26xx, starting with ONTAP 9.1. If you're running ONTAP 9.2, the new Option 9 (ADP) boot menu option is the cat's meow.
What is your goal? L2 redundancy, L3 (router) redundancy or both?
To summarize from a L3 redundancy perspective... with these protocols, you have a virtual router IP address AND a virtual ethernet MAC address for this router. When one router fails, the other takes over advertising the virtual IP and uses gratuitous ARP requests to "teach" everyone the new location of the virtual MAC address.
As you currently describe it, the NetApp nodes will not have L2 redundancy. That's OK - but if the single link to the switch fails, or the switch itself completely fails (as opposed to the router interface not working or downstream paths not working), connectivity will fail.
To configure L2 redundancy you'd dual-home the NetApp nodes to each switch, but you'd need a multi-switch link aggregation feature on the switch side - Extreme's MLAG feature or Cisco's vPC. The usual requirement on the edge/host/server/ONTAP side is that the 2 ports are configured as an LACP (802.3ad) port-channel, as sketched below.
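On the ONTAP side, that LACP port-channel is an interface group - a minimal sketch, assuming hypothetical node name node-01 and ports e0c/e0d (substitute your own):
network port ifgrp create -node node-01 -ifgrp a0a -distr-func ip -mode multimode_lacp
network port ifgrp add-port -node node-01 -ifgrp a0a -port e0c
network port ifgrp add-port -node node-01 -ifgrp a0a -port e0d
network port ifgrp show
Repeat for the partner node, then home your data LIFs on a0a (or a VLAN on top of it).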
Hi back, The idea behind all of these "router redundancy" protocols is that you can have multiple physical routers cooperating to provide a reliable "virtual router" gateway IP address. The protocol itself runs between the routers only. All the hosts (like the AFF A200) need to know is the IP address of the virtual router/gateway - it's just a configuration item when setting up the network LIFs.
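In ONTAP terms, that just means pointing the SVM's default route at the virtual gateway IP - a sketch, assuming a hypothetical SVM named svm1 and a virtual gateway of 134.x.x.1:
network route create -vserver svm1 -destination 0.0.0.0/0 -gateway 134.x.x.1
network route show -vserver svm1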
On the NetApp Support Site, if you search for "configure vlan interface port ontap 9", one of the first hits is this page from the ONTAP 9.0 Network Management Guide: Configuring VLANs over physical ports
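The short version from that guide is one cluster shell command per node/port - a sketch, assuming a hypothetical node node-01, physical port e0c and VLAN ID 100:
network port vlan create -node node-01 -vlan-name e0c-100
network port vlan show -node node-01
After that, the new e0c-100 port goes into the appropriate broadcast domain and your LIFs can be homed on it.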
The doc link Alex pointed you to will get you going for active/passive in short order. If you're not dealing with multiple workloads and you don't have really high network I/O demands, active/passive is a reasonable choice. Even when running active/active, you don't want either node handling much more than 50% of the total workload anyway (if you want acceptable performance when a node is down).
8 HDDs is the minimum, but you'll have a better layout with root/data partitions if you do put the SSDs in the external shelf. This will allow the root aggregates to spread out to 3 data + 2 parity + 1 spare (3d + 2p + 1s) partitions on each node.
+1. I can think of a couple reasons why 9.0 can't be chosen, assuming all the systems are cDOT:
FAS3240 or V3240: not supported as of 9.0.
DS14 storage shelves: not supported as of 9.0.
Can the 2 node_mgmt LIFs ping each other? It looks like the wrench port on node 01 is connected to a switch, but it may not be in the correct VLAN on that switch.
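A quick way to test from each node's node shell - a sketch, using the nas-01/nas-02 names from earlier in the thread and placeholder addresses (use your real node_mgmt IPs):
system node run -node nas-01 -command "ping <nas-02 node_mgmt IP>"
system node run -node nas-02 -command "ping <nas-01 node_mgmt IP>"
If either direction fails, the switch-side VLAN assignment on the wrench port is the first thing to check.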
Be careful... You shouldn't have to worry about the SP version. The ONTAP version you upgrade/downgrade to will automatically do the same to the SP, based on the SP firmware version that is bundled with it. Also, 3.1.2P1 is exactly what you want for ONTAP 8.3.x. See: http://mysupport.netapp.com/NOW/download/tools/serviceimage/support/ServiceProcessorSupportMatrix.shtml I have to ask... why do you specifically need to run 8.3P2? Per the recommended ONTAP releases at this time, 8.3.2P11 is the one to run from the 8.3.x release family. https://kb.netapp.com/support/s/article/ka61A0000008aBr/Recommended-Data-ONTAP-Releases-on-the-NetApp-Support-Site
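To see what the SP is currently running (and whether SP autoupdate is enabled), these cluster shell commands are a reasonable sanity check - a sketch, with no specific node names assumed:
system service-processor show
system service-processor image show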
It depends on whether you want to have both nodes serving data (active-active) or you're happy relegating the 2nd node to a passive "backup" role. If active-active, you'd have one or more aggregates configured on each node. This carries a bit more "parity disk partition" tax than using all data partitions for one big aggregate, but you get more data-serving performance with both nodes active. I'd look at these docs...
Root-data partitioning concept: http://docs.netapp.com/ontap-9/topic/com.netapp.doc.dot-cm-concepts/GUID-B745CFA8-2C4C-47F1-A984-B95D3EBCAAB4.html
Manually assigning ownership of partitioned disks: http://docs.netapp.com/ontap-9/topic/com.netapp.doc.dot-cm-psmg/GUID-07302AD3-F820-48F7-BD27-68DB0C2C49B5.html
Setting up an active-passive configuration on nodes using root-data partitioning: http://docs.netapp.com/ontap-9/topic/com.netapp.doc.dot-cm-psmg/GUID-4AC35094-4077-4F1E-8D6E-82BF111354B0.html
I can't speak to your LUN question... but I wouldn't think it matters. See section 5.3 here: TR-4080: Best Practices for Scalable SAN https://www.netapp.com/us/media/tr-4080.pdf
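For the active-active case, here's a sketch of checking spare data partitions and then building one data aggregate per node - the aggregate names, node names and disk counts are placeholders, so size them to your actual spares:
storage aggregate show-spare-disks -original-owner node-01
storage aggregate show-spare-disks -original-owner node-02
storage aggregate create -aggregate node01_data -node node-01 -diskcount 10
storage aggregate create -aggregate node02_data -node node-02 -diskcount 10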
If you're willing to consider 9.2 (GA), there's a nifty new boot menu option for re-partitioning systems with "one click". From the release notes: https://library.netapp.com/ecm/ecm_download_file/ECMLP2492508 Beginning with ONTAP 9.2, a new root-data partitioning option is available from the Boot Menu that provides additional management features for disks that are configured for root-data partitioning.
If you want to stick with 9.1P5, this KB article takes you through the reinitialization process. https://kb.netapp.com/support/s/article/ka31A00000013aJ/How-to-convert-or-initialize-a-system-for-Root-Data-Data-Partitioning Note: The KB works well for FAS (non-AFF), too. You'll just end up with root-data, not root-data-data, partitions.
When it's done, each node will "own" half of the 20 disks from a container perspective, and the smaller P2 root partitions will be used to build the root aggregates (leaving 1 spare). The large P1 data partitions will be spare, for creating data aggregates from either node as you see fit.
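After the reinitialization, the resulting layout can be sanity-checked from the cluster shell - a sketch (output formats vary a bit between releases):
storage disk show -partition-ownership
storage aggregate show-spare-disks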
+1 for robin's comments on the P2 partitions not being balanced across node 1 and node 2.
Q1: What ONTAP version are you running?
Q2: Do you plan to run in an active-passive configuration, or do you want both nodes to normally serve data from their own aggregates (active-active)?
Q3: Are you in a position to destroy the data on the system and reinitialize? It might be the fastest path to configuring the system according to best practices.
SMTP AutoSupport does not support SSL/STARTTLS. Consider configuring AutoSupport to use HTTPS for the messages sent to NetApp. For the internal AutoSupport copies sent via SMTP to your own e-mail destinations, there is no other solution.
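Switching the NetApp-bound transport to HTTPS is a one-liner in the cluster shell - a sketch that applies it to all nodes (scope it as needed):
system node autosupport modify -node * -transport https
system node autosupport show -fields transport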