1. Is the live configuration the same as /etc/rc? Please show the result of "ifconfig agr0" on both heads. 2. How do you determine that "no IP is being re-homed"? What exactly happens and/or does not work?
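I.e. on each head, please run something like the following and post the output (rdfile simply prints the file; agr0 is the vif name you mentioned; the prompt names are placeholders):

head1> rdfile /etc/rc      # boot-time (persistent) configuration
head1> ifconfig agr0       # current live state of the vif
head1> ifconfig -a         # all interfaces, in case addresses were added only at runtime

That also answers question 1 - just compare what /etc/rc contains with what ifconfig actually reports.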
0b is the disk adapter on the FAS270. If its initialization failed, it usually means an open loop. Depending on which ESH module is installed, your shelf may need termination too. The shelf speed has to be set to 1Gb/s as well. So far this sounds like a hardware problem, which in almost all cases I have seen was caused by an incorrect loop termination setting.
I would check the speed setting on the shelf (it should be 1Gb/s) and the terminator setting on the FAS270 head (it should be on if no external shelf is connected).
"So you are saying neither of my proposed solutions will work"

No, I'm just saying that I would prefer a slightly different setup.

"Also I was reading another doc that walked through the MPIO setup with screen shots on the Windows side, it used the same subnet as well."

Nobody is perfect ☺

"My guess is that the MPIO software handles the communications, even with everything on the same subnet?"

One point to note - source address selection is not the same as outgoing interface selection. So it is perfectly valid to have a session with source S1 going via the interface with address S2. You cannot tell which route your packets take by looking at netstat (or similar) output.

The good news is that on W2k8 it could actually work. I am not able to find the relevant Microsoft documentation, but the article http://technet.microsoft.com/en-us/magazine/2007.09.cableguy.aspx describes the differences between weak and strong host models. In particular: "If the source address has been specified, the source interface is known. The source interface is assigned the source address."

That said, you still have a single point of failure - your LAN switch (stack). You still have a single VLAN that handles all traffic. Any issue on this VLAN that disrupts traffic will affect both paths in your MPIO setup. So as already mentioned, you will probably have load balancing, but you will not actually have complete fault tolerance. That is no worse than it was with NFS; it is just that iSCSI allows you to do better ...
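If I remember correctly, the same article also shows how the host model can be inspected and changed per interface with netsh ("Local Area Connection" is just an example interface name; the W2k8 default is strong host, which - per the quote above - should already do the right thing once MPIO specifies the source address, so changing this is normally not needed):

C:\> netsh interface ipv4 show interface "Local Area Connection"
C:\> netsh interface ipv4 set interface "Local Area Connection" weakhostsend=enabled
C:\> netsh interface ipv4 set interface "Local Area Connection" weakhostreceive=enabled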
Loss of redundancy (i.e. failure protection) and decreased resources (i.e. possible performance impact). What answer did you expect? ☺ I am not sure I really understand the second question. You can run in takeover mode indefinitely if you like; nothing in the filer will force you to do giveback.
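For reference, ending takeover is a single command on the surviving node, whenever you are ready (standard 7-Mode cf commands; the prompt name is a placeholder):

filer1(takeover)> cf status     # shows that this node has taken over its partner
filer1(takeover)> cf giveback   # hands the partner's resources back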
"just use the same subnet and then on my Windows 2008 R2 hosts just IP each of the two NICs on the same subnet?"

In this case all outgoing traffic will be sent via one interface only, according to the routing table. This is simply how TCP/IP communication works. Not only does it not offer any load balancing (I really do not think that is an issue with 10G); the main problem is that it may defeat any error recovery attempted by the MPIO stack. In the storage world the established practice is to use two completely physically independent fabrics for the connection; the storage stack on the host then provides full end-to-end error detection and recovery (and load balancing).

"Head 1, single top level VIF that has 2 - 10gig interfaces (e4a, e4b). The main IP would be 10.50.1.1 and I would add an alias of 10.50.2.1?"

This means you will run two IP networks inside a single physical VLAN, which goes against networking best practices and has the same issue of defeating the main purpose of MPIO - to have several independent connections. But in your case (having a single switch stack) it could be enough for load balancing purposes.
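Just to illustrate what that setup would look like on the filer (7.x syntax - in 8.x "vif" becomes "ifgrp"; the vif name and netmask are placeholders, the addresses are the ones from your example):

filer1> vif create lacp vif10g -b ip e4a e4b
filer1> ifconfig vif10g 10.50.1.1 netmask 255.255.255.0 partner vif10g
filer1> ifconfig vif10g alias 10.50.2.1 netmask 255.255.255.0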
"the second controller will take over if the first one drops its network connection"

It can be configured on NetApp, but it is not the default.

"does it default to load balance requests between controllers"

There is no load balancing between controllers. Each controller is completely separate; it has its own set of disks and exports its own resources (shares, LUNs, etc). A client always connects to a specific controller. This does not change during takeover - in this case the surviving partner starts a virtual instance of the second controller, so for clients the personality does not change. Any load balancing has to be done manually by the administrator - i.e. by distributing resources and clients between controllers.
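From memory, the non-default configuration I mean is negotiated failover on network interface failure - something along these lines in 7-Mode (please check the HA documentation for your release; the interface name is a placeholder):

filer1> options cf.takeover.on_network_interface_failure on
filer1> ifconfig e0a nfo     # mark this interface as critical for negotiated failover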
If your networking equipment configuration allows a single 4-port aggregate, yes, it probably makes more sense this way. Maybe it does not (e.g. you have two independent LAN switches; in this case you cannot create a single LACP vif that spans both switches). To enable takeover, both controllers must be up and running. From the NetApp side clustering works fully automatically; there is little to do besides enabling the license. You have to pay attention to the networking configuration (i.e. both controllers must have access to the same VLANs so interface takeover works); if you are using FCP, you usually need to set up multipath software on the host side and/or tune it so takeover is really transparent.
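"Enabling the license" boils down to something like this on each node (7-Mode; the license code is obviously a placeholder):

filer1> license add XXXXXXX   # cluster (cf) license
filer1> cf enable
filer1> cf status             # should report that controller failover is enabled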
Where and how igroups and mappings are stored still remains a great secret, apparently protected by NetApp above everything else. But one thing is for sure – when you online a volume (which is implicitly done when you online an aggregate) NetApp rescans the storage for LUNs. If your LUNs were located on the same aggregate, it could have happened that the igroup mappings were removed from the LUNs (assuming they are stored somewhere with the LUN). I really wish someone from NetApp would finally explain in simple words where igroups and mappings are stored. In general, what you have done is the wrong way to do an upgrade. You normally just connect the old shelves to the new head, re-assign the disks and come up with exactly the same config as before. Then – if I have reason to think the newly delivered system has its own root volume – I connect the new shelves online and simply destroy the new root volume/aggregate if it appears.
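If anyone hits the same situation, the current state can at least be inspected and the mappings re-created from the CLI (the igroup name and LUN path below are just examples):

filer1> lun show -m                          # LUNs together with their igroup mappings
filer1> igroup show                          # igroups and their initiators
filer1> lun map /vol/vol1/lun0 esx_hosts 0   # re-create a mapping that went missing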
Unfortunately, “no” to both questions. It is better to use an rsh/ssh connection for long-running tasks so as not to block the console; the same can also be used to script your activities (using a host-side scripting language). There is also a development kit with C/Perl/Java/C# bindings, as well as a PowerShell toolkit.
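A trivial example of the ssh approach (the hostname and commands are placeholders - any ONTAP command works this way, and it is easy to wrap in host-side scripts):

$ ssh root@filer1 "vol status"
$ ssh root@filer1 "aggr status -r" > aggr_layout.txt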
There is a checkbox “Do you want to limit the maximum disk size to accommodate at least one snapshot?”; if it is enabled (the default), SD tries to additionally reserve 100% of the LUN size for possible snapshot creation. If you are not going to create any snapshots, you can uncheck it.
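With made-up numbers, the effect is simply:

  requested LUN size:                  500 GB
  extra reserve with checkbox on:     +500 GB  (100% of LUN size, for one full snapshot)
  space SD looks for in the volume:  ~1000 GB  (vs. ~500 GB with the checkbox unchecked)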
I am not aware of any limits imposed by NetApp. The host could be limiting the maximum supported LUN size, though. You did not explain what did not work for you. Could you give more details about what you have done and what the result was?
If the question is about FC technology, why should it not be supported? For NetApp it is just another initiator WWPN to map; NetApp is not even aware of NPIV being used somewhere on the access path. In fact I have several configurations which use leaf switches in access gateway mode, which effectively means NPIV on the core switches.
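In other words, on the NetApp side it still looks like a completely ordinary mapping of an initiator WWPN (the WWPN, igroup name, ostype and LUN path below are just examples):

filer1> igroup create -f -t vmware esx1_fc 50:01:43:80:01:23:45:67
filer1> lun map /vol/vol1/lun0 esx1_fc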
"We used the root volume from the FAS2050 and initialized it once it was connected to the new head"

What exactly do you mean by "initialized it"?
You can create an aggregate consisting of several raid groups (up to the max aggregate size limit), but as already mentioned it is far better to use RAID_DP, in which case you can have a single raid group with 12 disks.
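A minimal sketch of that layout (the aggregate name is a placeholder; -r sets the raid group size so all 12 disks end up in one RAID_DP group):

filer1> aggr create aggr1 -t raid_dp -r 12 12
filer1> aggr status -r aggr1    # verify the raid group layout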
"the default raid group size is 8 and the max is 14 with single parity"

Nope, we are both wrong. For SATA the max raid4 group size is 7. This limit did not change in 8.x. You are right that for RAID_DP the limit was raised to 20, although this probably does not matter for 14 disks anyway.