The main reason you would create a LUN in a qtree rather than directly in the volume is to enable you to use qtree SnapMirror instead of volume SnapMirror. Also, many of the SnapManager products need LUNs to be in qtrees to work. Given there isn't really any overhead in using qtrees, I'd stick to doing it that way.
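If it helps, a rough sketch of what that looks like in 7-mode (the volume, qtree and LUN names here are just made-up examples):
qtree create /vol/vol1/lun_qtree
lun create -s 100g -t windows_2008 /vol/vol1/lun_qtree/lun0
The LUN then sits inside the qtree, so qtree SnapMirror and the SnapManager products can work with it.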
Hi Luke. There are two schools of thought here: one says have a separate aggregate for vol0, the other says create one large aggregate per filer (within the limits of aggregate size / disk type / size). I'm definitely of the latter school, especially when you have relatively few disks. Creating another aggregate per head is going to "waste" 4 disks, and unless you're going to be adding more shelves very soon it could actually make disk access slower.
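To put that in command terms: rather than something like aggr create aggr1 -t raid_dp 4 on each head, you'd just grow the existing aggregate as shelves arrive (the disk count here is only illustrative):
aggr add aggr0 14
That keeps all the spindles working for one aggregate instead of splitting them.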
Aborzenkov is of course 100% right, but maybe I can elaborate a bit. The way I explain it to colleagues with backgrounds in lesser storage is that with an HA NetApp pair you're getting two autonomous storage devices which are capable of taking over each other's disks in a failure situation. You don't have to split it 50/50, but each head will normally require a minimum of 4 disks (1 for data, 2 parity and a spare). If your need is for a single large area you could assign 20 disks to one head, but usually with a bit of thought you can split your data more evenly.
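As a rough illustration of how that split is done (the counts and owner names are just examples):
disk assign -n 20 -o filer1
disk assign -n 4 -o filer2
Each head then builds its own aggregates out of the disks it owns.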
Don't think it does; later models like the 32xx series do. But even if there were one, you wouldn't be able to use it as you want. I'm pretty certain that, currently, there is no documented purpose for the USB port on a NetApp.
A couple of other things to consider: the 2240 has limited expansion, a single slot per controller, so it's a choice of either a dual-port 10GbE or an 8Gb FC mezzanine card. You've 4 x 1Gb Ethernet ports per head; bonding them together using LACP or similar might be a more cost-effective solution.
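If you did go the bonding route, it's only a couple of commands in 7-mode (interface names and addresses here are just the usual defaults, adjust to suit):
ifgrp create lacp ifgrp0 -b ip e0a e0b e0c e0d
ifconfig ifgrp0 192.168.1.10 netmask 255.255.255.0 up
The switch ports obviously need a matching LACP configuration on the other side.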
Hi. Once you create the SnapMirror relationship and it completes the initial replication, the LUN will be there. What won't be there are the igroup settings.
I think you've got yourself a bit confused. If you're using SnapMirror (volume or qtree) you don't need to create the LUN on the target volume; the initial SnapMirror does that for you. SnapMirror also copies the entire LUN (typically the whole volume) block for block on the initial set-up and then any changed blocks on the schedule you set. In a DR situation you need to mount that LUN on the DR server, and you will need to have set up the equivalent igroups etc. on the DR filer.
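At failover time it boils down to something along these lines on the DR filer (igroup, volume, LUN and IQN names are purely illustrative):
snapmirror break vol_dr
igroup create -i -t windows ig_drserver iqn.1991-05.com.microsoft:drserver
lun map /vol/vol_dr/qtree1/lun0 ig_drserver
lun online /vol/vol_dr/qtree1/lun0
Then you connect the DR server's initiator and bring the disk online in Windows.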
My understanding is that it is presented to VMware like an RDM, not as a VMDK. If you don't need to use SnapManager for Exchange then you could use VMDKs. This posting seems to confirm that: https://communities.netapp.com/thread/13932
Hi. Can't speak for v7 and Exchange 2013, but I can relay some of our experiences with 6.x and Exchange 2010. Apologies if I'm teaching my granny to suck eggs here. With regard to storage presentation: if you want to use SnapManager for Exchange you'll need to use SnapDrive. The choice is then how to connect from the VM to the storage. The simplest way is running the MS iSCSI software initiator within the VM, but this does have a performance hit; as with any other server application, it's going to be dependent on what slice of the CPU it can get. If you present the storage to VMware via iSCSI instead, the connection runs at hypervisor level, much like it would with a fibre channel connection; this configuration works and is supported with the snap products.
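If you do go the in-guest initiator route, the filer side is only a few commands; a rough sketch with placeholder names and IQN:
iscsi start
igroup create -i -t windows exch_vm iqn.1991-05.com.microsoft:exchvm01
lun map /vol/exch_vol/exch_qtree/lun0 exch_vm
SnapDrive inside the VM then connects and manages the LUN over that iSCSI session.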
Hi. I think most NetApp users would favour reqqme914's advice over Henry's. My added advice would be: beyond the first 20 or so, only allocate the number of disks you need to an aggregate (ensuring they're at most 80% used/allocated). Nothing's more annoying than having a half-empty aggregate on one head and being tight for space on the other.
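An easy way to keep an eye on that 80% figure (the aggregate name is just an example):
df -A aggr0
aggr show_space -h aggr0
If an aggregate is heading past 80%, that's the one to give the next batch of disks to.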
To answer your first question: replicating NetApp to NetApp is built into the O/S, although it is licensed separately like many NetApp features. Replicating between NetApp and non-NetApp requires extra software, which you either buy from one of the many vendors or write yourself in some scripting language or similar. To your second question: I wouldn't say there'd be problems, especially if you are sticking to products in NetApp's interoperability list. But I think anyone would acknowledge that the more different vendors in a solution, the more room for issues. Think I've helped you as much as I can.
Sounds like you are not really looking for DR, just off-site backup? If you just want backup, then all you need is some backup software and some storage at the other end. Then, in a DR situation, you go through a manual restore process to get back up and working. Of course the hardware on the DR site and any software needs to support all your platforms. With a NetApp snapmirroring to another NetApp that's guaranteed, as it's block-level replication. Using different hardware will make things more complex. If you want to buy IBM, buy the IBM hardware that's rebadged NetApp hardware!
Let me clarify. You'll need the VFM software (licensed by size of data now, I think) and a server or two to run it. You don't need DFS, but if you have it VFM should be able to manage it and make DR switch-over easier. Not sure if it can interact with NFS; you may need to look at DNS methods to achieve DR switch-over there.
Simplest thing would be to have a NetApp at the other end; this would enable the use of things like SnapMirror, but it doesn't need to be the same spec, a 2240 could be a good choice. Using non-NetApp hardware is possible: NetApp had a product called VFM which was a rebadge of Brocade's StorageX. NetApp have EOL'd it, but a company has bought the code from Brocade; they're a bit low profile: http://www.datadynamicsinc.com/products/storagex-platform/ Bandwidth is going to depend on the change rate and growth of your data. SnapMirror is very efficient, sending only changes at a block level; other solutions will likely be file-based. Cross-platform support depends on how the storage is presented and what your DR system supports: CIFS / NFS shouldn't be a problem, LUNs could be more of a challenge.
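For a feel of how SnapMirror is driven: the schedule and any bandwidth throttle go in /etc/snapmirror.conf on the destination, along these lines (filer and volume names, and the 2000 kb/s throttle, are just examples):
srcfiler:vol_data dstfiler:vol_data_dr kbs=2000 0 23 * *
That would replicate the changed blocks at 23:00 every day, capped at roughly 2MB/s, which is what makes it practical to size the WAN around your actual change rate.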
The answer depends on many factors: what are you trying to achieve? What sort of systems do you have? How much downtime can you tolerate in the case of an outage? How far apart are your sites? Unless there are NetApps at both ends your options are limited: basically data backup using third-party software and a lot of work. If you've a fat WAN pipe to a second site within 100km you could run synchronous replication; combine that with VMware SRM or similar and you approach a near-zero-downtime solution.
No, having a second controller isn't going to help. I am assuming you're using 8.1 in 7-mode; cluster mode could be different. If you've spare disks, at least 3, you could create a new aggregate, snapmirror your current vol0 over to it, set it as the boot volume, kill the old aggregate, snap back etc. I've never needed to do it myself, but it's meant to be fairly straightforward.
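Very roughly, the sequence looks like this (names and sizes are only illustrative, and I'd check the official procedure before doing it for real):
aggr create aggr_new 3
vol create vol0_new aggr_new 160g
vol restrict vol0_new
snapmirror initialize -S filer1:vol0 filer1:vol0_new
snapmirror break vol0_new
vol options vol0_new root
Then reboot, confirm the filer is happy on the new root volume, and destroy the old aggregate.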
Is it your only aggregate on that head? Does it contain the vol0 volume? If it is, it'll be like a fresh install, so you would lose SSH access to the filer and would need to use a serial connection or the SP port. You can add disks to an aggregate but you can't take them away.
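A quick way to check before doing anything, assuming 7-mode:
vol container vol0
aggr status -r
The first shows which aggregate vol0 lives in, the second shows the aggregates and their RAID layout.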
Was the host connected directly before, or via a fabric switch? What mode are the ports in, target or initiator (fcadmin config)? Are you using SnapDrive? Is the filer bound to AD correctly (SnapDrive uses AD for authentication)?
I agree, unless there's very different pricing in your region, the support costs for one year on a 2040 will be almost the same as a new 2240 with 3 years support.
I wonder if anyone has any experience/expertise they can share. We've a FAS3240 HA pair running 8.0.2P6 7-mode, likely to be upgraded to 8.1.3 7-mode soon. Currently each controller is connected to a separate Juniper switch stack via an LACP VIF of two 10Gb connections, as in the first diagram. As switch stack reboots (for firmware upgrades etc.) cause an outage, what I want to do is connect them as in the second diagram and then have a VIF of VIFs. I understand this is possible, but the question is how do you ensure that under normal circumstances traffic only uses the 2 x 10Gb connection. With Juniper switches you can't span LACP groups across switch stacks.
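What I have in mind is something along these lines (interface and ifgrp names are just placeholders, and I'd welcome corrections if this isn't the right approach):
ifgrp create lacp ifgrp_10g -b ip e1a e1b
ifgrp create lacp ifgrp_1g -b ip e0a e0b
ifgrp create single ifgrp_top ifgrp_10g ifgrp_1g
ifgrp favor ifgrp_10g
ifconfig ifgrp_top 10.0.0.10 netmask 255.255.255.0 up
The idea being that the single-mode top-level VIF uses the favoured 10Gb link group unless it fails, and fails back to it when it returns.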