2017-02-09 05:26 AM
Currently I am working on a project for a customer to look at setting up DR for them.
We are running ONTAP 7-Mode 8.2.4P5 on a FAS8080HA. The majority of the NFS workload is on Controller 01, and CIFS on Controller 02, although there are little bits of NFS/CIFS spread across both Controllers. There is a vfiler running at the moment on Controller 02 which serves CIFS shares on a different domain; this vfiler also has a different (non-default) IP space set up.
At the moment, everything else sits on the root vfiler0 on both Controllers. To avoid cooking up some horrible scripted DR nightmare on the root vfilers, we are looking to leverage the functionality and (relative) simplicity of vfiler-DR.
Of course, to be able to do this we now face the unenviable task of re-organising the existing (flat) storage into vfilers, so that we can then replicate the data and config to the DR site and use vfiler-DR properly, as it is intended.
As this data is all NAS based, and due to the nature of the environment setup and how tightly it is nestled together and intertwined, I am currently thinking of creating only 2 vfilers max per Controller – a CIFS vfiler and an NFS vfiler – to keep things simple.
With the above in mind, I have a few queries I hoped some 7-Mode Netapp savvy folk could advise on so we can get on with planning the future layout:
1) If we create new CIFS and NFS vfilers, instead of creating a dedicated IP space for each vfiler, can we just use the default-ipspace on each hosting NetApp Controller? I have read a few bits and pieces online that seem to imply it's quite possible, but isn't necessarily the done thing.
2) If we do use the default-ipspace, would the two vfilers be able to "talk" to one another? I ask because we have at least a few servers here and there which access both CIFS shares and Unix NFS shares, and I am wondering whether they will be able to use both protocols and talk to each vfiler; the vfilers might need to talk to each other as well. There might be servers wanting to access data on an interface or volume owned by the other vfiler, and I am not sure whether this will work.
3) In the main NFS environment, we use Oracle LDOM technology, which has a couple of "swap volumes" containing the Solaris swap data/pagefiles (almost identical in principle, I think, to ESX swap areas). As these are mounted on the Solaris LDOM primaries via NFS (and therefore technically fall into the NFS vfiler category), I am wondering whether we will need to replicate these volumes via vfiler-DR as well, or whether they can be excluded (and perhaps brought up manually in DR).
My current understanding is that if I were to create a dedicated IP space for each vfiler, they could definitely not talk to each other (as that is the idea), but I just wanted to check this approach and seek advice on the cross-talk/communication between the vfilers.
Likewise for the Solaris "swap volumes": my current understanding of vfiler-DR is that any and all volumes served by the source vfiler have to be replicated to the other site for the vfiler to be brought up in a DR situation. However, this is going to waste a lot of replication bandwidth on swap-file data that we don't actually need to replicate, if we can avoid it.
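For anyone following along, a minimal sketch of what I have in mind for the vfiler layout (vfiler names, IP addresses and volume paths below are placeholders I made up, not our real config). If `-s <ipspace>` is omitted from `vfiler create`, the vfiler lands in the hosting Controller's default-ipspace, which is what I am asking about in question 1:

```
# NFS vfiler on Controller 01; first path is the vfiler's root volume.
# No "-s <ipspace>" means it uses the default-ipspace.
vfiler create vf_nfs -i 10.0.0.10 /vol/vf_nfs_root /vol/nfs_data01

# CIFS vfiler on Controller 02, same idea
vfiler create vf_cifs -i 10.0.0.11 /vol/vf_cifs_root /vol/cifs_data01

# For comparison, a dedicated IP space would first need something like:
#   ipspace create ips_vf_nfs
#   ipspace assign ips_vf_nfs <interface>
# followed by: vfiler create vf_nfs -s ips_vf_nfs -i ...
```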
2017-02-09 05:42 AM
If you are not running a true multi-tenancy environment, stay far away from IP spaces, especially in 7-Mode.
If the CIFS and NFS workloads are on the same domain, and the same spindles, then what benefit are you getting out of splitting them?
The only reason to split them out is if you want to invoke a vfiler-dr on separate workloads. For example, app1 and app2 with two different RPOs/RTOs.
With 7-Mode, all volumes owned by the vfiler must go with vfiler-dr.
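To illustrate, the basic workflow is roughly the following (a sketch with made-up vfiler/filer names; you run this on the DR-side controller):

```
# Run on the DR (destination) controller. This sets up SnapMirror for
# every volume the source vfiler owns and copies its config across -
# which is why all of the vfiler's volumes come along for the ride.
vfiler dr configure vf_nfs@prod-filer01

# In a DR event, bring the mirrored vfiler online at the DR site:
vfiler dr activate vf_nfs@prod-filer01

# Once the primary site is back, resync before failing back:
# vfiler dr resync vf_nfs@prod-filer01
```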
Lastly, when you configure vfiler-dr, remember to change your snapmirror.conf file, because the relationships will all default to 0-59/3 * * * - that's every 3 minutes. That's just nuts.
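Concretely, the entries live in /etc/snapmirror.conf on the destination controller, with the schedule fields in cron-like order (minute, hour, day-of-month, day-of-week). Illustrative entries with placeholder filer/volume names:

```
# Default created by "vfiler dr configure" - updates every 3 minutes:
# prod-filer01:nfs_data01  dr-filer01:nfs_data01  -  0-59/3 * * *

# Relaxed to hourly, on the hour:
prod-filer01:nfs_data01  dr-filer01:nfs_data01  -  0 * * *
```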
Also, I'm a huge fan of 7-Mode, but I made the jump to cDOT recently with 9.0, and you should start planning this for the future. 7-Mode is end of engineering support at this time, plus CIFS has no SMB 3 support.
2017-02-09 07:38 AM
Thank you for the quick reply. I was wanting to avoid IP spaces if possible. The docs didn't say you had to use them, but it is implied that you normally would.
While yes, technically CIFS and NFS share the same domain. I did think about having just one vfiler (like a "prod" vfiler) which gets mirrored to the DR vfiler; that would contain all volumes on the prod side and serve the purpose, however you would have to fail the lot over (CIFS and NFS) at the same time.
There is actually a different current requirement (I didn't muddy the original post with this): the customer might want the ability to fail over individual applications separately from each other. That could put us in a position of having 10 or so vfilers, one for each application, each with its associated LDOM/Oracle DB/OS NFS volumes and VLANs. This would allow individual applications to be failed over independently in DR, rather than an all-or-nothing DR.
I thought we would have to lump the swap volumes in with the rest from what I read - that is a bummer, as it is unnecessary replication really. I had read about the snapmirror.conf default as well, but thank you for highlighting it again.
Also, we have a rather large Protection Manager driven OSSV estate (mainly for the Unix hosts and LDOMs) which isn't going anywhere any time soon, so we can't move to cDOT yet, much as I would like to (svm-DR would be a big improvement for us here, as would the general fluidity it would give us), but we are "stuck" on 7-Mode for the foreseeable future.