First, snap reserve does not limit the number of blocks consumed by snapshots; it limits the amount of free space as seen by clients. When a 100G volume has a 20% snap reserve, clients will believe the volume is 80G in size. This is done mostly to avoid confusion in free space calculations. But whatever the snap reserve is set to, snapshots can still spill over and consume all available space. Fractional reserve is what ensures that a LUN will have free blocks to write into. Let's assume you have a 10G LUN on a 15G volume. You fill it up and take a snapshot. That 10G is now frozen and cannot be overwritten, so the next 5G of overwrites will fill up the remaining free space, and the OS will complain when trying to write anything beyond that (the file system will be unmounted or corrupted, maybe the OS crashes). Snap reserve does not protect you because, as explained, nothing prevents snapshots from "spilling over". When fractional reserve is set to the default 100%, snapshot creation would instead fail with a not-enough-free-space error.
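The space math above can be sketched as a toy model (my own simplification, not NetApp's actual allocation logic): with fractional reserve, taking a snapshot must leave enough free space in the volume to overwrite the reserved fraction of the LUN.

```python
def snapshot_allowed(volume_gb, lun_gb, fractional_reserve_pct):
    """Toy model: a snapshot of a full LUN is only allowed if the
    volume still has enough free space to absorb overwrites of the
    reserved fraction of the LUN."""
    free_after_lun = volume_gb - lun_gb                 # space left once the LUN is full
    required = lun_gb * fractional_reserve_pct / 100    # overwrite headroom to reserve
    return free_after_lun >= required

# 10G LUN on a 15G volume, default 100% fractional reserve:
print(snapshot_allowed(15, 10, 100))   # only 5G free, 10G needed -> False
# The same LUN on a 20G volume leaves the full 10G of headroom:
print(snapshot_allowed(20, 10, 100))   # -> True
```

This is why, in the 10G-LUN-on-15G-volume example, the snapshot either fails up front (100% reserve) or the volume later runs out of space mid-write (no reserve).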
What about just creating a top-level directory, moving the content of the qtree into it, and then renaming it to the original qtree name? Moving files between directories is pretty fast, even for a large number of entries. There are some things to watch out for when doing it, but this would be the fastest way.
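The move step could look roughly like this (a sketch with made-up paths, run against a scratch directory standing in for an NFS-mounted volume; the final rename back to the original qtree name is left out). Within one file system, rename() is a metadata-only operation, which is why it stays fast even for many entries:

```python
import os
import tempfile

def move_tree_by_rename(qtree_path, staging_name):
    """Move every top-level entry of qtree_path into a new sibling
    directory via rename(); within one file system this touches only
    metadata, so it is fast even for a large number of entries."""
    parent = os.path.dirname(qtree_path.rstrip("/"))
    staging = os.path.join(parent, staging_name)
    os.mkdir(staging)
    for entry in os.listdir(qtree_path):
        os.rename(os.path.join(qtree_path, entry),
                  os.path.join(staging, entry))
    return staging

# Demo on a throwaway directory tree:
root = tempfile.mkdtemp()
qtree = os.path.join(root, "qtree1")
os.mkdir(qtree)
for name in ("a.txt", "b.txt"):
    open(os.path.join(qtree, name), "w").close()
staging = move_tree_by_rename(qtree, "newdir")
print(sorted(os.listdir(staging)))   # ['a.txt', 'b.txt']
```

One of the things to watch out for: on a real filer, a rename that crosses a qtree boundary is not a plain rename, so test the behaviour on your setup first.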
1. Yes.
2. Beyond the standard configuration for ESX you need SnapMirror and SnapRestore, and FlexClone does not hurt either; otherwise it is pretty much standard from the NetApp point of view. Check the NetApp IMT and the VMware site for supported combinations of versions etc.
3. SRM goes on top of SnapMirror.
4. It provides you with a single button to do all the steps required to fail over VMs from one site to another. You can customize it to include any script (e.g. to modify VM network settings if required), and you can run a test failover without disturbing production VMs (that is where you need FlexClone). The replication itself is done by SnapMirror.
5. Correct.
6. It increases administration overhead, it makes deduplication less effective, and you may hit the limit on the number of volumes earlier than the limit on the number of VMs (or, I guess, you hit the ESX limit on datastores first; I believe it is 32 in total).
Yes, it is workable. If you use SnapMirror to replicate from A to B, you can later reverse the direction to replicate from B to A. Assuming the data on site A is still physically available, the reverse replication will transmit only the changes since the last successful replication from A to B. It is better if every VM is completely contained in one volume, to ensure the replicated image is write-order consistent. As already mentioned, SRM largely automates everything for you.
"If approx 100mb is generated in an hour, then approx 400mb could be generated in 4 hours" - not necessarily. If the same 100MB are overwritten every hour, the 1-hour and the 4-hour snapshots will have exactly the same "size". In general, longer intervals tend to have a "compression" effect. Unfortunately this is not true in the case of Oracle archive logs, because there nothing is ever overwritten; the amount to transfer will always be exactly the size of the archive logs generated since the last run.
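The difference between the two workloads can be shown with a small simulation (my own toy model): a snapshot delta is the set of blocks changed since the base snapshot, so overwriting the same block twice does not grow it, while append-only writes always do.

```python
def snapshot_delta(writes_per_interval):
    """Blocks changed since the base snapshot = union of all blocks
    written in the interval; overwriting a block twice does not grow
    the delta."""
    changed = set()
    for hour_writes in writes_per_interval:
        changed |= hour_writes
    return len(changed)

same_blocks = {f"block{i}" for i in range(100)}   # same 100 "MB" rewritten each hour
print(snapshot_delta([same_blocks]))              # 1-hour snapshot: 100
print(snapshot_delta([same_blocks] * 4))          # 4-hour snapshot: still 100

# Append-only workload (archive logs): every hour touches new blocks
logs = [{f"log{h}_{i}" for i in range(100)} for h in range(4)]
print(snapshot_delta(logs[:1]))                   # 100
print(snapshot_delta(logs))                       # 400, grows linearly
```

For the overwrite workload the 4-hour delta equals the 1-hour delta; for the archive-log workload it is exactly four times larger.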
Every filer sees only those disks that have been explicitly assigned to it (disk assign). On those disks exactly one root volume exists. It may happen that disks you add from another system and assign still contain a foreign root volume. I do not know how the filer handles this or which root will be selected during boot; when this happened once on a customer system, the foreign root volume was marked as "next root", meaning the filer would boot from it next time; we quickly destroyed it as it was not needed.
It could theoretically be scripted ... something along the lines of:
- create a FlexClone of the original datastore
- present it to (another?) ESX
- mount the VMDKs' snapshots on the backup server
- back up using whatever software and policies are required
This would give an incremental backup of every snapshot. Such a backup could even be offloaded to a non-ESX server: there is a nice NetApp feature that allows presenting a file on a volume (the vmdk-flat file in this case) as a LUN. Of course I completely agree that snapshots are best suited for day-to-day incremental backups.
ndmpcopy will copy file ACLs. It won't create shares on the destination or replicate share ACLs. You could look at securecopy (available in the NOW toolchest) for copying share definitions between filers. There is no alternative to ndmpcopy (on NetApp) if you need to copy only a few directories. For a whole volume you can use vol copy, which should be much faster; for a whole qtree you could use QSM, but it requires an extra license.
The red messages have nothing to do with NetApp. NetApp supports the TCP/IP suite; maybe your network includes IPX, AppleTalk, NetBEUI (hopefully not) or something else, and this traffic leaks to the NetApp. If it is of concern, you have to sniff traffic on the ports connected to the NetApp and analyze it. The first step would be to sniff traffic on any port in the VLAN and check for excessive non-IP broadcasts. Regarding the second warning: it has been discussed at length here recently (I do not have the reference handy). Basically, different NFS/CIFS operations have different lengths, most of them far smaller than 1500 bytes. I am not sure anything can be done here.
From the NetApp side there is no problem increasing (or decreasing) a LUN up to 10x its original size. Host behaviour varies. W2k3/W2k8 can easily recognize and use the additional space online; you have the option to either extend the existing partition or create another one. W2k8 can even shrink a volume. You do not need SDW for that (but of course you need to know how to do it without it). SDU does not support LUN expansion (it did not last time I checked); it supports volume expansion only together with a host-based volume manager, and will create an additional LUN and extend the volume with it. Under Solaris there is no official way to extend an existing disk; the only way known to me is to use VxVM 4.x or newer, which supports it.
The recommendation of more than 3 drives (7 drives) for SnapMirror applies when using SnapMirror sync/semi-sync, where the cplogs and nvlogs get written to the root aggregate and more spindle I/O is needed. In current Data ONTAP the logs are written to the aggregate that hosts the target volume.
Use the SDK to create a custom output format. The SDK is available for C, Perl, Java (not an exclusive list; I believe PowerShell as well) and contains plenty of samples, including how to access snapshot information; you just have to add the desired output format.
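The "custom output format" part is the easy half. As an illustration, here is a sketch that turns snapshot records into CSV; the field names and the sample data are hypothetical (in practice you would fill the list from whatever the SDK call returns):

```python
import csv
import io

def snapshots_to_csv(snapshots):
    """Format snapshot records into CSV. The field names used here
    (name, busy, total_kb) are illustrative, not the SDK's actual
    schema."""
    out = io.StringIO()
    writer = csv.DictWriter(out, fieldnames=["name", "busy", "total_kb"])
    writer.writeheader()
    for snap in snapshots:
        writer.writerow(snap)
    return out.getvalue()

# Hypothetical records standing in for real SDK output:
sample = [
    {"name": "hourly.0", "busy": False, "total_kb": 1024},
    {"name": "nightly.0", "busy": True, "total_kb": 20480},
]
print(snapshots_to_csv(sample))
```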
"Often people simply don't *have* a second system to put the disk into. And even if they do have one, it is normally a production system where you can attach and zero the disks, but not remove them while the system is running." Yes, but the question was how to zero disks before putting them into a production system. I just observed that one could zero disks after putting them into a production system.
“No” to all your questions. What you could do is add a PAM (II) module; depending on the workload this may cut latency quite noticeably.
Why exactly would you want to zero them? You can just put them in another system, destroy whatever aggregate gets assembled, and zero the spares there. The only slightly annoying effect is the support call-back (depending on your support agreement details) after they see ASUPs for “broken” aggregates.
Do you need share ACLs or file ACLs? Share ACLs are shown in the “cifs shares” output, and file ACLs can be listed using the fsecurity command on the filer.
Sorry, there was a misunderstanding indeed. Yes, I believe both your diagrams are possible; I'd still get a support statement from your NetApp representative, though.