We are using it very carefully on FMC configs (though in a stretched MC it's almost always a win), because it can cause very heavy performance impacts (or even panic the filer) if one of your backend fabrics has connection problems (links going down briefly from time to time, for example). Since we have a few customers with large distances between the sites (>50km), and some of them have already had these kinds of problems, we set it to "local" by default on FMC and only switch to "alternate" if the fabrics/links turn out to be very stable (i.e. no port errors on the backend switches etc.) -Michael
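For reference, a minimal console sketch of switching the read preference. The post doesn't name the option, so the assumption here is that it's raid.mirror_read_plex_pref on a 7-mode MetroCluster:

filer> options raid.mirror_read_plex_pref            # show the current setting (option name assumed)
filer> options raid.mirror_read_plex_pref local      # only read from the plex local to this controller
filer> options raid.mirror_read_plex_pref alternate  # spread reads across both plexes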
You have to do it client-side, there's no way around that. From there, it depends on what systems you have connected. ESX is easy, you can do it with Storage VMotion. Otherwise you'll probably need to create new LUNs and copy all files over somehow. I'm not aware of any product that lets you migrate from 3rd-party storage to NetApp -Michael
Make sure you have installed the NetApp Host Utilities for Solaris on all hosts. This looks like a timeout issue, i.e. your host loses access to the disk(s) for a short period of time during takeover/giveback. You can also increase the LUN disconnect timeout yourself (if you know how to do that in Solaris; I don't), or simply install the Host Utilities -Michael
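As a rough sketch of the do-it-yourself route (the tunables and values below are assumptions on my part; the Host Utilities set these and related parameters for you, so check their documentation for the exact values), you would raise the SCSI disk I/O timeouts in /etc/system on the Solaris host and reboot:

* raise the disk I/O timeout to 120 seconds (0x78); values are assumed, see the Host Utilities docs
set sd:sd_io_time = 0x78
set ssd:ssd_io_time = 0x78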
I can only answer part of your question: Each filer has its own spare disks, so it's perfectly normal to see different spare counts for each filer. However, why the partner filer still thinks that 1c.58 is FAILED although the owning filer (filerB) sees the disk as working is beyond me. Maybe a simple "disk fail" followed by "disk unfail" (in diag mode) on filerB can resolve that issue. Otherwise I would open a case with NetApp if it bothers you too much. But as long as the owning filer sees the disk as usable (and not the other way round) it's normally not a problem -Michael
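A quick sketch of that fail/unfail cycle on the owning filer (disk name taken from the thread; double-check which disk you are touching before running this):

filerB> priv set diag
filerB*> disk fail 1c.58      # mark the disk as failed
filerB*> disk unfail 1c.58    # bring it back; it should show up as a spare again
filerB*> priv set admin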
Depends on how you do the move, and from where to where. If you move files from one volume to another, the blocks will NOT be reused. If you move a file inside a volume, the blocks will get reused, as the file basically only gets a new name. You have to do a "real" move though; a "copy & delete original" style operation will of course not work and will always result in new data blocks being used. There's no way around this as it is how WAFL works (i.e. block-level re-use only works inside the same volume) -Michael
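To illustrate, from an NFS client with both volumes mounted (mount points are hypothetical):

# inside the same volume: a real move is just a rename, the existing blocks are re-used
mv /mnt/vol1/projects/big.iso /mnt/vol1/archive/big.iso

# across volumes: the data has to be rewritten, so new blocks are allocated on vol2
mv /mnt/vol1/projects/big.iso /mnt/vol2/archive/big.iso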
No, it means you cannot hot-add the shelf to a loop containing FC shelves. You can, of course, hot-add a new DS14 shelf to a completely new FC port on your filer, as long as this port is configured as INITIATOR (not TARGET). See, for example, the DS14mk2 AT Hardware Service Guide (https://now.netapp.com/NOW/knowledge/docs/hardware/filer/210-02360_F0.pdf), page 48, chapter "Hot-adding a disk shelf to an existing adapter in your system". Hope that helps -Michael
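A short sketch of checking and, if necessary, changing the port personality beforehand (port 0c is just an example; the port has to be unused/offline and the change may require a reboot):

filer> fcadmin config                    # list onboard FC ports and whether they are target or initiator
filer> fcadmin config -d 0c              # take the unused port offline
filer> fcadmin config -t initiator 0c    # set it to initiator so it can drive a disk shelf
filer> fcadmin config -e 0c              # bring the port back online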
You can use the Space Reclaimer from within SnapDrive to accomplish the same objective. And with LUNs you still have the same problem that any free space you gain in the volume (regardless of what triggers it: TRIM, deduplication, ...) is not immediately available to ESX, and you have to use LUN overcommitment to effectively use that free space. BTW I think it will take a bit more than "a few days" for vSphere 5 to be released. Last time I heard they were talking about a March 2010 release, or somewhere around then -Michael
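SnapDrive for Windows also has a command-line front end (sdcli), so something along these lines should kick off Space Reclaimer from a script; the switches are an assumption on my part, check the SnapDrive documentation (G stands in for the drive letter your LUN is mounted on):

C:\> sdcli spacereclaimer start -d G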
What do you mean by "firmware for VMware"? Firmware is for hardware, so you probably mean the BIOS, which you'll have to check with your server vendor. Or maybe you mean firmware for an FC HBA or iSCSI HBA, in which case you'd have to check the VMware HCL and/or look on the HBA vendor's pages. If you can be more specific about what you're looking for, maybe we can help you better -Michael
This happens if you delete files (or snapshots) from inside your LUN. ESX only marks the blocks as "free", but on the NetApp they still contain the old data and are thus "used". There's no easy way to "reclaim" this space on the filer, at least not with ESX. -Michael
eric.barlier wrote: With respect to your problem. As mentioned you cannot have 1 disk on one controller. the least amount you can have is 2 for a raid4 aggr. to accomodate a root volume.

Just some nitpicking, it is perfectly possible from a technical point of view to have a single-disk root aggregate. However this is not documented, not supported and *will* get you into trouble if you try and open up a support case 😉

netapp10> sysconfig -r
Aggregate aggr0 (online, raid0) (block checksums)
  Plex /aggr0/plex0 (online, normal, active, pool0)
    RAID group /aggr0/plex0/rg0 (normal)

      RAID Disk Device   HA  SHELF BAY CHAN Pool Type  RPM  Used (MB/blks)    Phys (MB/blks)
      --------- ------   ------------- ---- ---- ---- ----- --------------    --------------
      data      0c.00.2  0c    0   2   SA:B   0  SATA  7200 423111/866531584  423946/868242816

Pool1 spare disks (empty)

Pool0 spare disks (empty)

Partner disks

RAID Disk Device   HA  SHELF BAY CHAN Pool Type  RPM  Used (MB/blks)    Phys (MB/blks)
--------- ------   ------------- ---- ---- ---- ----- --------------    --------------
partner   0c.00.3  0c    0   3   SA:B   0  SATA  7200 0/0               423946/868242816
partner   0c.00.6  0c    0   6   SA:B   0  SATA  7200 0/0               423946/868242816
partner   0c.00.0  0c    0   0   SA:B   0  SATA  7200 0/0               423946/868242816
partner   0c.00.1  0c    0   1   SA:B   0  SATA  7200 0/0               423946/868242816
partner   0c.00.7  0c    0   7   SA:B   0  SATA  7200 0/0               423946/868242816
partner   0c.00.5  0c    0   5   SA:B   0  SATA  7200 0/0               423946/868242816
partner   0c.00.4  0c    0   4   SA:B   0  SATA  7200 0/0               423946/868242816
Since this is a per-share setting, you can change it for every CIFS share that you have. From the CLI you can do it with

cifs shares -change SHARENAME -vscan -novscanread

This command changes the policy to only scan on writes, not on reads. There's more info about CIFS virus scanning in the "Data Protection Online Backup and Recovery Guide" -Michael
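To verify the change afterwards, just list the share again; the output includes the share's options, among them the vscan flags (share name is only a placeholder):

filer> cifs shares SHARENAME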
This is a WAFL limitation and it is still present in 8.0.1: block pointers inside Snapshots cannot be changed by the filer. We always schedule our SIS jobs to run at 23:00, one hour before the nightly snapshot. This means that at least the nightly and weekly snapshots (which will be around for some time) are well deduplicated. The hourly snapshots will probably not be 100% deduplicated (i.e. data changed since the last SIS run is not), but since they're not around for that long (a few days at most, normally) this is a tradeoff you can most probably live with. In general you'll simply have to try it, since it depends heavily on your data volume, change rate, and so on. But you won't feel any ill side effects from doing it like this -Michael
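A sketch of that kind of scheduling on a 7-mode filer (volume name and snapshot counts are only examples):

filer> sis config -s sun-sat@23 /vol/vmdata    # run deduplication every day at 23:00
filer> snap sched vmdata 2 7 8@8,12,16,20      # 2 weeklies, 7 nightlies (taken at midnight), 8 hourlies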
I don't know any tool for viewing the most active files off the top of my head, but there is "nfsstat -d", which shows you misaligned vmdks, and these are more often than not the source of increasing performance problems (misaligned IOs multiply the IOs that your disks have to do). On the ESX side there's esxtop, which can produce files that can be opened and viewed in Windows perfmon. There you see the IO generated by each VM, or on each HBA/datastore. Hope that helps -Michael
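For example, a batch capture from the ESX console that you can later open in perfmon (sample interval and count are arbitrary):

# 10-second samples, 180 iterations = 30 minutes of data
esxtop -b -d 10 -n 180 > /tmp/esxtop_capture.csv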
Aah. If the source has already released the relationship then you can simply do

snapvault stop /vol/ossv_backup01/obsljtdb02

to remove the qtree and delete the backed-up data -Michael
Could you try it with only the secondary path? As in

filer> snapvault release /vol/ossv_backup01/obsljtdb02

Also, maybe post the output of "snapvault status" and "snapvault destinations", too. I don't think it's a bug; that would be a rather big one and would have been noticed almost instantly -Michael
I think there's something wrong with what you pasted here. It says that your primary path is "owbsljtdb02:/" and your secondary path is "dcbsnap02:/vol/ossv_backup01/obsljtdb02". But snapvault release expects the secondary path as the first argument and the primary path as the second. So I *think* you have to type

snapvault release /vol/ossv_backup01/obsljtdb02 owbsljtdb02:/

on your filer (sorry, but I don't have an OSSV system here to test at the moment) -Michael
Your primary path is something like FILER:/vol/volumeXY/qtreeZ, or in the case of OSSV, something like SERVER:/C:/. If in doubt, just copy/paste the output of "snapvault status" -Michael
Again, you're mixing things up. The 100k-entries-per-directory limit has nothing to do with hard vs. soft links. It simply means you cannot have more than 100000 entries in one directory (minus the 2 entries that are always there for . and .., so in reality you can only have 99998 files). It really doesn't matter whether they are hardlinks, softlinks, subdirectories, or anything else; the limit is on the number of *entries* in the WAFL directory file. The other limit that was mentioned above (Bug 292410) is something different: that one is about hardlinks to one file (which can reside in different directories), and there, too, the limit is 100000 hardlinks. -Michael
From my experience, the problem is worse with >90% full volumes than it is with >90% full aggregates. If the volume is too full, the background reallocation task can't do its job correctly, and manual defragmentation (with "reallocate start") will also be impossible. This is the only reason why I always suggest keeping volumes below 85-90%: reallocation runs in each volume and needs free space to work with -Michael
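If you want to check and fix the layout manually, a sketch (volume name is just an example):

filer> reallocate measure /vol/vmdata     # start a measurement job to see how bad the layout is
filer> reallocate status -v /vol/vmdata   # check the measured optimization value / job progress
filer> reallocate start -f /vol/vmdata    # force a full reallocation of the volume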
Basically, you want to try to avoid that both the disks and the controller stop at the same time. This makes it easier (possible) for the OS to make decisions concerning a fail-over. When everything goes at the same time, you get a "split-brain" situation. Just some nit-picking: No, you won't get into a split-brain situation; you never get into that with NetApp. You get into a situation where the filer cannot be 100% sure that a takeover will NOT result in a split brain. "Split brain" means you have two copies of your data active, one in each site. This *could* happen if the controller took over automatically, but since it won't (you have to do "cf forcetakeover -d") you don't get into the split-brain situation. -Michael
Check if fcp is running at all ("fcp status"). Also check what initiators your filer sees as connected ("fcp show initiators"). Finally, check that you have configured your (onboard) ports as target ports ("fcadmin config") -Michael
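Put together as a console sketch (port 0a is just an example; changing a port's type requires it to be offline and may need a reboot to take effect):

filer> fcp status                     # is the FCP service running?
filer> fcp start                      # start it if it isn't (requires the FCP license)
filer> fcp show initiators            # which host WWPNs does the filer see logged in?
filer> fcadmin config                 # onboard ports must show type "target" for FCP
filer> fcadmin config -t target 0a    # convert an initiator port to target if needed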
Depends on the throughput to your datastore. If you use VMFS (LUN) datastores, I wouldn't recommend aligning more than 1 VM at a time per ESX server (as the bandwidth from the service console is very limited). If you have NFS datastores and are aligning from a Linux system (i.e. not the service console), you can probably align 4-8 or more in parallel per filer, depending on the bandwidth to your filer. -Michael
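For illustration, assuming the NetApp mbralign tool is what you're using (the post doesn't name a tool, so both the tool and the exact invocation are assumptions on my part; check its documentation, and power the VM off before aligning):

# from the ESX service console (VMFS) or a Linux host with the NFS datastore mounted
./mbralign /vmfs/volumes/datastore1/vm01/vm01-flat.vmdk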
VSM is not possible between 32-bit and 64-bit aggregates. Even if you *could* move your destination from 32-bit to 64-bit, your SnapMirror relationship would not work anymore because your source is still 32-bit. You need to migrate both the source and the destination to 64-bit, and then it is probably faster to just migrate the source and do a new baseline transfer (via LREP/SM2T if your bandwidth is limited) -Michael