You say it's a huge investment to replace those 300 disks, but if you extend the warranty/support contracts instead, the price will probably be comparable. At least that's what we're seeing with our customers: almost none of them buy another 3 years of warranty for their existing disks once they see that they can get a new system, including warranty, for the same price.
There's no hurry to upgrade to cDOT; 7-Mode will be supported for quite some time. So you can safely migrate to cDOT the next time you do a hardware refresh and upgrade your controllers. Then you can migrate via the 7MTT. -Michael
It's usually easier to just replace the disk, though 🙂 That's what we do with our customers: they pay a lot of money for NetApp support, and replacing a disk every now and then is perfectly fine. -Michael
According to the system configuration guide (no longer available online) and the /etc/sysconfigtab files, the NVRAM card in a 6040HA has to go in slot 1 for HA; if it's in slot 2, that's a stand-alone configuration. Try swapping the card into slot 1 and see if that helps. If not, please post the complete boot messages that the system spits out, as well as the "sysconfig -av" and "sysconfig -cv" output. -Michael
We have a few customers who were hit by that problem. You can recognize it when "sis status -l" prints incredibly large numbers for "stale metadata" (in one case we saw around 3500% stale metadata, but anything over 30% or so might indicate a problem).

If you're hit by that bug, the only solution I found that fixes it 100% is the following:

1. disable SIS on the volume(s) in question: sis off /vol/volumename
2. upgrade OnTap to a newer version (8.1.2P4 is recommended as per the KB article, but newer is better; I'd suggest going directly to 8.1.4Px)
3. delete the SIS database: priv set diag; sis reset /vol/volumename
4. re-start SIS: sis on /vol/volumename; sis start -s /vol/volumename

This has always fixed it for us. Note that we had a few cases with 8.1.2P4 where a simple "sis start -s" after the upgrade did not help; we had to do a "sis reset" as well. -Michael
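For reference, here's the whole sequence as it would look on the console, assuming a hypothetical volume named vol1 (the volume name and the annotations are mine, not from the thread; the OnTap upgrade happens between the "sis off" and the "sis reset"):

    sis off /vol/vol1            (disable dedupe on the affected volume)
    ... upgrade OnTap here ...
    priv set diag                (sis reset is a diag-level command)
    sis reset /vol/vol1          (delete the stale SIS database)
    priv set admin               (back to the normal privilege level)
    sis on /vol/vol1             (re-enable dedupe)
    sis start -s /vol/vol1       (full rescan to rebuild the fingerprint database)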
I guess you didn't read the OP's post. He specifically said: justin.smith wrote: "the Datastore is empty. No snaps, no dedupe." Also, the dedupe bug you mentioned doesn't occupy that much space in the volume (75% in this case!). But without any info on the OP's OnTap version and/or the output of "aggr show_space", we cannot do any more debugging here, I think... It could also be that the volume was once thin-provisioned and resized to a ridiculously large size (16TB or even bigger); if you do that, your metadata grows a *lot*, and the space used by the metadata is not freed when the volume is shrunk again, which can result in something like what the OP sees. -Michael
do an "aggr show_space -h" and post that here. Could it be that the volume has been very large (say, 16TB or bigger) and has been resized down to 1TB?
I don't understand your question. ndmpcopy works on the WAFL level, not on the filesystem inside the LUNs. Or do you mean installing an open-source NDMP copy tool on the client and using that to transfer the data? In that case it would be easier to simply use robocopy (Windows) and/or rsync (Linux/Unix).
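If you go the host-side route, a minimal sketch (mount points, drive letters and flags are hypothetical examples, not from the thread):

    rsync -avP /mnt/src/ /mnt/dst/               (Linux/Unix: copy preserving permissions/times, with progress)
    robocopy D:\ E:\ /MIR /COPYALL /R:1 /W:1     (Windows: mirror including ACLs, with short retry/wait)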
Some things to look out for:
- long stretches where the Disk Util is around 98% (i.e. for more than a minute or so) while the filer does *nothing else* (i.e. no SnapMirror, no backup, not much client activity going on, etc.) might indicate problems with disks or the RAID layout
- multiple consecutive CP types with a capital B should also be avoided (generally, when you see ~5 or so consecutive B's you might have a problem with your aggregates)

Note that these only *hint* at problems; they are not definitive indications. -Michael
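For completeness: both values come from sysstat. Something like the following (a one-second interval; the exact invocation is just an example) lets you watch them live:

    sysstat -x 1

Look at the "Disk util" column for the disk utilization and the "CP ty" column for the consistency point types (that's where the B's show up).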
No, this is indeed a hardware problem. It has to do with the drive firmware not reporting media errors in time (note that the drives in this post have firmware NA00, while NA03 is the latest). See http://support.netapp.com/NOW/cgi-bin/bol?Type=Detail&Display=606576 for the bug. There were some TSBs sent to partners last year that mentioned this problem (I don't remember the number right now). -Michael
just do "vol size" on the destination volume with the new size. It will tell you that you just set the new "maximum size" for the volume and that it needs another snapmirror upgrade to get the new size. Simply run a new snapmirror update and it will automatically be resized to the same size as the source volume -Michael
It looks like something (a script? something connected to the serial port?) is repeatedly trying to log in with a wrong username/password. Please check your /etc/messages file to see if there's an IP address mentioned there. If not, check the serial port and disconnect whatever is attached (if anything). -Michael
ndmpcopy will almost certainly not work, because it copies the LUN geometry as well, so the new LUN will have the same geometry (and limitations) as the old one. Your only option is to do the copy on the client (i.e. create & map a new LUN, copy all data over from the client, unmap & delete the old LUN). [*]

To avoid this in the future, always create your LUNs with a 2TB size and immediately resize them down to your desired size. That way the initial size (2TB) is used for the geometry and everything will be fine. -Michael

[*] There is an undocumented way to change the LUN geometry, which would allow you to resize it even further, but in that case you have to be absolutely sure that nothing on that LUN uses the old C/H/S addressing scheme, only LBA. Otherwise you will lose data.
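A minimal sketch of that create-big-then-shrink trick, assuming a hypothetical Windows LUN at /vol/vol1/lun0 that should end up at 500g (path, OS type and sizes are examples only):

    lun create -s 2t -t windows /vol/vol1/lun0    (the geometry is derived from the initial 2TB)
    lun resize -f /vol/vol1/lun0 500g             (-f is needed to shrink, so do this before putting any data on the LUN)

Since the geometry was calculated for 2TB, you can later grow the LUN back up towards 2TB without hitting the geometry limit.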
The FAS3140 has an aggregate limit of 75TiB in 8.1.x. The maximum (optimal) RAID config, according to the "Storage Subsystem Technical FAQ", is 2 RAID groups with 17 drives each. That would leave you with one aggregate of 34 disks in two RAID groups and one aggregate of 12 disks in one RAID group (leaving 2 spares). That's probably what I would go with, unless you want (almost) equal space in the two aggregates, in which case your idea (22 disks for aggr1 and 24 disks for aggr2) looks good, although you'll lose a few TiB in total because of the two additional Parity/DParity drives for the fourth RAID group. -Michael
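To make the trade-off explicit, here's the parity arithmetic behind that (assuming RAID-DP with 2 parity disks per RAID group and 48 drives total, which is my reading of the numbers above):

    Option A: 34 + 12 disks in 3 RAID groups = 6 parity disks, 40 data disks (+2 spares)
    Option B: 22 + 24 disks in 4 RAID groups = 8 parity disks, 38 data disks (+2 spares)

So the (almost) equal-sized layout costs you two data disks' worth of capacity.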
Simple answer: don't. Just let the filer run until the very last second, when your UPS shuts down; thanks to NVRAM you won't lose any data. Just make sure that all the other servers accessing the filer are shut down first. P.S.: The integrated NetApp UPS daemon isn't supported in more recent OnTap versions anyway, since there's no need for it. -Michael
In that case, check for differences in the following areas:
- export policies
- volume security styles
- vserver default users for NFS
- vserver name mappings, local users, and local groups
-Michael
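A few commands that should show those settings, assuming a hypothetical vserver named vs1 (the vserver name is an example only, and the exact commands may differ slightly between OnTap versions):

    vserver export-policy rule show -vserver vs1
    volume show -vserver vs1 -fields security-style
    vserver nfs show -vserver vs1
    vserver name-mapping show -vserver vs1
    vserver services unix-user show -vserver vs1
    vserver services unix-group show -vserver vs1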
Try "showmount -e 10.10.10.74"; you should see your exports then. Also check "options nfs" and make sure that UDP is disabled and TCP is enabled (UDP has a nasty bug on some OnTap versions). Try mounting explicitly over TCP with "mount -o proto=tcp 10.10.10.74:/... /mnt/..." -Michael
I'm not sure you can migrate that easily from a V-Series (V3140) to a regular FAS... And since there is no V-Series license for the 2240 (i.e. no V2240), you will have to do a little bit more than just swap heads. -Michael
Can you give the exact command lines you used for creating the OSSV backup and for restoring it? LREP_reader and LREP_writer are a little bit picky about their syntax.
"snapvault stop" deletes the qtree on the destination, don't do this if you don't want to re-baseline. try the method Paul suggested, i.e. snapmirror you SV secondary volume into another volume. The only thing you need to do afterwards is snapmirror break and "snapvault start -r" (which updates the snapvault database and resyncs with the source). Simply renaming the volumes will not work as the SV database stores the volume ID (which changes after snapmirror) So: snapvault quiesce ... snapmirror initialize... update... snapmirror break snapvault start -r .... -Michael
The IOM is the module that gets plugged into the DSx24x and DSx48x disk shelves to do I/O. Is that what you wanted to know? Otherwise you'll have to be more explicit in phrasing your question. -Michael
If you still have 3 free disks it's no problem (it's probably best to use RAID4 + spare in that case, and remember that the "current" controller still needs at least one spare!). However, you need a downtime/reboot for this procedure. Just do this (see the sketch after this list for the /etc/rc part):

1. options disk.auto_assign off (this prevents the current controller from automatically re-claiming the disks you're about to release)
2. priv set diag; disk remove_ownership <diskid> (for all 3 disks that you have left over. Make sure that you get the right disks here!)
3. license add <licensekey> (you should have received this when you bought the second controller)
4. plug in the second controller and do a base setup via serial
5. reboot the first controller to enable the HA takeover features
6. make sure that you set up the partner VIFs correctly from here on (use any NetApp base setup guide as reference): mostly, the "ifconfig" lines in /etc/rc need a "partner" definition, you need to do "cf enable", and test takeover/giveback

-Michael
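For the "partner" definition mentioned in the last step, a minimal sketch of what such an ifconfig line in /etc/rc might look like (interface name, address and netmask are hypothetical examples):

    ifconfig vif0 192.168.1.10 netmask 255.255.255.0 partner vif0

The "partner" keyword names the partner interface whose address this interface takes over during a cf takeover.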