From what I've been able to gather:

daily (1d) stats: averaged every 15 minutes
weekly (1w) stats: averaged every 2 hours
monthly (1m) stats: averaged every 8 hours
quarterly (3m) stats: averaged every 1 day
yearly (1y) stats: averaged every 4 days

This probably applies to CPU utilization and other types of reports as well. Cheers, Richard
We've come across the bug also, with OM 4.0 on Linux. We don't see the option "hostEnableSNMPBasedQtreeMonitoring" available:

# dfm option set hostEnableSNMPBasedQtreeMonitoring=Yes
Error: hostEnableSNMPBasedQtreeMonitoring is not a valid option.
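You can at least see which options are valid on your build and check whether anything qtree-related exists under a different name:

    dfm option list | grep -i qtree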
Some SNMP traps carry more information than others; I couldn't say one way or the other whether the quota traps contain enough. Before we had DFM we were intending to use syslog to trap quota messages, until we realized that the syslog message provides a Windows SID rather than a username, and no qtree name.

You can always try decreasing the monitoring interval for quota checks and see what happens (a rough sketch follows this post). Every environment is different, so it may be that your systems can tolerate such a load without adversely affecting anything. Note that DFM also maintains its own database of user quota information, so that database has to be updated each time a check runs.

I doubt very much that the storage controller itself will ever have the functionality to do what you want. The more you give it to do that isn't directly related to serving NAS/SAN, the less effective it will be at those tasks. Cheers, Richard
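If you want to experiment, something like this would find and lower the interval - but the option name in the second line is a guess, so list the options on your DFM version first and use whatever it actually calls the quota/qtree monitoring interval:

    dfm option list | grep -i interval
    dfm option set quotaMonInterval=1hour    # option name and value format are assumptions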
Here is my recommendation: do not use the built-in quota notification system in DFM to generate near real-time alerts when users approach or exceed quota. It is designed to run on a periodic basis because it asks your storage controllers for a quota report, and this generates a lot of work for your DFM server and your storage controllers, depending upon the complexity and scale of your systems. You MAY get away with bringing your quota monitoring interval down to an hour or so, but I would still be wary of the additional load this generates.

As others have said, you really should look into SNMP traps. Read the section in the DFM admin guide about these. It describes the pros and cons of changing monitoring intervals and suggests that SNMP traps - event notifications sent from the storage controller to an SNMP trap listener about a specific event - are better for near real-time monitoring. It's the difference between having your kids ask you every minute "are we there yet?" and you telling them when you arrive.

I have not looked into the details, but I'm quite sure the storage controller will send an SNMP trap when a user quota limit is hit. You (or someone) will have to examine the NetApp SNMP MIB to see what the specific event is so that you can watch for it. You can then configure an alarm in DFM that triggers when the trap is received, and have the alarm call a script that emails the user as required (a rough sketch follows below). This approach isn't simple: it requires knowledge of SNMP and scripting.

As someone else pointed out, it would be nice if DFM could natively handle SNMP quota messages and send alerts to users in the same way that quota monitoring already works. I haven't checked - does NetApp have a mechanism for submitting RFEs? Cheers, Richard
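The script side might look something like this. Everything here is illustrative: how DFM passes event details to an alarm script (arguments vs. environment variables) varies by version, so check the admin guide, and the mail command and address are placeholders:

    #!/bin/sh
    # Hypothetical alarm script, wired to a DFM alarm on the quota trap event.
    # Assumption: DFM hands us the user and qtree path as the first two arguments.
    QUOTA_USER="$1"
    QUOTA_PATH="$2"

    mail -s "Quota warning for $QUOTA_PATH" "$QUOTA_USER@example.com" <<EOF
    You are close to, or over, your quota limit on $QUOTA_PATH.
    Please free up some space or contact the storage team.
    EOF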
I didn't read the linked articles, but we're doing quota notifications with DFM. See the "dfm quota mailformat" command - it allows you to specify an email template file, and in that file you can specify your own mail headers. For example, at the top of the file I have:

From: <abc@xyz.com>
Subject: Your personal drive is nearing its quota limit

email content.....

You place the desired sender's email address in the From: field.
As far as I know (and I could be wrong), df in Unix has never reflected user quotas; it always shows the size of the file system. This is the same whether the file system is local or mounted over NFS. A qtree quota is different because it makes the OS think that the "disk" has a restricted size. The "quota" command, however, should display the quota size and usage for a given user (example below). Cheers, Richard
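For example (jsmith is a placeholder, and the output format varies by OS):

    df -h /home/jsmith     # reports the size and usage of the whole file system
    quota -v jsmith        # reports that user's quota limits and usage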
The following are the autodelete settings right now. I assume that the settings are retained even though I disabled it.

state             : off
commitment        : try
trigger           : volume
target_free_space : 10%
delete_order      : oldest_first
defer_delete      : user_created
prefix            : (not specified)
destroy_list      : none
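(For reference, a listing like the one above comes from the 7-Mode command below, volname being the secondary volume.)

    snap autodelete volname show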
FC disk drives, though, will eventually all be replaced with SAS drives. So for me the more intriguing question is: when will we see a SAS-to-FC (or SAS-to-FCoE) bridge to enable DS4243 shelf support in a MetroCluster setup?

Good question. I would also like to know what NetApp's roadmap is for MetroCluster (especially fabric) using the newer shelf technology. Richard
Hi Brendan, I make use of NDMP for backup a lot. I don't think I can answer all of your questions, but hopefully I can give some insights.

We use Symantec NetBackup (6.5) for our tape operations. We have used EMC Networker in the past, but not with NDMP backups. With NetBackup you can stream NDMP through the media servers to tape, rather than directly; I don't see why this would be different with Networker, but you should check on it. I'm not too familiar with the technical differences between NetBackup and BackupExec.

I'm quite sure that all of these backup programs will back up to disk rather than to tape - with the correct set of licensed features, of course. You can do incremental NDMP dumps, but we rely upon on-disk snapshots instead. Most of our stuff gets to tape only once a week, and that's a full backup each time.

With NetBackup you cannot simultaneously stream an NDMP backup to more than one destination ("in-line copy"). We make use of NetBackup's duplication features to make second copies, however, and I believe you can use this method to make a tape copy from a disk copy.

I think you'll find that none of these solutions will give you any kind of deduplication benefit on tape. Because tape is linear rather than random access, there is no equivalent to block-level deduplication. I believe there was one backup vendor advertising some kind of tape-level deduplication mechanism, but I can't remember who that was or how it was achieved.

Hope this helps some. Cheers, Richard
Nothing is really preventing me from using it. I set up a substantial number of Snapvault relationships before we even had it, so I was comfortable doing that on the command line. I was never really successful in pointing two primary qtrees into a single secondary volume in DFPM; it always seems to want me to create a separate volume on the secondary for each one on the primary. That could just be my misunderstanding, though. I also dislike the snapshot naming conventions that DFPM uses. For OSSV management it works quite well for us. Cheers, Richard
Gotcha. Interested to hear about EV, as it's something we're considering as an archive solution. Right now we don't have the space to Snapvault our 12TB or so of CIFS files, especially with the one-year retention that's required. I'm not certain that we could dispense with backups (either Snapvault or to tape) entirely; would you consider EV to be a backup solution as well as an archiving one? We still rely upon offsite tape copies for DR purposes, and I would imagine we'd still need up-to-date copies of our EV data offsite also. Any ideas on how to feasibly achieve this? Cheers, Richard
Hey, I agree with much of what you're thinking. On paper the license costs for some of the ONTAP features seem very high - others as well as Snapvault. It can make it quite difficult to justify expenditure and ROI.

I raised the question last year with NetApp about the possibility of at least including a monthly snapshot schedule on the Snapvault secondary. I was told that this was a limitation in the core feature set of ONTAP, which could only support hourly, daily, and weekly. I have since set up a small script to take snapshots on various volumes on the first Sunday of each month (a sketch follows at the end of this post). I actually take "snapvault snapshots" as opposed to regular ones. I'm not entirely sure what the difference is, but I can do "snapvault snap create volname sv_monthly" because I have a "snapvault snap sched" of "create volname sv_monthly 6@-" to retain 6 snapshots prefixed "sv_monthly".

Personally I'm not a big fan of Protection Manager (DFPM), and before we had to have it (for OSSV management and for Snapdrive/Snapmanager tie-in) I could have happily done without it. I still have many Snapvault relationships that were set up and are maintained outside of DFPM. I'm a storage admin and more than comfortable with command-line operations and scripting, but I can see the benefit of these tools for admins without this level of confidence. I do like the way that Operations Manager reports on failed backups and Snapvault relationship lag times, though.

The positives, for me, regarding Snapvault:
* I can create an incremental backup of a volume with 12 million files in 3 hours - it took over 4 days to go to tape
* I can create an incremental LUN backup of an NTFS filesystem containing 3 million files in just a few minutes - it took over 12 hours to go to tape
* I can use it in conjunction with OSSV (and Protection Manager) to back up hosts in a fraction of the time
* Unlike the standard snapshot feature, I can schedule exactly when the updates take place and how many of each type to retain
* I can present the backup data back to the original host (or any other)

The incremental nature of transfers is the big win - obviously it's what snapmirror does also. Not sure if any of that helps or makes sense! Richard
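The script itself is trivial - a sketch, assuming ssh access to the filer and placeholder filer/volume names (adjust for rsh if that's what you use):

    #!/bin/sh
    # One-time setup: retain 6 snapshots prefixed sv_monthly, created on
    # demand only (the trailing "-" means no automatic schedule):
    ssh filer1 snapvault snap sched sv_vol sv_monthly 6@-

    # Cron this for the first Sunday of each month:
    ssh filer1 snapvault snap create sv_vol sv_monthly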
Sounds about right to me. In what way do you consider this not to be a viable solution? A 90-day backup period is non-standard unless you use Protection Manager, but it can also be scripted fairly easily. You can present the secondary data to a host for reading, or you can do a "snapvault restore" operation to put the entire qtree back on the primary (example below). Richard
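For example (7-Mode syntax, run on the primary; filer, volume, and qtree names are placeholders):

    snapvault restore -S sec_filer:/vol/sv_vol/mytree /vol/vol1/mytree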
I was just wondering if anyone else had seen this and can possibly offer an explanation. I have a 4TB Snapvault secondary volume that had approximately 5 months of backup data inside, maybe about 15 snapshots in total. I had snap autodelete enabled on this volume, with the default values I think, and the target free space set at 10%. It was my understanding that once the volume reached 98% capacity ONTAP would start removing the oldest snapshots until the free space was back above 10%.

Unfortunately, last night, autodelete got carried away and removed all but the most recent snapshot. Free space is now at 55%. I've since switched off autodelete and I will have to go back to manually deleting snapshots to free up space. Snap reserve is set at 0%. ASIS is enabled. I'm quite sure that it wasn't necessary to delete all of the snapshots just to get above 10% free. I'm not sure if I'm hitting an obscure bug or a feature! ONTAP 7.3.2 on a FAS3140. Cheers, Richard
I would tend to agree with this, but the big kicker for us is that we use MetroCluster, and that is currently only supported with FCAL drives. It's also about more than just what kind of storage your files are sitting on: there are issues around not constantly backing up the same files over and over again, and around compliance.
Got OSSV working great on a couple of Windows and Linux clients. However, I can see a management nightmare looming if CLI-only configuration were to continue en masse (100+ clients).

For various reasons I've steered away from using Protection Manager for setting up regular Snapvault relationships. The limited number of relationships and their nature make me more comfortable managing these on the CLI. So, I'm not very familiar with Protection Manager.

I managed to use it to set up a test OSSV relationship, manually specifying a secondary backup volume because we don't have Provisioning Manager. However, when it came to setting up a second client for OSSV, it seems I cannot specify the same secondary backup volume because it's already in use by the first OSSV client. I'm trying to determine if the missing component - Provisioning Manager - is what prevents me from using the same secondary volume to back up multiple clients. I can only select a secondary volume, not even a qtree.

Is Protection Manager crippled for use with OSSV without Provisioning Manager? We don't really have a use for Provisioning Manager otherwise. Cheers, Richard
I can't speak for SnapMirror, but since I started this thread some months ago we're now using Snapvault on our VMware volumes. I still use VIBE to take consistent snapshots on the primary, and the backup to tape now just uses NDMP to pull the secondary volume to tape. This works because the contents of the secondary volume are always consistent. You can't get a consistent NDMP dump from the primary, since that uses a non-consistent snapshot of the live data. Richard
I've just gone through some of this with NetApp support. Are the primary paths that are not showing up in DFM non-qtrees? If they are, the snapvault path needs to be /vol/volname/- in order for DFM to recognize them. It's kind of difficult to describe here, but you can correct this on the secondary by running:

snapvault modify -S <pri_filer>:/vol/volname/- /vol/volname/qtree

then:

snapvault start -r /vol/volname/qtree

You'll then need to release the old snapvault relationships on the primary (the ones without the /-):

snapvault release /vol/volname <sec_filer>:/vol/volname/qtree

Hope this helps some. Richard
For anyone remotely interested or still following this thread, I have managed to get a SM2T restore working. It appears that I didn't make the (restricted) restore volume big enough. The volume originally had ASIS switched on, and I had made the target volume big enough to hold the contents, but it seems that it needs to be at least as big as the original volume itself, not just its contents. I can't be sure of the exact requirement, but once I made the target volume at least as large as the original, the restore worked (a quick pre-flight check is sketched below). Cheers, Richard
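A rough pre-flight check for anyone else hitting this (7-Mode syntax; volume, aggregate, and size are placeholders):

    vol size source_vol               # on the source filer: note the volume size
    vol create restore_vol aggr1 4t   # make the target at least that large
    vol restrict restore_vol          # SM2T needs the target restricted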
Hello guys, I hate to have to drag an old(ish) thread back into play, but I haven't been able to locate anything else. We're backing up a volume with around 12 million files using SM2T and it works fine - what would have taken 4 days is now taking 5 hours. We can live without being able to do granular restores, since this is for an offsite DR copy only.

Has anyone tried a restore from an SM2T image? The documentation just says "perform a normal NDMP restore from NetBackup". Well, I tried selecting /vol/volname to restore and NetBackup says "NDMP restore failed from path /vol/volname". I've tried specifying an alternate restore path, but every combination I've tried gives the same result. I am trying to restore to a different filer from the original. I will go ahead and open a case with NetApp if nobody has any ideas... Cheers, Richard
If you want to do nightly Snapvault sessions, work out how much your data changes from one night to the next (snap delta, like you said). Once you have an estimate of the amount of changing data, you can work out how long it would take to transmit that data from one filer to another over your (presumably) WAN link (a worked example follows this post). This is a MINIMUM amount of time and depends on how much your data typically changes. You then need to decide whether that amount of time is short enough - if it takes over 24 hours, obviously not. You also need to consider whether you can run Snapvault traffic across your WAN link for the duration of the transfer without impacting anything else. Cheers, Richard
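A back-of-the-envelope example with made-up numbers - say a 50 GB nightly delta and 100 Mbit/s of WAN bandwidth available to Snapvault:

    # 50 GB * 8 * 1000 = 400,000 Mbit; / 100 Mbit/s = 4,000 s; / 60 = ~66 min
    echo $(( 50 * 8 * 1000 / 100 / 60 ))   # prints 66 - the theoretical floor, in minutes

Real transfers will be slower once protocol overhead and competing traffic are factored in.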
I really doubt it and it's highly likely it won't ever be. I think it's probably too niche to get Snapdrive support. You may be fine with just letting the filer do the snapshots of the volumes with the GFS LUNs. We moved away from GFS in favor of NFS. Richard