I'm doing exactly the same thing here, and that report does contain whole-volume SnapVault entries from multiple filers, including lag time and status. You need to make sure you're running the report against a DFM group that contains all of the necessary volumes.
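For example (the report and group names here are just placeholders for whatever you have defined), something along these lines should scope the output to the members of the group:

# dfm report view your-snapvault-report AllBackupVolumes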
To me, those disk read and write operations and network I/O do not look large enough to suggest that the filer is busy servicing CIFS. I also don't think you have dedupe, SnapVault or NDMP running, as these would show large disk and network I/O. I would agree with the earlier poster that a perfstat collection would help NetApp determine what's going on. I have seen times when deleting large files or snapshots has caused a filer to go very busy on disk I/O when nothing else is seemingly going on. Is it just write performance that is bad, or is it affecting reads too? If you have one filer acting OK and the other not, I would suggest looking at network stats as well. Check outputs from tools like "netdiag" and "ifstat" to see if there are any network receive or transmit errors. Mismatches in network speed/duplex or flow control can also have a very detrimental effect. For example:
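# ifstat e0a

(the interface name is just an example) and look through the RECEIVE and TRANSMIT sections for non-zero error or CRC counters, then

# netdiag -v

to have the filer sanity-check speed/duplex and flow control settings against the switch. Hope this helps somewhat... Richard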
There are aggregate size limitations, depending upon filer model and ONTAP version. Having separate aggregates also lets you separate workloads (volumes) that demand a lot of disk I/O. If your aggregate is mirrored you should consider that deltas are likely to be larger and rebuild times longer; otherwise there seems to be little perceived benefit in having multiple aggregates. If you want to check whether one aggregate is carrying most of the load, something like the following may help:
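# stats show aggregate:aggr0:total_transfers

(7-mode counter syntax, and the aggregate name is just an example) compared across your aggregates will show where the I/O is going. Richard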
I'm sure you're likely to get many answers to this one... All of your disk shelves should be "multipathed", which means connected to more than one HBA. Presumably the disks in the loop or stack in question are connected to HBAs 2b and 3b. You will find that some disks have their primary path on one adapter and some on the other; if you type "sysconfig -r" you should see that disks in each raid group appear on different adapters. A quick way to confirm the cabling is below (output is illustrative only):
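# storage show disk -p
PRIMARY  PORT  SECONDARY  PORT  SHELF  BAY
-------  ----  ---------  ----  -----  ---
2b.16    A     3b.16      B         1    0
3b.17    B     2b.17      A         1    1

If every disk shows both a primary and a secondary path, on different HBAs, the shelves are multipathed. Richard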
"vol move" won't work between different aggregate types anyway. Even if it could you can't avoid a reboot if you relocate the root vol/aggregate. Richard
Let me see if I can add my experiences and see if they are useful to you.

OSSV does not do well in a "bare metal" restore situation. You cannot easily use it to put a server back exactly to the point it was at when it was backed up. However, I would recommend that you test the feasibility of this before coming to your own conclusion. I have seen some articles out there (somewhere) that explain how to do a system state restore onto unlike hardware. OSSV does indeed have the equivalent of an "agent", but it is very lightweight.

To do single file restores you can either access the backup volume via CIFS as you said, or via DFM/Protection Manager if you have it. If you don't have Protection Manager then managing your OSSV backups can be done, but there is a lot more effort required to set it up and monitor it. Restoring from SnapVault is pretty much the same - if the relationship is managed via Protection Manager then restores are easy. Even if the relationship is not managed via Protection Manager, you can use the Operations Manager interface to perform a single file restore. You can get as granular as you like.

Most of NetApp's data protection software makes no provision for tape backup. When your data is on offline media such as tape, you have to have some mechanism for manually restoring the backup to a temporary location and then making the data available via CIFS or NFS. In this case I don't know of any easy way to use PM to assist you in the restore - it comes down to whatever software you are using to write to tape. It is particularly difficult when working with the SnapManager products, since usually LUNs are involved and you have no choice but to restore the entire LUN from tape.

Here, I'm hoping that we can move entirely to a disk-based backup, because having a hybrid environment doesn't work that well. We still rely upon a tape copy for DR purposes, but in that case we are going to have many more challenges involved.

As far as management is concerned, each software piece seems to have some kind of quirk to work around. SME seems to be less picky than SMSQL and SMSP. I have not used SMVI since version 1.0, so I don't know enough about that to comment.

To illustrate a restore outside of Protection Manager (all names invented), a single-qtree pull from the secondary looks like this, run on the primary:
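# snapvault restore -S secondary:/vol/sv_vol/qtree1 /vol/data/qtree1

Hope this helps. Feel free to contact me directly if I can offer any other insights. Cheers, Richard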
How do you know how many questions DON'T ever make it to the community forums because of training and/or manuals or other research? Some places can't afford to train their staff, and some users are not familiar enough with the products or the technology to even know where to start in the manual.
Adaikkappan Arumugam wrote: Also the number of OSSV relationships per secondary volume is 50. But this is configurable. Sorry to dig up an old thread here, but you seem to be the resident DFM expert. I'm running into a DFM limit of 50 SnapVault relationships for a single secondary volume. This is SnapVault rather than OSSV, but I wonder if it's the same limit. Are you aware of how to increase the limit from 50? I know it's not an ONTAP limit. I have a tech case open anyway. I suspect the option in question is something like the following (can anyone confirm?):
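# dfm option list pmMaxSvRelsPerSecondaryVol
# dfm option set pmMaxSvRelsPerSecondaryVol=100

(I'm going from memory on the option name, so please verify before relying on it.) Thanks! Richard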
Sorry to dig up an old post here, but did you ever get any replies or further information about backing up AD using OSSV? Thanks, Richard
I could be wrong, but it was my understanding that the LUN type specified when creating a LUN is used to account for the fact that different partition table types start the first partition at a particular offset. I think ONTAP aligns the LUN correctly according to the LUN type, so that you don't need to be concerned about alignment. However, if the partition table on the LUN is not of the type it was configured for, chances are the LUN is not properly aligned on disk. The ramification is that, in the worst case, the filer has to fetch two disk blocks for every single file-system block requested, using more IOPS than necessary. This usually manifests itself more on workloads with random reads than with sequential reads. Personally I would recommend that you create a new LUN with the correct type and migrate your data from the old to the new, e.g. (size, type and path are only examples):
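# lun create -s 100g -t windows /vol/vol1/newlun

then copy the data across at the host level and retire the old LUN. HTH, Richard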
I have found that altering these dfm options also affects the name of the qtree that is created, which may or may not be to your liking. Regarding NDMP backups of SnapVault secondary volumes, I have found that it is not necessary to specify a particular snapshot to back up, because the "active file system" always contains the most recent backup anyway. You just point your backup software at the entire volume. I guess an added complication could be if a SnapVault transfer is in progress while your dump is running, so it may be worth a quick check on the secondary first (output is illustrative):
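# snapvault status
Source               Destination             State        Lag       Status
filer1:/vol/data/q1  filer2:/vol/sv_data/q1  Snapvaulted  02:15:00  Idle

If Status shows Transferring rather than Idle, you may want to delay the NDMP backup. Richard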
The "usual" way to achieve this is - indeed - to Snapvault to one controller and then use Snapmirror to create a second copy. As an example, the included Protection Manager policy "Back up, then mirror" does exactly this. I'm sure it may be possible to hack it to achieve this by scripting but you would be looking for trouble, IMHO. Richard
I tried this, and although deleted objects are listed in the reports, dfm does not generate any historical graphs for them. It seems that either the data has been purged from the database or the functionality to graph deleted objects simply isn't there. Thanks, Richard
Anyone know a way to generate graphs or reports on deleted dfm objects? I deleted (and rebuilt) an aggregate over the weekend. I can find the deleted object in dfm (via GUI and CLI), and I can list the latest statistics on all aggregates using "dfm report view -H aggregates-capacity", but there doesn't seem to be a way to pull back any further info on a deleted object. I'm guessing that the stats are still in the database. For reference, what I've tried so far is along these lines (the object ID is an example, and I'm guessing at the graph name):
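# dfm report view -H aggregates-capacity        (shows the deleted aggregate's object ID)
# dfm graph aggregate-usage-vs-total 211

but the graph comes back empty for the deleted object. Any thoughts? Thanks! Richard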
Thanks! I knew about the "dfm" and "dfpm" but not the "dfbm" commands. I'm not too pleased that the report is in list form rather than tabulated but I can work with that. Cheers, Richard
Ops Manager 4.0.1 with Protection Manager. I would like to run a report on the command line to output a list of OSSV primary hosts and paths. I thought this would be easy, since you can get the same report from the web interface and you can even schedule it to run and email you the results. However, I cannot find a command that will run the report and display the results. I'm either looking in the wrong place or doing something else wrong. I can list the available reports, thus (my emphasis):

# dfm report list -A backup
Report Report Type Report Application
---------------------------------------- ------------ --------------------
summary:summary built_in Backup
summary:summary-completed built_in Backup
summary:summary-inprogress built_in Backup
summary:summary-failed built_in Backup
summary:summary-no-status built_in Backup
backup:backups-by-primary built_in Backup
backup:backups-by-secondary built_in Backup
restore:primary-dirs built_in Backup
primary-directories:primary-dirs built_in Backup
pridir-nb:primary-dirs-discovered built_in Backup
pridir-qtrees-nb:primary-dirs-qtrees-discovered built_in Backup
secondary-volumes:secondary-volumes built_in Backup
ndmp-ping:unavailable-agents built_in Backup
ndmp-unauth:unauthenticated-systems built_in Backup
primary-hosts:primary-hosts built_in Backup
primary-hosts:primary-hosts-storage-systems built_in Backup
primary-hosts:primary-hosts-open-system built_in Backup
secondary-hosts:secondary-hosts built_in Backup
schedules built_in Backup
jobs:jobs-1d built_in Backup
jobs:jobs-7d built_in Backup
jobs:jobs-30d built_in Backup
jobs:jobs built_in Backup
jobs:jobs-completed built_in Backup
jobs:jobs-running built_in Backup
jobs:jobs-failed built_in Backup
jobs:jobs-aborting built_in Backup
jobs:jobs-aborted built_in Backup
events:events built_in Backup
events:events-warning built_in Backup
events:events-error built_in Backup
events:events-unack built_in Backup
but:

# dfm report view primary-hosts:primary-hosts-open-system
There is no primary-hosts:primary-hosts-open-system report.

# dfm report view -A backup primary-hosts:primary-hosts-open-system
There is no primary-hosts:primary-hosts-open-system report.

I could use a combination of "dfm report schedule run", "dfm report output list" and "dfm report output view", but that seems like a lot of work for something that should be easy - something along these lines (the schedule name is made up):
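# dfm report schedule run ossv-paths-weekly
# dfm report output list ossv-paths-weekly
# dfm report output view <output-id>

i.e. run the scheduled report, find the resulting output ID, then view it. Anyone? Thanks, Richard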
I don't think there is a definitive answer on this. It's going to depend upon the number of hosts you have, the VMs that are running and their storage I/O demands. In our medium-sized environment here we have gigabit on the hosts and 10GbE on the NetApp side. Beware that even with port trunking, the throughput of any single connection is still limited to the speed of one member port, because of the way port-channel hashing assigns each flow to a single link. For what it's worth, a typical multimode trunk on the filer side looks like this (interface names are examples):
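# vif create lacp vif0 -b ip e0a e0b

With IP-based load balancing, any one client-to-filer conversation still hashes onto a single member link, hence the single-port ceiling per stream. Richard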
Yes, each controller has to have its own root volume, and that has to be on an aggregate owned by that controller. So each controller is assigned 12 of the 24 available disks. What do you want to achieve?
I don't think changing the log file rotation is possible. You don't need to make copying the messages file every week a manual process, though. Just script it on some client system (over CIFS or NFS) to copy and rename the file and place it somewhere else, even back onto the filer somewhere. A minimal sketch, assuming the root volume is NFS-mounted on a client at /mnt/filer (paths and schedule are up to you):
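#!/bin/sh
# keep a dated copy of the filer's /etc/messages file
cp /mnt/filer/etc/messages /archive/messages.$(date +%Y%m%d)

Schedule that weekly from cron and you're done. Richard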
May need some more info here. In a cluster you have two controllers, and each can "own" (be assigned exclusive access to) any disks that it can see. In your screenshot, your 12 "partner" disks are assigned to the other controller; that is why you cannot build an aggregate out of them on the controller you're viewing them from. A NetApp cluster is usually an active/active pair where each controller has its own storage, as well as the ability to access the other controller's storage in the event of a failover. Does your second controller have any aggregates? Un-owned disks can be assigned to either controller, and you can unassign disks from a controller, although this is considered an "advanced" operation. The relevant commands look like this (disk name and owner are examples):
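# disk show -n                      (lists unowned disks)
# disk assign 0a.23 -o controllerB

Richard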
The root option gives the root user on a listed NFS client full privileges on the export. Otherwise, the root user, as you said, effectively gets mapped to a UID corresponding to "nobody", a user that has no special privileges. You would leave root= unset if you want to prevent someone who has root access on a client system from making changes on the filesystem. To be honest, it is not widely used these days. As an illustration (host name invented), an /etc/exports line like the following gives root on adminhost full privileges, while root on every other client is squashed:
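/vol/vol1    -sec=sys,rw,root=adminhost

Richard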