Have you tried the database views feature of DFM 3.7 or later? Create a database user and run dfm database query run against the different views. The TR below gives details on how to access the database: http://media.netapp.com/documents/tr-3690.pdf (see Section 3.1, Scenario 1: Accessing database views through DataFabric Manager's command-line interface (CLI)). Regards, Adai
The event tries to convey that if you un-dedupe this volume, the volume cannot hold the data: the sum of the volume's used space and the space saved by deduplication exceeds the volume's total size. To avoid this, increase the volume size. Regards, Adai
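The sizing check behind that event can be sketched as a quick calculation (a minimal illustration; the function and variable names are mine, not DFM's):

```python
def fits_after_undedupe(total_kb, used_kb, saved_kb):
    """Return True if the volume could hold its data fully un-deduplicated.

    The event fires in the opposite case: when used + saved (the logical
    data size) exceeds the volume's total size.
    """
    return used_kb + saved_kb <= total_kb

# Example: 100 GB volume, 80 GB used, 35 GB saved by dedupe.
# Un-deduplicated, the data would need 115 GB, so the event would fire.
print(fits_after_undedupe(100 * 2**20, 80 * 2**20, 35 * 2**20))  # False
```

Growing the volume until this check passes is exactly the "increase the volume size" remedy above.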
Hi Chris, running dfpm dataset list -R will show the lag value, which you can use to identify the specific relationship. Also, in the NMC, clicking on the connection lists all the relationships of the dataset; the one whose lag value is shown in red with an icon should take no time to identify. Regards, Adai
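If you have many relationships, a tiny script can pick out the laggards from the listing for you. The column layout below is a made-up stand-in for dfpm dataset list -R output, purely for illustration; adjust the parsing to the real output on your system:

```python
# Illustrative only: invented column layout standing in for
# "dfpm dataset list -R" output (relationship id, source, destination, lag).
sample = """\
rel-id  source          destination        lag
101     f1:/vol/a/q1    f2:/vol/sv_a/q1    0:45:00
102     f1:/vol/b/q1    f2:/vol/sv_b/q1    37:12:00
"""

def parse_lag(hms):
    """Convert an H:MM:SS lag string to seconds."""
    h, m, s = (int(x) for x in hms.split(":"))
    return h * 3600 + m * 60 + s

def lagging(text, threshold_seconds):
    """Return relationship ids whose lag exceeds the threshold."""
    rows = [line.split() for line in text.splitlines()[1:]]
    return [r[0] for r in rows if parse_lag(r[3]) > threshold_seconds]

# Default warning threshold is 36 hours (129600 s); only rel 102 exceeds it.
print(lagging(sample, 36 * 3600))  # ['102']
```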
As you can see, your relationships were created using the vfiler interface, not the filer interface. That is why you don't see them in PM. If you create the same relationship so that it appears like this in the filer-interface snapvault status output:

psfona01:/vol/a01/qa01  psfona01:/vol/sv_a01/qa01  Source  11:02:54  Idle

then it will be discovered by DFM. Conversely, if you create a relationship using the filer interface, snapvault status run from the vfiler interface will not return it, and that is what DFM expects: relationships should be created using the filer interface for vfilers as well. Regards, Adai
So, to import a relationship: create a dataset, apply a policy with the desired schedule, and then run the import wizard. Import does not delete the SnapMirror or SnapVault schedules from the filer; all the import wizard does is fill the empty dataset with resources. Yes, you are right: after the import you may want to disable the SM and SV schedules on the filer, or use a PM policy without schedules. Having both the filer schedules and the PM policy schedules means taking extra snapshots (one set by the PM schedules, another by the SM/SV schedules), so if your snapshot retention is high you may soon hit the limit of 255 snapshots per volume.

>Based on this, is it worth running import wizard to external relationships or you are probably better off creating new relationships in PM?

Importing is better: you save the time of a new baseline, network bandwidth, and filer resources.

>Are there any other benefits of importing relationships which I am missing?

No. So after the import, decide which scheduling to go with, PM schedules or filer schedules. Don't run both; remove whichever one you don't want. Regards, Adai
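The snapshot-count concern above is easy to quantify. A rough sketch, with retention counts invented for illustration:

```python
SNAPSHOT_LIMIT = 255  # Data ONTAP limit on snapshots per volume

def snapshots_held(hourly, daily, weekly, monthly):
    """Total snapshots retained under one schedule's retention counts."""
    return hourly + daily + weekly + monthly

# A generous retention scheme: 96 hourly (4 days), 30 daily,
# 12 weekly, 12 monthly.
one_schedule = snapshots_held(96, 30, 12, 12)   # 150 snapshots
both_schedules = 2 * one_schedule               # PM schedules AND filer schedules

print(one_schedule, both_schedules, both_schedules > SNAPSHOT_LIMIT)  # 150 300 True
```

With a single scheduler the volume stays well under 255; running both the PM policy and the filer SM/SV schedules roughly doubles the count and blows past the limit, which is why one of the two should be removed.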
>Does this mean I need to assign a lag warning/error threshold manually for each and every policy?

Yes.

>What would be the impact of assigning all of the relationships to a single replication policy?

A replication policy simply specifies the network to use for replication and the schedule at which the relationships are updated. So whether to change it depends on your RPO and RTO; also, updating all relationships at the same time may put load on the filers. Regards, Adai
PM can manage the type of relationship you are mentioning; the only caveat is that the relationship has to be created using the vfiler0 interface for both vfiler0 and vfiler1, and not the other way around. That is, vfiler0 to vfiler1, where vfiler1 uses the vfiler0 interface to snapvault. Can you get us the output of snapvault status? Regards, Adai
Is the relationship created using the vfiler0 interface for both vfiler0 and vfiler1? PM discovers only those created with the vfiler0 interface, not those created with vfiler1's interface. Regards, Adai
Hi, DFM 4.0 supports SnapVault relationships on the same controller. DFM 3.8 does not recognize/discover a SnapVault source and destination on the same controller, so importing such relationships is not possible there. Regards, Adai
Chris, the SnapMirror lag and error thresholds have to be changed in the DFDRM policy, not in the dfm options. Here are the steps: in the Ops-Mgr Web UI, go to the Disaster Recovery tab, select the Volume SnapMirror Relationship view, and click on the Replication Policy column, as shown in the attached picture. This opens the Edit Policy page, where you will find the options for the lag warning and error thresholds; by default they are 1.5 days and 2 days respectively. Attached are screenshots of the same. Regards, Adai
Hi, below are the steps.

1. Create a dataset.
2. Add the policy, with or without schedules, based on your requirement. You will now have an empty dataset with a policy attached.
3. Go to the External Relationships page (assuming the relationships are already discovered).
4. Select the relationships and use the import wizard to import them.
5. After the import is done:
   - If the DFM version is 3.8 or later, the snapmirror.conf entries are not removed for VSM and QSM relationships.
   - For SnapVault relationships, irrespective of DFM version, the snapvault snap sched entries are not removed from the filer. Remove them using the filer CLI command snapvault snap unsched.

The import wizard help in the NMC gives details about the same. During import, the secondary (destination) volume is checked for the following in order to be suitable:

- For SnapVault/QSM relationships, the destination volume has to be 1.32x the source size.
- For VSM, the source and destination volume languages must be the same.

And the set below applies to all types of relationships:

- The volume must not have exceeded the Ops-Mgr volume full or nearly-full thresholds.
- The containing aggregate must not have exceeded the Ops-Mgr overcommitment threshold or the aggregate full/nearly-full thresholds.
- The volume must not have exceeded the inode full or nearly-full thresholds.

Regards, Adai
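The per-relationship-type checks above can be sketched as a small function (a rough illustration of the rules stated here, not PM's actual code; the Ops-Mgr threshold checks are omitted):

```python
def import_eligible(rel_type, src_size, dst_size, src_lang=None, dst_lang=None):
    """Sketch of the import wizard's destination-volume suitability checks.

    Sizes can be in any consistent unit. The volume/aggregate/inode
    threshold checks that apply to all relationship types are omitted.
    """
    if rel_type in ("snapvault", "qsm"):
        # Destination volume has to be at least 1.32x the source size.
        return dst_size >= 1.32 * src_size
    if rel_type == "vsm":
        # Source and destination volume languages must match.
        return src_lang == dst_lang
    raise ValueError("unknown relationship type: %s" % rel_type)

print(import_eligible("snapvault", src_size=100, dst_size=120))            # False
print(import_eligible("snapvault", src_size=100, dst_size=140))            # True
print(import_eligible("vsm", 100, 100, src_lang="en_US", dst_lang="C"))    # False
```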
Hi, I only said that when you import, the SnapMirror and SnapVault schedules (the filer ones) are not deleted. I never said the schedules are imported into the policy. Regards, Adai
Hi Scott, yes. Make sure you have the following done:

1. The OSSV host and the destination filer are added to DFM with the NDMP credentials set.
2. You have a new or existing dataset with a Remote Backup policy, or a Backup policy without a local backup schedule.
3. The relationships are discovered in the External Relationships page.

Then go to the External Relationships page and import them, or use the CLI: dfpm dataset import. Regards, Adai
Hi Pasha, there is an FAQ on DFM sending traps to third-party tools like HP OpenView; it should be applicable to Tivoli too. Here is the link: http://now.netapp.com/NOW/knowledge/docs/DFM_win/rel40/html/faq/index.shtml#_7.12 Regards, Adai
>I am in the process of importing ~40 VSM relationships into Protection Manager 3.8.1 for monitoring only. I created an empty dataset and a policy with no schedule. For some reason many of the relationships are not importing and being removed from the "External Relationships" tab.

Didn't the conformance checker say anything? Can you run the dfpm dataset import CLI with -D (dry run) and post the output?

>Is the reason for this logged somewhere? I checked the dfpm.log and didn't see anything.

You may have to check conformance.log and dfmserver.log.

>I am also seeing the datasets reporting a protection status of "Baseline Failure: All baseline transfers failed because the dataset is non-conformant". However, the dataset itself is reporting to be conformant and the transfers are working. What causes this error and how can I clear it?

It is a known issue: when an external relationship is imported into a dataset, the protection status is shown as Baseline Failure until a scheduled or on-demand backup is run. To get rid of it, just run one on-demand or one scheduled backup from PM.

Regards, Adai
After the import you don't have to do anything. If the SM relationship had a snapmirror.conf entry, you might have to remove it if you don't want both the protection policy and snapmirror.conf running schedules; based on your needs, disable whichever one you don't want. After importing a relationship, PM does not remove the snapmirror.conf entry or the snapvault snap sched entries for SnapMirror or SnapVault relationships. It is up to the user to decide whether to go with filer scheduling or Protection Manager scheduling. Note that if you go with filer scheduling, the backups will not be registered with PM. Regards, Adai
Hi Pasha, when you add your primary to a dataset and attach a policy, PM will always try to create a destination volume from the attached resource pool, or, if a volume was assigned on the destination, will try to use it after its conformance checks. The procedure to import an external relationship is: create an empty dataset with the policy attached and import into it, or use an existing dataset with the policy attached. Regards, Adai
I am not sure what you mean by a bypassed disk, but there are canned reports available in Ops-Mgr for disks:

# dfm report list | grep -i disk
disks                       summary of all disks
disks-aggr                  summary of disks attached to aggregates
disks-broken                list of all failed disks
disks-spare                 list of all spare disks
disks-500                   list of all 500 GB disks
disks-320                   list of all 320 GB disks
disks-300                   list of all 300 GB disks
disks-250                   list of all 250 GB disks
disks-144                   list of all 144 GB disks
disks-136                   list of all 136 GB disks
disks-72                    list of all 72 GB disks
disks-36                    list of all 36 GB disks
disks-18                    list of all 18 GB disks
disks-9                     list of all 9 GB disks
disks-4                     list of all 4 GB disks
disks-2                     list of all 2 GB disks
disks-performance-summary   performance summary of Disks
#

And these are the fields available in the Disk catalog, with which you can create a custom report:

# dfm report catalog list Disk
Disk Catalog
Default Display Tab: PhysicalSystems
Fields:
Field    Default Name    Default Format
I have run through everything I knew. Check whether there are any errors in dfmmonitor.log, and that the credentials for the filer are fine. Regards, Adai
>I have checked my policy (please see below), but I still have older snapshots than the weekly retention count. I assume this is because my volume is not short on space and the autodelete settings do not need to delete any files.

No. The deletion of expired backups is done in two stages by PM. The conformance engine runs on the dataset (by default every hour) and marks for deletion the expired backups that are beyond the retention count and duration; the snapshot monitor then actually deletes the snapshots (by default every 30 minutes). So within about 1.5 hours the expired backups should be deleted. If they are not, I suspect monitoring has stopped. Can you get the output of dfm about? (You can sanitize the customer-specific details.) Regards, Adai
The setting is at the dataset level, per dataset, and is set from the dfpm CLI. Use dfpm policy node get to see the current values, and dfpm policy node set to change any value you like.

# dfpm policy node get Backup
Node Id:                    1
Node Name:                  Primary data
Hourly Retention Count:     2
Hourly Retention Duration:  86400
Daily Retention Count:      2
Daily Retention Duration:   604800
Weekly Retention Count:     1
Weekly Retention Duration:  1209600
Monthly Retention Count:    0
Monthly Retention Duration: 0
Backup Script Path:
Backup Script Run As:
Failover Script Path:
Failover Script Run As:
Snapshot Schedule Id:       46
Snapshot Schedule Name:     Sunday at midnight with daily and hourly
Warning Lag Enabled:        Yes
Warning Lag Threshold:      129600
Error Lag Enabled:          Yes
Error Lag Threshold:        172800

Node Id:                    2
Node Name:                  Backup
Hourly Retention Count:     0
Hourly Retention Duration:  0
Daily Retention Count:      2
Daily Retention Duration:   1209600
Weekly Retention Count:     2
Weekly Retention Duration:  4838400
Monthly Retention Count:    1
Monthly Retention Duration: 8467200
#

Regards, Adai
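The retention durations and lag thresholds in that output are all in seconds; a quick conversion makes them readable (the values below are taken from the sample output above):

```python
DAY = 86400  # seconds in a day

# Durations from the "Primary data" and "Backup" nodes above, in seconds.
durations = {
    "hourly (primary)":  86400,
    "daily (primary)":   604800,
    "weekly (primary)":  1209600,
    "daily (backup)":    1209600,
    "weekly (backup)":   4838400,
    "monthly (backup)":  8467200,
    "warning lag":       129600,
    "error lag":         172800,
}

for name, secs in durations.items():
    # e.g. "weekly (primary)    14.0 days"
    print("%-18s %5.1f days" % (name, secs / DAY))
```

So, for example, the weekly retention duration of 1209600 is 14 days, and the default lag thresholds of 129600 and 172800 are the 1.5 and 2 days mentioned earlier in the thread.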
If the snapshots were taken by PM and they are beyond both the retention duration and the retention count, they are automatically deleted by PM. PS: Both the duration and the count have to expire, not just one, for a snapshot to be eligible for deletion by PM. Regards, Adai
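The "both limits must expire" rule can be sketched as a small predicate (an illustration of the rule as stated, not PM's actual implementation):

```python
def eligible_for_deletion(age_seconds, retained_newer, count, duration):
    """A PM-taken snapshot is deletable only when BOTH limits are exceeded:
    it is older than the retention duration AND at least `count` newer
    retained copies already exist.
    """
    return age_seconds > duration and retained_newer >= count

DAY = 86400

# 15 days old, but only 1 newer weekly copy against a count of 2: kept.
print(eligible_for_deletion(15 * DAY, 1, count=2, duration=14 * DAY))  # False
# 15 days old and 2 newer copies exist: eligible for deletion.
print(eligible_for_deletion(15 * DAY, 2, count=2, duration=14 * DAY))  # True
```

This is why a snapshot that has merely aged past its duration, or is merely in excess of the count, still survives until the other condition is met as well.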