Overall status is an OR of the resource, protection, conformance, and space statuses. Since the dataset is not protected, the status can't be coming from conformance or protection; it must be coming from either the resource or the space status. Below is the same setup I created.

Regards
adai
Hi Niels,

I tried this. Created a dataset with a Backup policy.

C:\>dfpm dataset list -x largeQtree
Id:                                  362
Name:                                largeQtree
Protection Policy:                   Back up
Application Policy:
Description:
Owner:
Contact:
Volume Qtree Name Prefix:
Snapshot Name Format:
Primary Volume Name Format:
Secondary Volume Name Format:
Secondary Qtree Name Format:
DR Capable:                          No
Requires Non Disruptive Restore:     No
Node details:
    Node Name:                       Primary data
    Resource Pools:                  priRp
    Provisioning Policy:             thinProvNas
    Time Zone:
    DR Capable:                      No
    vFiler:
    Node Name:                       Backup
    Resource Pools:                  secRp
    Provisioning Policy:
    Time Zone:
    DR Capable:                      No
    vFiler:

C:\>dfpm dataset list -m largeQtree
Id   Node Name     Dataset Id  Dataset Name  Member Type  Name
---- ------------- ----------- ------------- ------------ -------------------------------------------------
363  Primary data  362         largeQtree    volume       fas-sim-1:/largeQtree
371  Backup        362         largeQtree    volume       fas-sim-2:/largeQtree_backup_fasxsimx1_largeQtree

C:\>dfpm dataset list -R largeQtree
Id   Name        Protection Policy  Provisioning Policy  Relationship Id  State        Status  Hours  Source                       Destination
---- ----------- ------------------ -------------------- ---------------- ------------ ------- ------ ---------------------------- ----------------------------
362  largeQtree  Back up            thinProvNas          375              snapvaulted  idle    0.1    fas-sim-1:/largeQtree/two    fas-sim-2:/largeQtree_backup_fasxsimx1_largeQtree/two
362  largeQtree  Back up            thinProvNas          377              snapvaulted  idle    0.1    fas-sim-1:/largeQtree/four   fas-sim-2:/largeQtree_backup_fasxsimx1_largeQtree/four
362  largeQtree  Back up            thinProvNas          379              snapvaulted  idle    0.1    fas-sim-1:/largeQtree/one    fas-sim-2:/largeQtree_backup_fasxsimx1_largeQtree/one
362  largeQtree  Back up            thinProvNas          381              snapvaulted  idle    0.1    fas-sim-1:/largeQtree/three  fas-sim-2:/largeQtree_backup_fasxsimx1_largeQtree/three
362  largeQtree  Back up            thinProvNas          383              snapvaulted  idle    0.1    fas-sim-1:/largeQtree/-      fas-sim-2:/largeQtree_backup_fasxsimx1_largeQtree/largeQtree_fas-sim-1_largeQtree
C:\>

Snap list on the source volume after the baseline:

fas-sim-1> snap list largeQtree
Volume largeQtree
working...

  %/used       %/total  date          name
----------  ----------  ------------  --------
 20% (20%)    0% ( 0%)  Apr 19 05:45  fas-sim-2(0099931872)_largeQtree_backup_fasxsimx1_largeQtree_largeQtree_fas-sim-1_largeQtree-src.0 (snapvault)
 35% (22%)    0% ( 0%)  Apr 19 05:45  fas-sim-2(0099931872)_largeQtree_backup_fasxsimx1_largeQtree_three-src.0 (snapvault)
 45% (22%)    0% ( 0%)  Apr 19 05:44  fas-sim-2(0099931872)_largeQtree_backup_fasxsimx1_largeQtree_four-src.0 (snapvault)
 52% (22%)    0% ( 0%)  Apr 19 05:44  fas-sim-2(0099931872)_largeQtree_backup_fasxsimx1_largeQtree_one-src.0 (snapvault)
 58% (20%)    0% ( 0%)  Apr 19 05:44  fas-sim-2(0099931872)_largeQtree_backup_fasxsimx1_largeQtree_two-src.0 (snapvault)
fas-sim-1>

Snap list on the destination volume:

fas-sim-2> snap list largeQtree_backup_fasxsimx1_largeQtree
Volume largeQtree_backup_fasxsimx1_largeQtree
working...

  %/used       %/total  date          name
----------  ----------  ------------  --------
 19% (19%)    0% ( 0%)  Apr 19 05:46  fas-sim-2(0099931872)_largeQtree_backup_fasxsimx1_largeQtree-base.2 (busy,snapvault)
fas-sim-2>

SnapVault status:

fas-sim-2> snapvault status
Snapvault is ON.
Source                           Destination                                                                            State        Lag       Status
fas-sim-1:/vol/largeQtree/four   fas-sim-2:/vol/largeQtree_backup_fasxsimx1_largeQtree/four                             Snapvaulted  00:03:15  Idle
fas-sim-1:/vol/largeQtree/-      fas-sim-2:/vol/largeQtree_backup_fasxsimx1_largeQtree/largeQtree_fas-sim-1_largeQtree  Snapvaulted  00:02:23  Idle
fas-sim-1:/vol/largeQtree/one    fas-sim-2:/vol/largeQtree_backup_fasxsimx1_largeQtree/one                              Snapvaulted  00:03:15  Idle
fas-sim-1:/vol/largeQtree/three  fas-sim-2:/vol/largeQtree_backup_fasxsimx1_largeQtree/three                            Snapvaulted  00:03:14  Idle
fas-sim-1:/vol/largeQtree/two    fas-sim-2:/vol/largeQtree_backup_fasxsimx1_largeQtree/two                              Snapvaulted  00:03:16  Idle
fas-sim-2>

Now I did a Protect Now.

fas-sim-1> snap list largeQtree
Volume largeQtree
working...

  %/used       %/total  date          name
----------  ----------  ------------  --------
 26% (26%)    0% ( 0%)  Apr 19 05:58  dfpm_base(largeQtree.362)conn1.0 (snapvault,acs)
 38% (20%)    0% ( 0%)  Apr 19 05:57  2012-04-20_0022+0530_daily_largeQtree_fas-sim-1_largeQtree_.-.four.one.three.two
fas-sim-1>

fas-sim-2> snap list largeQtree_backup_fasxsimx1_largeQtree
Volume largeQtree_backup_fasxsimx1_largeQtree
working...

  %/used       %/total  date          name
----------  ----------  ------------  --------
 18% (18%)    0% ( 0%)  Apr 19 05:59  fas-sim-2(0099931872)_largeQtree_backup_fasxsimx1_largeQtree-base.0 (busy,snapvault)
fas-sim-2>

So I am trying to understand what you are doing. Are you using QSM instead of SV? Only in that case does each qtree require one base snapshot.

Regards
adai
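P.S. If it helps to confirm which replication technology is actually in play on your controllers, both of these commands exist on 7-Mode systems (output omitted here):

fas-sim-2> snapvault status      # lists SnapVault relationships, as shown above
fas-sim-2> snapmirror status     # would list QSM/VSM relationships instead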
Hi Yaron,

Based on my experience:

1) What is more commonly used when defining the SM relationship: host IPs or host names? Host names, as IPs keep changing.
2) When using host names, do you use fully qualified names or short host names? Short names.
3) How do you resolve the host address, DNS or hosts files? DNS.

And another question, less related to our survey here 🙂

4) When SM is working with a dedicated interface, is it mandatory to add the "multi" notation to snapmirror.conf? It doesn't have to be "multi"; it can even be "failover" too. See the sketch below.
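For reference, a minimal 7-Mode /etc/snapmirror.conf sketch showing both connection modes; the host names, volume names, and IP addresses here are made up for illustration:

# /etc/snapmirror.conf on the destination system
# "multi" load-balances transfers across both address pairs:
conn1=multi(10.1.1.10,10.2.1.10)(10.1.2.10,10.2.2.10)
# "failover" prefers the first pair and falls back to the second:
conn2=failover(10.1.1.10,10.2.1.10)(10.1.2.10,10.2.2.10)
# connection:source_volume  destination:volume  arguments  schedule (minute hour day-of-month day-of-week)
conn1:vol_src dst-filer:vol_dst - 0 23 * *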
Hi Todd,

Happy to help you. I still strongly feel this is a bug: as Pete said, conformance should have warned you about the downstream relationship creation when you finished importing the leg A>B. I am filing the bug for the same. Can you add your case to bug 594423?

The same will also happen in the case of a fan-out topology, i.e. A>B and A>C, and there it is a little tricky, as we can't use the same approach as for A>B>C (importing B>C first and A>B later). The only way I can think of is to suspend the dataset, import either of A>B or A>C, and once both are completed, resume the dataset. See the P.S. below for a sketch.

Regards
adai
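P.S. A minimal sketch of that suspend/import/resume sequence from the DFM CLI, assuming a dataset named fanOutDs (the name is made up, and the import step is left as a placeholder since it goes through the import wizard/CLI):

C:\>dfpm dataset suspend fanOutDs
... import the A>B and A>C legs through the import wizard/CLI ...
C:\>dfpm dataset resume fanOutDs

Suspending stops the conformance engine from reacting to the first import before the second leg is in place.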
Hi,

First let me give the reason why you faced this. In your case you have a cascade topology of A>B>C, and you tried to import the relationship from A>B. PM created the relationship from B>C for the following reason: this is the way PM's conformance engine is designed. It finds that, as per the dataset, every member on the primary needs to have a relationship on nodes B and C. So when you import the leg from A>B, conformance kicks in as soon as you finish the import wizard/CLI, finds that there is no relationship from B>C, and goes ahead and creates it, provided you have a resource pool or a volume attached to node C.

I agree that PM should have warned about this so it could be avoided. If you import the leg from B>C first, PM conformance will not do this, because there is no member on the primary against which to check whether the secondary (B) and tertiary (C) relationships need to be created.

Regards
adai
Hi Reid,

The reason why one is not able to import an existing external SV relationship into a VMware dataset is that in a VM dataset only virtual objects (like a VM, datastore, or datacenter) can be added to the primary of the dataset. When you try to import an external relationship, which is basically a volume or qtree, there is no VM-to-storage mapping that says which VM or datastore is hosted on that volume. Also, as you said, VM datasets are app-consistent and do named-snapshot transfers, but that is not the reason for not supporting import of a relationship: the named snapshots are created by HS by taking a VMware snapshot followed by NetApp snapshots and registering the same with PM, which could very well be done with named snapshots too. It is basically the VM-to-storage mapping, and the QA qualification of the same, that makes it unsupported.

Regards
adai
Hi,

In fact you can even add an entire filer; this is called indirect referencing, though at the end of the day the relationships are created at the qtree or volume level, depending on the replication technology. When an entire filer is added to the primary of a dataset, PM knows all the volumes and their contained qtrees on that filer, and once you commit your dataset, PM kicks off creating a relationship for each of them as per the technology (VSM/QSM/SV). PM takes its data from the DFM db, which discovers new volumes and qtrees once every 15 minutes by default. Conformance runs on the dataset once every 1 hour by default; it checks the primary members down to their qtrees and volumes (irrespective of whether the direct member of the dataset is a volume, aggregate, or filer) and checks the secondary to see if there is a corresponding relationship; if not, it kicks off create-relationship jobs. This is one of the main jobs of the conformance engine. See the P.S. below for an example.

Regards
adai
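P.S. As a hedged illustration of indirect referencing (the dataset name and controller name are made up), adding a whole controller to the primary of a dataset looks something like this from the DFM CLI:

C:\>dfpm dataset add wholeFilerDs fas-sim-1
C:\>dfpm dataset list -m wholeFilerDs

The member list then resolves down to the individual volumes and qtrees that conformance will create relationships for.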
Hi Niels,

Yes, you can do it. For each qtree on the primary, SnapVault creates a base snapshot; on the first update, all of them are coalesced into one base snapshot. Below is an example.

fas-sim-1> qtree status OneThousand
Volume       Tree   Style  Oplocks  Status
-----------  -----  -----  -------  ------
OneThousand         unix   enabled  normal
OneThousand  one    unix   enabled  normal
OneThousand  three  unix   enabled  normal
OneThousand  two    unix   enabled  normal
fas-sim-1>

After the SnapVault Start/Create Relationship job:

fas-sim-1> snap list OneThousand
Volume OneThousand
working...

  %/used       %/total  date          name
----------  ----------  ------------  --------
 21% (21%)    0% ( 0%)  Apr 15 11:31  fas-sim-2(0099931872)_OneThousand_backup_one-src.0 (snapvault)
 36% (23%)    0% ( 0%)  Apr 15 11:31  fas-sim-2(0099931872)_OneThousand_backup_OneThousand_fas-sim-1_OneThousand-src.0 (snapvault)
 46% (23%)    0% ( 0%)  Apr 15 11:31  fas-sim-2(0099931872)_OneThousand_backup_two-src.0 (snapvault)
 53% (21%)    0% ( 0%)  Apr 15 11:31  fas-sim-2(0099931872)_OneThousand_backup_three-src.0 (snapvault)
fas-sim-1>

After a SnapVault Update/Protect Now:

fas-sim-1> snap list OneThousand
Volume OneThousand
working...

  %/used       %/total  date          name
----------  ----------  ------------  --------
 27% (27%)    0% ( 0%)  Apr 15 11:39  dfpm_base(OneThousand.436)conn1.0 (snapvault,acs)   <<< SV base snapshot with the dataset name & id
 39% (21%)    0% ( 0%)  Apr 15 11:38  2012-04-16 12:40:54 daily_fas-sim-1_OneThousand.-.one.three.two   <<< backup snapshot created by Protect Now
fas-sim-1>

As the maximum number of snapshots per volume is 255, after creating 255 qtree SnapVault relationships the dataset will become non-conformant, with an error saying no snapshot is available. Now run a Protect Now from Protection Manager: all these 255 base snapshots will be coalesced into one. The dataset will still show its conformance status as non-conformant; click on it and select Conform Now. PM will then create relationships for the next 253 qtrees (one snapshot is already used by dfpm_base and another by PM's backup snapshot). Once this is done, it will again fail due to non-availability of snapshots; run Protect Now again. Keep doing the same until all 1000 qtrees are snapvaulted.

The downside is that the maximum number of concurrent SV streams per controller is limited and varies with the ONTAP version, the FAS model, and whether the NearStore license is enabled. The regular scheduled updates of this volume will consume all SV threads until they finish, which can stretch the backup window and delay snapshot creation on the secondary, as all 1000 qtrees need to be snapvaulted before an SV snapshot can be created on the destination. This is the only downside I can think of. The limit of 50 was chosen mainly for QSM, as each qtree in a QSM relationship needs its own base snapshot, leaving only the remaining 205 snapshots (out of the 255 maximum per volume) available for long-term retention.

Also remember that the option you are changing is a global option and applies to all datasets creating SV relationships; see the P.S. below.

Regards
adai
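P.S. I believe the global option in question is the DFM option that caps SnapVault relationships per secondary volume. The option name below is from memory, so please confirm it against "dfm option list" on your server before changing anything; the value 400 is just an example:

C:\>dfm option list pmMaxSvRelsPerSecondaryVol
C:\>dfm option set pmMaxSvRelsPerSecondaryVol=400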
Hi Erik,

Since you said you have 6 CPUs, can you check whether you are hitting the VMware problem described below?

Determining if multiple virtual CPUs are causing performance issues

Regards
adai
Hi Eric,

As per the public report, this happens due to LDAP and the upgrade to OC 5.0. BTW, is the Windows DFM server configured for LDAP?

http://support.netapp.com/NOW/cgi-bin/bol?Type=Detail&Display=533814

Regards
adai
Hi Michael,

I smell a bug here. Will confirm and get back to you. In the meantime, can you open a case with NetApp Global Support?

Regards
adai
Hi Real,

DFM currently has support only for RLM cards. Support for the SP is being added as part of OnCommand 5.0.1, which is around the corner for release. As far as I know and remember, DFM does not support the BMC; I will confirm and update this post if it's otherwise.

Regards
adai
This is available out of the box in OnCommand/DFM, which comes along with the NetApp controllers you buy.

C:\>dfm eventtype list | findstr /i snapshot
aggregate-snapshot-reserve-almost-full   Warning      aggregate.snapshot
aggregate-snapshot-reserve-full          Warning      aggregate.snapshot
aggregate-snapshot-reserve-ok            Normal       aggregate.snapshot
lun-snapshot-not-possible                Warning      lun.snapshot
lun-snapshot-possible                    Normal       lun.snapshot
snapshot-full                            Warning      df.snapshot.kbytes
snapshot-space-ok                        Normal       df.snapshot.kbytes
snapshot:created                         Normal       snapshot
snapshot:failed                          Error        snapshot
snapshots:disabled                       Information  snap-status
snapshots:enabled                        Normal       snap-status
snapshots:not-too-old                    Normal       old-snaps
snapshots:too-old                        Warning      old-snaps
volume-first-snapshot-ok                 Normal       volume.first-snap
volume-nearly-no-first-snapshot          Warning      volume.first-snap
volume-new-snapshot                      Normal       snapshot.discovered
volume-next-snapshot-not-possible        Warning      volume.next-snapshot
volume-next-snapshot-possible            Normal       volume.next-snapshot
volume-no-first-snapshot                 Warning      volume.first-snap
volume-snapshot-deleted                  Normal       snapshot.deleted
volume-snapshots-auto-deleted            Information  snapshot.autodelete
C:\>

Regards
adai
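P.S. To actually get notified on one of these events, you would attach an alarm to it. A minimal sketch; the e-mail address is made up, and the flag names are from my memory of the DFM CLI, so please double-check against "dfm alarm help" on your version:

C:\>dfm alarm create -E snapshot:failed -e storage-admins@example.com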
Hi Scott,

Do all your OSSV hosts have an NHA (NetApp Host Agent) as well? If so, you seem to be a victim of bug 556462. The public report of the same is available below and has a workaround. Has this workaround been done in your case? If not, can you tell us your observations after doing the specified workaround?

http://support.netapp.com/NOW/cgi-bin/bol?Type=Detail&Display=556462

Regards
adai
One way to achieve this is by having the throttle set to zero at all times other than the scheduled backup window. Below is the conformance error message that I got.

Conformance Results

=== SEVERITY ===
Error: Attention: No available bandwidth to start a backup relationship for Qtree: fas-sim-1:/TestDs/-
=== ACTION ===
Creating new backup relationship
=== REASON ===
Current throttle value is 0.
=== SUGGESTION ===
Change the throttle setting or retry the task at a different time.

=== SEVERITY ===
Error: Attention: No available bandwidth to start a backup relationship for Qtree: fas-sim-1:/TestDs/later
=== ACTION ===
Creating new backup relationship
=== REASON ===
Current throttle value is 0.
=== SUGGESTION ===
Change the throttle setting or retry the task at a different time.

The conformance monitor runs once every hour and will try to conform but will fail; once bandwidth becomes available, the baseline/relationship creation happens. Below is a picture of how my throttle looks.

Regards
adai
Hi Craig and Rahul,

If you have time, read this doc on how history data is maintained in DFM. For each database table, the Operations Manager server saves sample values for periods of the following durations:
• Each daily history sample covers 15 minutes.
• Each weekly history sample covers 2 hours.
• Each monthly history sample covers 8 hours.
• Each quarterly history sample covers 1 day.
• Each yearly history sample covers 4 days.

Purging of older samples from history tables: to keep the database size under control, samples from each of the history tables are purged when they get old. A maximum of 150 samples are kept in each sample history table for one storage object, which translates into:
• 37.5 hours in the daily sample table
• 12.5 days in the weekly sample table
• 50 days in the monthly sample table
• 5 months in the quarterly sample table
• Samples in the yearly sample table are never purged.

The Operations Manager UI does not provide graphs that span longer than a year; the "dfm graph" CLI can be used to get older data from the yearly sample table. A quick sanity check on these figures is in the P.S. below. Here is the link to the doc:

Storage Capacity Management using OnCommand Operations Manager

Regards
adai
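P.S. Those retention figures are just the 150-sample cap multiplied by each table's sample period:

150 x 15 minutes = 2,250 minutes = 37.5 hours  (daily table)
150 x 2 hours    = 300 hours     = 12.5 days   (weekly table)
150 x 8 hours    = 1,200 hours   = 50 days     (monthly table)
150 x 1 day      = 150 days      ≈ 5 months    (quarterly table)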