Active IQ Unified Manager Discussions
With Prot Mgr 3.8, the names of the local snapshot copies include a timestamp. Having a timestamp in the snapshot name is unwieldy, and it is also somewhat redundant, since there is already a "date" column in the output of "snap list". Is there an alternative to the timestamp? If so, is there a way to remove the timestamp from the snapshot name?
--Mike O
Hello Mike,
There is no way to remove the timestamp from the snapshot name.
When you say "Is there an alternative to the timestamp?", what exactly are you looking for?
- Akshay
It appears that the timestamp is used to distinguish the different snapshots:
0% ( 0%) 0% ( 0%) Jul 08 10:00 2009-07-08 15:00:06 hourly_psefiler1_HR_mikeo
0% ( 0%) 0% ( 0%) Jul 08 09:00 2009-07-08 14:00:11 hourly_psefiler1_HR_mikeo
0% ( 0%) 0% ( 0%) Jul 07 16:59 2009-07-07 22:00:06 hourly_psefiler1_HR_mikeo
0% ( 0%) 0% ( 0%) Jul 07 15:59 2009-07-07 21:00:07 hourly_psefiler1_HR_mikeo
0% ( 0%) 0% ( 0%) Jul 07 14:59 2009-07-07 20:00:05 hourly_psefiler1_HR_mikeo
Instead of using the timestamp, use the traditional method of sequencing snapshots:
0% ( 0%) 0% ( 0%) Jul 08 10:00 hourly_psefiler1_HR_mikeo.0
0% ( 0%) 0% ( 0%) Jul 08 09:00 hourly_psefiler1_HR_mikeo.1
0% ( 0%) 0% ( 0%) Jul 07 16:59 hourly_psefiler1_HR_mikeo.2
0% ( 0%) 0% ( 0%) Jul 07 15:59 hourly_psefiler1_HR_mikeo.3
0% ( 0%) 0% ( 0%) Jul 07 14:59 hourly_psefiler1_HR_mikeo.4
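For what it's worth, deriving that sequence from the timestamped names is mechanical, since the embedded timestamp sorts chronologically. A minimal Python sketch (the name layout is assumed from the listing above; this is not a DFM feature):

from datetime import datetime

# Snapshot names as they appear in the listing above:
# "YYYY-MM-DD HH:MM:SS <suffix>"
names = [
    "2009-07-08 15:00:06 hourly_psefiler1_HR_mikeo",
    "2009-07-08 14:00:11 hourly_psefiler1_HR_mikeo",
    "2009-07-07 22:00:06 hourly_psefiler1_HR_mikeo",
]

def split_name(name):
    # The first 19 characters are the embedded timestamp.
    stamp, suffix = name[:19], name[20:]
    return datetime.strptime(stamp, "%Y-%m-%d %H:%M:%S"), suffix

# Newest snapshot gets index 0, matching the hourly.0 convention.
for i, (ts, suffix) in enumerate(sorted((split_name(n) for n in names), reverse=True)):
    print(f"{suffix}.{i}")
# -> hourly_psefiler1_HR_mikeo.0, .1, .2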
--Mike O
Hi Mike,
The timestamp comes in handy when the source and destination filers are in different timezones and you wish to schedule your dataset backup in a particular timezone.
For example, my dataset is in GMT but my filers are in IST, so my dataset backup schedules run on GMT.
C:\>dfpm dataset list -x 5221
Id: 5221
Name: Cust
Policy: Back up, then mirror
Description: Testing for burt+cust
Owner: Adaikkappan
Contact: adaikkap@netapp.com
Volume Qtree Name Prefix: Timestamp
DR Capable: No
Requires Non Disruptive Restore: No
Node details:
Node Name: Primary data
Resource Pools: Primary_RP
Provisioning Policy: pri_pol
Time Zone: GMT
DR Capable: No
vFiler:
Export Protocol: mixed
NFS Protocol Version: v3
Disable setuid: 1
Anonymous Access UID: 0
Read-only Hosts: None
Read-write Hosts: All
Root-access Hosts: None
Security Flavors: sys
CIFS Domain: BTCLAB
CIFS Share Permissions: Everyone:full_control
Node Name: Backup
Resource Pools: Backup_RP
Provisioning Policy: sec_pol
Time Zone:
DR Capable: No
vFiler:
Node Name: Mirror
Resource Pools: Mrr_RP
Provisioning Policy:
Time Zone:
DR Capable: No
vFiler:
C:\>
lnx186-149:/ # snap list Timestamp
Volume Timestamp
working...
%/used %/total date name
---------- ---------- ------------ --------
13% (13%) 0% ( 0%) Jul 08 20:56 dfpm_base(Cust.5221)conn1.0 (snapvault,acs)
28% (19%) 0% ( 0%) Jul 08 20:56 2009-07-08 15:28:54 <--- timestamp
42% (25%) 0% ( 0%) Jul 08 20:51 dfpm_base(Cust.5221)conn1.1 (snapvault)
49% (19%) 0% ( 0%) Jul 08 20:51 2009-07-08 15:23:59 weekly_f3050-184-38_Timestamp.-.qt_cust <--- times are different
71% (60%) 0% ( 0%) Jul 08 20:40 dfpm_base(Cust.5221)conn1.2
72% (12%) 0% ( 0%) Jul 08 20:40 2009-07-08 15:13:06 monthly_f3050-184-38_Timestamp.-.qt_cust
lnx186-149:/ #
As you can see in the above output, the timestamp in the weekly snapshot name differs from the date column for the same snapshot.
49% (19%) 0% ( 0%) Jul 08 20:51 2009-07-08 15:23:59 weekly_f3050-184-38_Timestamp.-.qt_cust
But unfortunately the timestamp can't be turned off like the other naming options.
28% (19%) 0% ( 0%) Jul 08 20:56 2009-07-08 15:28:54
The snapshot and volume naming can be controlled with the options below.
C:\>dfm options list | grep -i pmcus
pmCustomNameUseHostName No
pmCustomNameUsePrefix Yes
pmCustomNameUseQtreeList No
pmCustomNameUseRetentionType No
pmCustomNameUseType No
pmCustomNameUseVolumeName No
C:\>
In this case the snapshot name will only have the timestamp (i.e., the timestamp can't be turned off).
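As an outside-in illustration (this is not Protection Manager's actual code), the flags behave as if each one toggles a component of the generated name, while the timestamp component has no flag at all. A hedged Python sketch, with the field values taken from the examples in this thread:

# Illustrative sketch only -- NOT Protection Manager's implementation.
# Each pmCustomNameUse* option adds or drops one name component; the
# timestamp has no corresponding option, so it is always present.
def custom_name(timestamp, retention_type, host, prefix,
                use_retention_type=True, use_host=True, use_prefix=True):
    parts = []
    if use_retention_type:        # pmCustomNameUseRetentionType
        parts.append(retention_type)
    if use_host:                  # pmCustomNameUseHostName
        parts.append(host)
    if use_prefix:                # pmCustomNameUsePrefix
        parts.append(prefix)
    return (timestamp + " " + "_".join(parts)).rstrip()

print(custom_name("2009-07-08 15:00:06", "hourly", "psefiler1", "HR_mikeo"))
# -> 2009-07-08 15:00:06 hourly_psefiler1_HR_mikeo
print(custom_name("2009-07-08 15:28:54", "hourly", "f3050-184-38", "qt_cust",
                  use_retention_type=False, use_host=False, use_prefix=False))
# -> 2009-07-08 15:28:54   (only the timestamp remains)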
Regards
adai
My point was that if you want to know when the snapshot was created, you can simply look at the "date" column in the output from "snap list":
%/used %/total date name
---------- ---------- ------------ --------
0% ( 0%) 0% ( 0%) Jul 08 10:00 2009-07-08 15:00:06 hourly_psefiler1_HR_mikeo
0% ( 0%) 0% ( 0%) Jul 08 09:00 2009-07-08 14:00:11 hourly_psefiler1_HR_mikeo
0% ( 0%) 0% ( 0%) Jul 07 16:59 2009-07-07 22:00:06 hourly_psefiler1_HR_mikeo
0% ( 0%) 0% ( 0%) Jul 07 15:59 2009-07-07 21:00:07 hourly_psefiler1_HR_mikeo
0% ( 0%) 0% ( 0%) Jul 07 14:59 2009-07-07 20:00:05 hourly_psefiler1_HR_mikeo
--Mike O
Hi Mike --
Yes, you could, but in your example none of the timestamps actually match. And, depending on your situation, you might not be able to run "snap list"; you might only have the snapshot names (e.g., if you just navigate into the .snapshot directory). We get many customers who want to sort out what a given volume, qtree, or snapshot represents based on nothing but the name, so embedding a timestamp is useful.
We could consider adding an option to use an index instead of a timestamp, but there are uniqueness issues. We could code around those if we needed to.
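To make the trade-off concrete: index-style names are not stable, so every new snapshot implies renaming the whole chain, whereas a timestamped name never changes after creation. A toy Python sketch of that bookkeeping (not how ONTAP actually implements it):

# Toy model only -- not ONTAP's implementation. With index-style names,
# taking a snapshot shifts every existing name (.0 -> .1, .1 -> .2, ...);
# a timestamped name, by contrast, is fixed at creation time.
def take_snapshot(existing, prefix="hourly", keep=4):
    shifted = [f"{prefix}.{i + 1}" for i in range(len(existing))]
    return [f"{prefix}.0"] + shifted[:keep - 1]

snaps = []
for _ in range(5):
    snaps = take_snapshot(snaps)
print(snaps)  # ['hourly.0', 'hourly.1', 'hourly.2', 'hourly.3']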
-- Pete
Pete,
With regard to the timestamps not matching, my client wondered the same thing. As it turns out, the timestamps in the snapshot names are GMT, even though all of my filers AND all of my datasets are on Central time. As you can see from my earlier post, the date of the snapshot and the timestamp in the snapshot name differ by 5 hours. Perhaps there is an option that I am missing. But that's another thing that's going to confuse the heck out of the client.
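For what it's worth, a 5-hour gap is exactly GMT vs. US Central daylight time for a July date, which supports that diagnosis. A quick Python check (3.9+ for zoneinfo; on Windows the tzdata package may be needed):

from datetime import datetime, timezone
from zoneinfo import ZoneInfo

# Timestamp embedded in the snapshot name, interpreted as GMT/UTC.
embedded = datetime.strptime("2009-07-08 15:00:06", "%Y-%m-%d %H:%M:%S")
embedded = embedded.replace(tzinfo=timezone.utc)

local = embedded.astimezone(ZoneInfo("America/Chicago"))
print(local)  # 2009-07-08 10:00:06-05:00 -- matches the "Jul 08 10:00" date column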
I understand your use case for using the timestamp to sort out the snapshots. However, a common use case is to simply direct users to the hourly.0 or daily.0 snapshot directory. That is the use case my particular client is accustomed to. And they will have volumes that are backed up outside of Protection Manager, with the filers' built-in scheduler.
--Mike O
The client pointed out the following issue with the new snapshot names. If you access the CIFS shares in Windows Explorer and go to the ~snapshot directory to perform a user-directed restore, the snapshot names/directories all get truncated:
2009-0~1
2009-0~2
2009-0~3
2009-0~4
Hence, the client cannot determine which snapshot to use. Is 2009-0~1 the latest? At least with hourly.0, hourly.1, etc., they know hourly.0 is the latest.
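A rough approximation of Windows 8.3 name mangling shows why every timestamped name collapses into the same 2009-0~N bucket (deliberately simplified; the real algorithm has more rules):

# Simplified sketch of 8.3 short-name generation -- the real Windows
# algorithm differs in detail. Characters that are illegal in 8.3 names
# (spaces, colons) are dropped, the first six survivors are kept, and a
# ~N counter disambiguates collisions.
def short_name(long_name, taken):
    cleaned = "".join(c for c in long_name if c.isalnum() or c in "-_")
    base = cleaned[:6].upper()
    n = 1
    while f"{base}~{n}" in taken:
        n += 1
    taken.add(f"{base}~{n}")
    return f"{base}~{n}"

taken = set()
for name in ["2009-07-08 15:00:06 hourly_x", "2009-07-08 14:00:11 hourly_x",
             "2009-07-07 22:00:06 hourly_x"]:
    print(short_name(name, taken))
# -> 2009-0~1, 2009-0~2, 2009-0~3: the ~N order says nothing about recency

By contrast, a name like hourly.0 is already a valid 8.3 name, so Explorer never mangles it.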
--Mike O
Hi Mike,
I am facing the same problem too when I access the ~snapshot directory.
But there is an option to change the colon in the snapshot names to a hyphen, after which the snapshot names in the ~snapshot directory appear fine.
Please find attached the screenshot of my snapshot directory with the old and new names.
By default the option's value is "no"; change it to "yes".
C:\>dfm options set pmUseSDUCompatibleSnapshotNames=yes
Changed use snapshot name compatible with SnapDrive for unix for snapshot created by the protection manager to Yes.
C:\>
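Judging by the option's description, the effect on the embedded timestamp is just the colon-to-hyphen substitution, roughly (illustrative only; the exact resulting format may differ):

# Illustration of the rename the option performs, per its description:
# colons (illegal in Windows file names) become hyphens.
print("2009-07-08 15:28:54".replace(":", "-"))
# -> 2009-07-08 15-28-54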
Regards
adai
Hi everyone...
I have the same issue with snapvaulted relationships.
In my case I had a volume on my primary storage on aggr0. I moved this volume (with snapmirror) to aggr1 and reconfigured my DFM policy, but since then the snapshot folder on a Windows client (2003/7/2008) shows truncated (8.3) folder names, while the snapshot list is OK...
In this DFM policy there is also another volume from another filer... Everything is fine for that one.
I looked everywhere on the secondary and primary filers: vol options, vol lang, snapshot list...
DFM server - pmCustomNameUseHostName (yes) pmCustomNameUsePrefix (yes) pmCustomNameUseQtreeList (yes) pmCustomNameUseRetentionType (yes) pmCustomNameUseType (yes) pmCustomNameUseVolumeName (yes), pmUseSDUCompatibleSnapshotNames (yes)...
All ideas are welcome
Best regards,
Paul
Hello All,
I resolved my problem, but it's not really a solution...
I had to let DFM create the destination volume itself.
best regards,
Paul
Hi Paul,
You are on a very old release, which I think is already EOS, IIRC. I recommend you move to DFM/OCUM 5.2, where we have changed the snapshot naming conventions to avoid this problem.
Also, we have a feature called secondary volume migration, which allows you to migrate destination volumes across or within a controller.
Regards
adai
I'm finally getting back to this one.
When I browse a CIFS share from Windows XP, I see the full snapshot names. This happens both from Windows Explorer and a DOS prompt. Did you do anything special to make your systems show only the 8.3 names?