Data Backup and Recovery
Hi,
I have a problem with Snapdrive for Unix 5.0. When I start a "snap connect" SDU 5.0 doesn't correctly create NFS export.
Here is my configuration: a NetApp with two interfaces, one for management (10.0.38.10) and one for data (192.168.0.10). Everything is properly configured in SDU:
[root@lab-oracle ~]# snapdrive config list
username appliance name appliance type
---------------------------------------------------
root 3020demo1 StorageSystem
[root@lab-oracle ~]# snapdrive config list -mgmtpath
system name management interface datapath interface
-------------------------------------------------------
3020demo1 10.0.38.10 192.168.0.10
Here is the problem:
1/ Create a snapshot
[root@lab-oracle log]# snapdrive snap create -fs /home/oracle/app/oracle/oradata/orcl/data -snapname test
Starting snap create /home/oracle/app/oracle/oradata/orcl/data
WARNING: DO NOT CONTROL-C!
If snap create is interrupted, incomplete snapdrive
generated data may remain on the filer volume(s)
which may interfere with other snap operations.
Successfully created snapshot test on 192.168.0.10:/vol/LABORACLE_DB
snapshot test contains:
file system: /home/oracle/app/oracle/oradata/orcl/data
filer directory: 192.168.0.10:/vol/LABORACLE_DB
2/ Mount the snapshot
[root@lab-oracle log]# snapdrive snap connect -fs /home/oracle/app/oracle/oradata/orcl/data /mnt/test -snapname 192.168.0.10:/vol/LABORACLE_DB:test
connecting /mnt/test
to filer directory: 192.168.0.10:/vol/LABORACLE_DB_0
Volume copy 192.168.0.10:/vol/LABORACLE_DB_0 ... created
(original: LABORACLE_DB)
Successfully connected to snapshot 192.168.0.10:/vol/LABORACLE_DB:test
file system: /mnt/test
filer directory: 192.168.0.10:/vol/LABORACLE_DB_0
3/ SnapDrive 4.2 correctly creates the NFS export with root=192.168.0.57:10.0.38.57, but SDU 5.0 does not!
SDU 4.2
/vol/LABORACLE_DB_0 -sec=sys,rw,root=192.168.0.57:10.0.38.57
SDU 5.0
/vol/LABORACLE_DB_0 -sec=sys,rw
This causes problems with SMO jobs, especially during verification or cloning, because SMO has no permission to write on the clone:
server.log.8:2012-02-20 16:47:25,677 [btpool0-19 - /smo_v9/services/SMO] [DEBUG]: OperationCycle status: FAILED, rootErrorCode: 13032, rootErrorMessage: SMO-13032: Cannot perform operation: Backup Create. Root cause: SMO-11007: Error cloning from Snapshot copy: FLOW-11019: Failure in ExecuteConnectionSteps: SD-10038: File system /opt/NetApp/smo/mnt/-home-oracle-app-oracle-oradata-orcl-ctrl2-20120220164625801_0 is not writable. Please ensure that the SnapManager process has write access to the file system. After correcting this, you may need to take another snapshot., opId: ff808081359b707401359b707db20001
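As an interim fix, the missing root option can be re-added by hand on the filer console after the clone volume is created. This is a sketch assuming a Data ONTAP 7-mode filer; the volume name and host IPs are from my setup above, so adjust them to your environment:

exportfs -io sec=sys,rw,root=192.168.0.57:10.0.38.57 /vol/LABORACLE_DB_0

Note that exportfs -io applies the options in memory only (not in /etc/exports), which is acceptable here since the clone volume is temporary anyway.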
Any idea? Is this a bug or not?
Got news from NetApp support: this SDU 5.0 bug will be fixed soon.
All,
This has been reported as BURT 575741: http://support.netapp.com/NOW/cgi-bin/bol?Type=Detail&Display=575741&app=portal
Title:
SnapDrive 5.0 for UNIX operations may fail in NFS environments as SnapDrive prevents a clone volume from inheriting the exports option of the parent volume
Workaround: Modify the export option of the parent volume, replacing "root=<hostname/IP>" with "anon=0", and then retry the SnapDrive or SnapManager operation.
Notes: The above defect will be fixed in the P1 patch of SnapDrive 5.0 for UNIX.
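For reference, the workaround would look something like this on a 7-mode filer (a sketch; /vol/LABORACLE_DB is the parent volume from my setup, so substitute your own):

exportfs -p sec=sys,rw,anon=0 /vol/LABORACLE_DB
exportfs -q /vol/LABORACLE_DB

exportfs -p makes the change persistent in /etc/exports, and exportfs -q shows the current export options so you can verify the result. Be aware that anon=0 maps anonymous (root) requests from any host to UID 0, which is broader access than the original root=<hostname/IP> restriction, so consider reverting it once the P1 patch is applied.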