
SnapVault/Protection Manager Preferred Data Connection

How does the preferred data connection work, or rather, why does it not work?

I am currently implementing a B2D (backup-to-disk) concept with OM/PM 3.7.1P3 and SnapVault as the backup protocol.

  • nsb001 and nsb002 are my SnapVault secondary systems
  • ns0069 is one of my 60 primary systems
  • ns0068 is also one of my primary systems

  • on ns0069 I have configured the NDMP preferred interface, but PM cannot initiate the SnapVault relationships when this option is configured.
  • on ns0068 the option works, but on the secondary system the incoming SnapVault traffic goes through the admin network.

b2d_1.JPG

b2d_2.JPG

b2d_3.JPG

My main question is: how can I configure my systems and Protection Manager to ensure that my SnapVault traffic goes over a dedicated network interface on the primary and also on the secondary side, even if the secondary is reachable over different interfaces? And also: why is PM getting a preferred interface of 0.0.0.0?

Thanks in advance

Michael

Options and Config nsb002:

ndmpd.preferred_interface    vif1       (value might be overwritten in takeover)

nsb002> rdfile /etc/rc
#Auto-generated by setup Thu Mar 12 12:22:50 GMT 2009
hostname nsb002
ifconfig e0a `hostname`-e0a mediatype auto flowcontrol full netmask 255.255.255.240 -wins partner e0a
vif create multi vif1-active -b ip e2a
vif create multi vif1-passive -b ip e3a
vif create single vif1 vif1-active vif1-passive
vif favor vif1-active
ifconfig vif1 10.68.208.71 netmask 255.255.252.0 partner vif1
route add default 10.66.213.65 1
routed on
options dns.domainname wdf.sap.corp
options dns.enable on
options nis.enable off
savecore

http://asup-search.corp.netapp.com/asupdw/asupEmails/20090408/dataInfo/html/index-AE200904087792.htm

Options and Config OM/PM:

pmAutomaticSecondaryVolMaxSizeMb  5242880
pmQSMBackupPreferred              No

Options and Config ns0069:

ndmpd.preferred_interface    vif1       (value might be overwritten in takeover)

ns0069> rdfile /etc/rc
#Auto-generated by setup Wed May 28 06:48:30 GMT 2008
hostname ns0069
vif create multi vif2 -b ip e0a e0c
ifconfig vif2 `hostname`-vif2 mediatype auto netmask 255.255.240.0 partner vif2
#
vif create multi vif1-active -b ip e9
vif create multi vif1-passive -b ip e0b e0d e0f
vif create single vif1 vif1-active vif1-passive
vif favor vif1-active
ifconfig vif1 10.70.1.16 netmask 255.255.0.0 partner vif1
ifconfig vif2 10.66.213.10 netmask 255.255.240.0 -wins partner vif2
#
route add default 10.70.1.1 1
routed on
options dns.domainname wdf.sap.corp
options dns.enable on
options nis.enable off
savecore

e0a: flags=948043<UP,BROADCAST,RUNNING,MULTICAST,TCPCKSUM> mtu 1500
ether 02:a0:98:05:c4:00 (auto-1000t-fd-up) flowcontrol full
trunked vif2
e0b: flags=948043<UP,BROADCAST,RUNNING,MULTICAST,TCPCKSUM> mtu 1500
ether 02:a0:98:05:c4:03 (auto-1000t-fd-up) flowcontrol full
trunked vif1-passive
e0c: flags=948043<UP,BROADCAST,RUNNING,MULTICAST,TCPCKSUM> mtu 1500
ether 02:a0:98:05:c4:00 (auto-1000t-fd-up) flowcontrol full
trunked vif2
e0d: flags=948043<UP,BROADCAST,RUNNING,MULTICAST,TCPCKSUM> mtu 1500
ether 02:a0:98:05:c4:03 (auto-1000t-fd-up) flowcontrol full
trunked vif1-passive
e0e: flags=108042<BROADCAST,RUNNING,MULTICAST,TCPCKSUM> mtu 1500
ether 00:a0:98:05:c4:04 (auto-unknown-cfg_down) flowcontrol full
e0f: flags=948043<UP,BROADCAST,RUNNING,MULTICAST,TCPCKSUM> mtu 1500
ether 02:a0:98:05:c4:03 (auto-1000t-fd-up) flowcontrol full
trunked vif1-passive
e8a: flags=1108042<BROADCAST,RUNNING,MULTICAST,TCPCKSUM> mtu 1500
ether 00:07:43:05:46:e8 (auto-10g_sr-fd-cfg_down) flowcontrol full
e8b: flags=1108042<BROADCAST,RUNNING,MULTICAST,TCPCKSUM> mtu 1500
ether 00:07:43:05:46:e9 (auto-10g_sr-fd-cfg_down) flowcontrol full
e9: flags=948043<UP,BROADCAST,RUNNING,MULTICAST,TCPCKSUM> mtu 1500
ether 02:a0:98:05:c4:03 (auto-10g_sr-fd-up) flowcontrol full
trunked vif1-active
lo: flags=1948049<UP,LOOPBACK,RUNNING,MULTICAST,TCPCKSUM> mtu 8160
inet 127.0.0.1 netmask 0xff000000 broadcast 127.0.0.1
ether 00:00:00:00:00:00 (VIA Provider)
vif2: flags=4948043<UP,BROADCAST,RUNNING,MULTICAST,TCPCKSUM,NOWINS> mtu 1500
inet 10.66.213.10 netmask 0xfffff000 broadcast 10.66.223.255
partner vif2 (not in use)
ether 02:a0:98:05:c4:00 (Enabled virtual interface)
vif1-active: flags=948043<UP,BROADCAST,RUNNING,MULTICAST,TCPCKSUM> mtu 1500
ether 02:a0:98:05:c4:03 (Enabled virtual interface)
trunked vif1
vif1-passive: flags=948043<UP,BROADCAST,RUNNING,MULTICAST,TCPCKSUM> mtu 1500
ether 02:a0:98:05:c4:03 (Enabled virtual interface)
trunked vif1
vif1: flags=948043<UP,BROADCAST,RUNNING,MULTICAST,TCPCKSUM> mtu 1500
inet 10.70.1.16 netmask 0xffff0000 broadcast 10.70.255.255
partner vif1 (not in use)
ether 02:a0:98:05:c4:03 (Enabled virtual interface)

http://asup-search.corp.netapp.com/asupdw/asupEmails/20090408/dataInfo/html/index-AE200904087786.htm

Options and Config ns0068:

ndmpd.preferred_interface    vif1       (value might be overwritten in takeover)

http://asup-search.corp.netapp.com/asupdw/asupEmails/20090408/dataInfo/html/index-AE200904087786.htm

Re: SnapVault/Protection Manager Preferred Data Connection

Hi Michael,

Protection Manager honors the NDMP preferred interface setting on the primary storage system when performing a SnapVault backup, and on the secondary storage system when restoring to a primary storage system.

When performing a SnapVault backup, if the ndmpd.preferred_interface option is set on a primary storage system, Protection Manager 3.6 or later uses only that interface for SnapVault data transfer. If the secondary storage system cannot connect to that interface's IP address, the backup job will fail; Protection Manager will not fall back to other available IP addresses (including the primary address of the storage system) for SnapVault data transfer.

If the ndmpd.preferred_interface option is not set on a primary storage system, Protection Manager 3.6 or later uses the primary address of the storage system for SnapVault data transfer. If the secondary storage system cannot connect to that primary IP address, the backup job will fail; Protection Manager will not try other available IP addresses for SnapVault data transfer.
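As a sketch of how this looks on a primary (the vif name vif1 is taken from the configs above; adjust to your own interface):

ns0069> options ndmpd.preferred_interface vif1
ns0069> options ndmpd.preferred_interface
ndmpd.preferred_interface    vif1       (value might be overwritten in takeover)

The secondary must then be able to reach the address configured on vif1, or the backup job fails as described.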

When performing a SnapVault restore, the ndmpd.preferred_interface setting on the secondary storage system is used in a similar manner to that described above.

The default behavior described above can be overridden by setting the global option ndmpDataUseAllInterfaces to yes on the DataFabric Manager server. In that case, DFM will try all available IP addresses for SnapVault data transfer.
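The DFM CLI syntax for that would be along these lines (run on the DataFabric Manager server; a hedged sketch, not verified against 3.7.1):

dfm option set ndmpDataUseAllInterfaces=yes
dfm option list ndmpDataUseAllInterfaces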

Protection Manager does not use the NDMP preferred interface setting on an Open Systems SnapVault primary when performing SnapVault backups.

Protection Manager does not use the NDMP preferred interface setting on a SnapVault secondary storage system when restoring to an Open Systems SnapVault primary.

Thanks and regards

Shiva Raja