
SnapProtect SAN transport forcing NBD

MSavage

We have SnapProtect v10 SP10 set up in a multi-tenant environment. Our VSA/MA virtual servers are deployed in each customer's VLAN and separated by a throughput-limited firewall from the CommServe, ESX and vCenter servers. On the infrastructure network where the CommServe/ESX/vCenter live, we deployed a dedicated stand-alone physical ESX proxy and a physical VSA/MA server with every datastore LUN mapped to it to allow SAN transport, as we are an FC shop. When we try to restore granular file/folder data to customers' VMs, the logs show the restore job being forced to use NBD transport. I have read about thin/thick disks and thought it was forcing this transport because of a thin disk, but I get the same result when the VM we're restoring to has all thick-provisioned disks. Now, as you will read in the CommVault forum post linked below, it is stated that this is done by design; I could accept that if thin disks are present, but what if the VM is all thick?
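
One sanity check worth running on the physical proxy is to ask VDDK directly which transports it advertises; if "san" never appears in that list, the proxy cannot see the datastore LUNs at all and the disk type is a red herring. A minimal sketch, assuming the VDDK C SDK is available on the proxy (the install path and version numbers are placeholders for whatever your SnapProtect version ships):

    #include <stdio.h>
    #include "vixDiskLib.h"

    int main(void)
    {
        /* Placeholder VDDK version and install path -- match your local SDK. */
        VixError err = VixDiskLib_Init(5, 5, NULL, NULL, NULL,
                                       "/usr/lib/vmware-vix-disklib");
        if (VIX_FAILED(err)) {
            fprintf(stderr, "VixDiskLib_Init failed: %llu\n",
                    (unsigned long long)err);
            return 1;
        }

        /* Colon-separated list in priority order, e.g. "file:san:hotadd:nbdssl:nbd".
         * If "san" is missing here, no job on this proxy can ever use SAN transport. */
        printf("Advertised transports: %s\n", VixDiskLib_ListTransportModes());

        VixDiskLib_Exit();
        return 0;
    }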

 

I am trying to understand how to speed up these NBD transfers, as the whole reason we deployed the SAN proxy was to leverage fast FC backups and restores.

 

Restore flow as I understand it for NBD (all our VSAs are also MAs):

1) A request is made to the CommServe to browse the file index, and the target MediaAgent is contacted to retrieve this index.

2) Files are selected to be restored; in our case we are restoring back to the source VM what was backed up, but placing the files in an alternate location/folder.

3) The VSA/MA server opens a connection to vCenter on port 443 to initiate access.

4) The respective snapshot is mounted to the dedicated ESX proxy (in our case), and the physical VSA/MA server should be able to access this mounted snapshot volume and then begin the restore.

 

Now, here is what we read should happen if you're using SAN transport:

5) The mounted snap volume is accessed and the file/folder data is restored to the target datastore/VM

 

Based on being forced into NBD mode on the restore, here is what we are observing:

5) The physical VSA/MA accesses the snapshot/volume and then begins to transfer the data over the network instead (see the sketch below).
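
I don't know exactly how SnapProtect drives this internally, but at the VDDK layer the effective mode can be queried per disk once it is opened, which is a handy way to confirm what a job really negotiated. A rough sketch; the server name, VM moref, credentials and disk path are all placeholders, not our real values:

    #include <stdio.h>
    #include <string.h>
    #include "vixDiskLib.h"

    int main(void)
    {
        VixDiskLib_Init(5, 5, NULL, NULL, NULL, NULL);

        VixDiskLibConnectParams params;
        memset(&params, 0, sizeof(params));
        params.serverName = "vcenter.example.local";   /* placeholder vCenter */
        params.vmxSpec    = "moref=vm-1234";           /* placeholder VM moref */
        params.credType   = VIXDISKLIB_CRED_UID;
        params.creds.uid.userName = "backup-svc";      /* placeholder account */
        params.creds.uid.password = "********";
        params.port = 443;

        VixDiskLibConnection conn = NULL;
        /* Ask for SAN first but allow fallback, mirroring the default behaviour. */
        VixError err = VixDiskLib_ConnectEx(&params, TRUE, NULL /* snapshot moref */,
                                            "san:nbdssl:nbd", &conn);
        if (VIX_FAILED(err)) { fprintf(stderr, "connect failed\n"); return 1; }

        VixDiskLibHandle disk = NULL;
        err = VixDiskLib_Open(conn, "[datastore1] somevm/somevm.vmdk", /* placeholder */
                              VIXDISKLIB_FLAG_OPEN_READ_ONLY, &disk);
        if (VIX_FAILED(err)) { fprintf(stderr, "open failed\n"); return 1; }

        /* Prints the mode VDDK actually picked -- "san" or, in our case, "nbd". */
        printf("Effective transport: %s\n", VixDiskLib_GetTransportMode(disk));

        VixDiskLib_Close(disk);
        VixDiskLib_Disconnect(conn);
        VixDiskLib_Exit();
        return 0;
    }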

 

What path is this now taking? Is the restore flow going from the physical VSA/MA server back through the firewall and then to the Windows server in the customer segment,

OR

is the path going from the VSA/MA directly to the target ESX host via port 902? (Note: in our case the restore ESX, VSA, CommServe and vCenter are all on the same infrastructure network, so if the restore path is back to the target ESX via its only vmkernel "management network" interface, this should be fairly fast.) What we are experiencing is that the restore is unacceptably slow or just times out.
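
Either way, if NBD is in play the proxy needs a clear TCP path to port 902 (the NFC service) on the ESX host that owns the target datastore, because that is where the NBD stream goes. Here is a quick reachability check to run from the proxy, with the host name as a placeholder (a plain telnet <esx-host> 902 answers the same question):

    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <netdb.h>
    #include <sys/socket.h>

    int main(void)
    {
        struct addrinfo hints, *res;
        memset(&hints, 0, sizeof(hints));
        hints.ai_family   = AF_UNSPEC;
        hints.ai_socktype = SOCK_STREAM;

        /* Placeholder: the ESX host that owns the restore target's datastore. */
        if (getaddrinfo("esx01.example.local", "902", &hints, &res) != 0) {
            fprintf(stderr, "DNS lookup failed\n");
            return 1;
        }

        int fd = socket(res->ai_family, res->ai_socktype, res->ai_protocol);
        if (fd >= 0 && connect(fd, res->ai_addr, res->ai_addrlen) == 0)
            printf("902 reachable: NBD can stream straight to this host\n");
        else
            printf("902 unreachable: NBD traffic must be taking another path\n");

        if (fd >= 0)
            close(fd);
        freeaddrinfo(res);
        return 0;
    }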

 

Do we need to set up a dedicated backup network between our ESX hosts, vCenter and the physical VSA proxy, or do we need to extend this network into the customer networks and add another vNIC to the remote VSA proxies to speed up these NBD transfers?

 

Why can't we force SAN regardless of the disk type used, even if we are penalized on restore speed? 90% of the data we restore is on thick-provisioned disks, but back in the day the VMs were created with thin C: volumes to save space.

 

We keep having a problem similar to the thread below, where our VM restores take forever (hours to restore 5 GB of data) and sometimes just time out.

 

https://forum.commvault.com/forums/14673/ShowThread.aspx#14673

 

Matt

2 REPLIES

subhasha

Did you try restoring exclusively using SAN transport mode in the restore options?

MSavage

For the VM-based "Guest File and Folder" restore option there isn't any option to force the transport mode; under the advanced options there are options to change the data path and to override the ESX host used to mount snapshots, but nothing related to transport mode.

 

What I did to try to force the mode was to set the additional setting vStorageTransportMode to san on the VSA server in question, but then all I get is a pipeline buffer error and the following shows up in the logs. It looks like vStorage is forcing the mode due to factors that I don't quite understand. If you read that CommVault forum post I included, someone comes out and says that it's done that way by design, but why? Is this a VMware limitation? I can't seem to get a direct answer from anyone.

 

The selected transport mode: [san] doesn't match the transport Mode used by vStorage : [nbd]
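
I don't know exactly what SnapProtect passes down when that additional setting is set, but at the VDDK layer restricting the transport list to just "san" means the open fails outright when SAN can't be negotiated instead of falling back, which would surface upstream as exactly this kind of pipeline error. Continuing the earlier sketch (same placeholder connection params and disk path):

    #include <stdio.h>
    #include "vixDiskLib.h"

    /* Same placeholder VixDiskLibConnectParams as in the earlier sketch. */
    VixError try_san_only(VixDiskLibConnectParams *params)
    {
        VixDiskLibConnection conn = NULL;
        /* "san" with no alternatives: VDDK either gets SAN or errors out. */
        VixError err = VixDiskLib_ConnectEx(params, TRUE, NULL, "san", &conn);
        if (VIX_FAILED(err))
            return err;

        VixDiskLibHandle disk = NULL;
        err = VixDiskLib_Open(conn, "[datastore1] somevm/somevm.vmdk", /* placeholder */
                              VIXDISKLIB_FLAG_OPEN_READ_ONLY, &disk);
        if (VIX_FAILED(err))
            fprintf(stderr, "SAN-only open refused -- no NBD fallback allowed\n");
        else
            VixDiskLib_Close(disk);

        VixDiskLib_Disconnect(conn);
        return err;
    }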
