Hopefully someone can shed some light on the issue we're having. I'm able to use SMVI to back up NFS and VMFS datastores/VMs, but when I restore a VM on a VMFS datastore, it fails. No restore issues with VMs hosted on NFS, just VMFS.
We're running the following in our environment:
SnapManager for Virtual Infrastructure 2.0 (installed on vCenter server)
vSphere 4.0 w/update 1
Messages in the Task pane of SMVI:
One or more mount requests did not succeed. Please check vSphere Client for any error messages.
Failure in DatastoreMountAction: One or more mount requests did not succeed. Please check vSphere for any error messages.
Operation transitioning to abort due to prior failure.
Operation failed: One or more mount requests did not succeed. Please check vSphere Client for any error messages.
Messages in the Recent Tasks pane of the vSphere Client:
Rescan all HBA
I see these actions in the vSphere Client when I submit the restore job.
We're running the controllers in an active/active configuration and have FlexClone licensed on both controllers. The VMFS volumes are presented over Fibre Channel; no iSCSI is used in our environment. Could the issue be somewhere in the Fibre Channel/switch configuration? Thanks again for all your suggestions, and please keep them coming.
Which ESX host are you selecting when you do the VM restore? Does that ESX server have access to the same FC LUN the VM was backed up from? What happens is SMVI FlexClones the backup, connects the FlexClone to the ESX server you specify, then copies the VM from the FlexClone to the source VMFS. So it needs access to both the FlexClone and the original VMFS LUN.
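Given that workflow, it may help to verify from both sides that the target ESX host can actually see the source VMFS LUN (and, during the restore, the mounted FlexClone). A rough diagnostic sketch using 7-Mode controller commands and ESX 4.0 service-console commands; the adapter name vmhba1 is a placeholder, not from your environment:

```shell
# On the NetApp controller: confirm the source LUN is mapped to the
# igroup that contains the target ESX host's WWPNs.
lun show -m        # each LUN with the igroup it's mapped to and its LUN ID
igroup show        # igroups with their initiator WWPNs

# On the target ESX host: rescan the FC adapter and list visible devices.
esxcfg-rescan vmhba1       # vmhba1 is a placeholder adapter name
esxcfg-scsidevs -c         # compact list of SCSI devices the host sees

# Confirm the host's FC WWPNs match what the igroup expects.
esxcfg-scsidevs -a         # lists storage adapters with their WWNs
```

If the source VMFS LUN doesn't show up on the host you picked for the restore, that alone would explain the mount failure.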
I noticed that a new igroup called ESXhostname_smvidg_fcp was created, and I assume this happened during the failed restore process. Its settings appear to be identical to the igroup we created for ESX.
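To rule out a mismatch between the SMVI-created igroup and the one you built by hand, you could compare them directly on the controller. A quick 7-Mode sketch; the second igroup name is a placeholder for whatever you named yours:

```shell
# Compare the auto-created igroup against the manually created one:
# the initiator WWPNs, protocol type (FCP), and ostype should all match.
igroup show ESXhostname_smvidg_fcp
igroup show your_esx_igroup        # placeholder name for your own igroup
```

A differing ostype (it should be vmware) or a missing WWPN in the SMVI igroup would be worth chasing.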