2011-01-11 02:35 AM
Hi there, I've come across an issue with SnapDrive (6.3) that is causing us a bit of a problem, and I hope someone on here might be able to help.
We have a NetApp cluster running production on one head and pre-production on the other. The networking has been set up with two separate VLANs for the servers, so one environment cannot see the other.
I am in the process of setting up an exact copy of production to run in pre-prod, including AD, application servers and SQL servers, using SnapDrive and SnapManager for SQL, all on Server 2003.
There are two separate clusters set up within Virtual Center, and using SMVI and PowerShell (the NetApp and VMware toolkits) I have managed to put together a process that can clone a machine from production to pre-prod.
I am now trying to script the addition of the LUNs to the pre-prod SQL servers, and have come across an issue with SnapDrive.
What it looks like is that SnapDrive queries the VC to get the host the VM is running on and then returns the HBA list for that ESX host. Because the machine names are the same in production and pre-prod, and SnapDrive uses the machine name as the node name, it matches the wrong server and returns the incorrect HBAs. As a result, I cannot mount the copied LUNs onto the pre-prod server using either the GUI or the sdcli commands.
It is all fixed if I rename the server in pre-prod; the VMs are already named differently in Virtual Center and are placed in separate clusters.
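To make the failure concrete, here is a sketch of the kind of connect that breaks. The filer, volume, LUN and guest names are hypothetical, and the sdcli flag set is abbreviated (initiator arguments omitted), so treat it as an illustration rather than exact syntax for your environment:

```shell
# Run on the pre-prod guest, which carries the same Windows machine
# name (here the hypothetical "SQLNODE1") as its production twin:
sdcli disk connect -p filer1:/vol/preprod_sqldata/lun1 -d G -dtype dedicated

# SnapDrive looks up "SQLNODE1" in Virtual Center, matches the
# production VM first, and hands back the production ESX host's HBA
# list, so the connect is attempted against the wrong initiators and
# fails. Renaming the pre-prod guest makes the same command succeed.
```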
Is there a way to either:
a) specify the node name in SnapDrive without renaming the machine, or
b) somehow filter the information SnapDrive gets from the VC to include only results from a specified VM cluster?
From my testing so far, it seems to just return the first machine it finds that matches the node name.
Can anyone help, or would people agree this is something that could/should be added to SnapDrive?
Long post, this one; hope somebody can help out.
2011-01-12 06:56 AM
It may be a good idea to open a case for this so we can start getting the information logged. If it turns out to be a bug then we'll have all the info ready for the dev team to take a look.