Microsoft Virtualization Discussions
Hello,
I'm running the following scenario at the moment:
Standard SSP 2.0 provisioning is working fine using the default scripts and templates via SCVMM.
On the SCVMM/SSP server I have the OnCommand cmdlets installed and added to the host PowerShell profile. I have used the NetApp scripts provided and added them to the customised actions in SSP. The SCVMM server has a library share with a generalised VHD and template. All LUNs have been created on the same volume, including the LUN hosting the library share with the VHD. For info, the library share is on a pass-through disk.
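For reference, the profile step was along these lines (a rough sketch only, assuming the cmdlets ship as a snap-in rather than a module; the snap-in name below is a placeholder since I don't have the exact registered name to hand - Get-PSSnapin -Registered will show the real one):
# See which NetApp snap-ins are registered on this server
Get-PSSnapin -Registered | Where-Object { $_.Name -match "NetApp" }
# Add the load to the profile so every session gets the cmdlets
# ("NetApp.OnCommand.Cmdlets" is a placeholder - use the name returned above)
Add-Content $profile 'Add-PSSnapin NetApp.OnCommand.Cmdlets'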
When creating a VM in SSP specifying the NetApp ONTapCreateVM.txt and ONTapCreateVMLocked.txt scripts, I get the following error:
Operation failed. Exiting progress message processing. Status TemplateCloneEngine::Clone: ValidateVHDDiskPaths failed reason Validate VHD path found that following VHD paths are not on NetApp LUN L:\TemplateVHDs\disk1.vhd
The operation runs for approximately 5 seconds before the error is generated.
I have attached the WebService.log file.
I appreciate I may not have covered the environment in much detail, but feel free to ask and I'll give more information.
Kind Regards
Hi Richard,
I was able to find a few similarities between your errors/logs and some internal BURTs (bug reports). I will see if we can get more details on this, but it looks like it may be due to DNS resolution, IP addressing, or something of that nature. I've actually seen a similar error in the previous 2.1.1 version, which turned out to be an issue with the location of my library share on an SCVMM VM. Not sure if this is the same thing, but I will request more info and get back to you.
Thanks for the feedback
Thanks Watan - appreciate that. Will be sure to keep up to date with feedback once this issue is resolved
Is L:\ on NetApp lun?
Yes - sorry, I probably should have clarified that. It is most certainly on a NetApp LUN. Perhaps worth noting that the LUN is attached to the SCVMM server as a pass-through disk, and has a single share configured (permissions are OK) with the template VHDs stored there. The location I am attempting to clone to is in the same NetApp volume.
I think we only support the clone process if it's a VHD, as we can't see inside the pass-through LUN, but I will get somebody to chime in. This sounds similar to the issue I had in my env. Also, just to be sure: we had an issue in 2.1.1 where a restart of the SSP service was required. Can you please confirm you've restarted it?
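If you need to bounce it, something like this should do it (the display-name filter below is my guess - confirm the exact service name in services.msc first):
# Restart the SSP service - the display-name match is a guess
Get-Service | Where-Object { $_.DisplayName -match "Self-Service Portal" } | Restart-Service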
http://now.netapp.com/NOW/cgi-bin/bol?Type=Detail&Display=459525
Ah. That is most likely your problem.
A pass-through disk is perceived by Windows to be a physical disk. The guest OS where you are running the cmdlets is not aware that this is a NetApp LUN and cannot perform the actions it needs to perform. Try mounting a guest LUN (via iSCSI) and running rapid provisioning against the guest LUN.
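For example, inside the guest you can attach the LUN with the built-in iscsicli utility. A rough sketch (the portal IP and target IQN below are placeholders for your own values, and the LUN must already be mapped to the guest's initiator on the filer):
# Point the Microsoft iSCSI initiator at the filer's iSCSI data portal
iscsicli QAddTargetPortal 192.168.10.50
# List the targets the initiator can now see, then log in to the filer's target
iscsicli ListTargets
iscsicli QLoginTarget iqn.1992-08.com.netapp:sn.12345678
Once logged in, the LUN shows up as an ordinary disk in Disk Management, where you can bring it online and give it a drive letter.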
Alex
This would require the iSCSI network ports to be converted to virtual switches so that the Hyper-V hosts and the guest virtual machines could both use the ports on the data network (a vNIC placed within the host OS and vNICs added to the library server, both hanging off the virtual switch). Can you foresee any issues with this, and is this something that you have seen before?
Yes, this works.
However, the recommendation from MSFT and NetApp is to have dedicated NICs for iSCSI. This is to ensure that you don't cause performance issues by sharing the same NIC.
For a lab environment you will be just fine. However, in production I would prefer to add more NICs to the server if possible.
Alex
The NICs are the ones currently used just for iSCSI; we have two separate paths to each server, each with its own subnet/VLAN, patched to switches used just for Layer 2 traffic connected directly to the filers (Server <-> Switch <-> Filer). Part of the switch is segregated for CSV/migration traffic, but that is on a separate NIC pair within the server.
So with the NICs being used just for iSCSI, the volume of traffic would be much the same (just the host, versus host and guests), as we would be changing from a disk mounted on the host via iSCSI and passed to the VM as a pass-through disk, to a disk mounted directly via iSCSI in the guest VM; the only difference I can see would be the overhead of the vSwitch. Should this be seen as a risk within a production environment?
You may see increased latency due to the virtual switch inside of Hyper-V, but that's going to be unavoidable unless you deploy a physical SCVMM server or add another dedicated NIC just for the guests to use. SCVMM won't be putting much load on the LUN if you are just using it as a cloning source. If you're doing "traditional" BITS-based deployments with SCVMM you will see pretty high read loads on the LUN.
If you check the "Allow management operating system to share this network adapter" checkbox, what actually happens is that the host gets a virtualized NIC that it can use. That NIC may have more latency than a "real" physical NIC. If all your VMs are using this NIC for their VHDs, this could reduce the overall system performance.
You will be able to see this in Perfmon. If you look at the logical disk object and the average disk latency counters, you should take a baseline now and then compare it to running through the vNIC. That will tell you if there is a significant impact.
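If you prefer to grab the baseline from PowerShell rather than the Perfmon GUI, something like this works (L: is the library LUN from your setup; adjust the instance name to suit):
# Sample average read/write latency (seconds per transfer) on the L: disk
# 60 one-second samples - run once now, and again after moving to the vNIC
$counters = "\LogicalDisk(L:)\Avg. Disk sec/Read", "\LogicalDisk(L:)\Avg. Disk sec/Write"
Get-Counter -Counter $counters -SampleInterval 1 -MaxSamples 60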
Alex
Thanks for the help and advice.
We have two clusters: one on which we are using the OnCommand scripts within the SSP 2.0 portal, and one using normal BITS-based templates via the portal. We're using this as a technology demo for management, so it would be good to start the BITS deployment at the start of the demo, kick off the rapid provisioning halfway through, and still see the rapid deployment finish first.
Will implement the above solution tomorrow and let you know how it goes.
Again thanks for the help
Richard
Yes, please do let us know.
Also, please be sure to review the rapid provisioning restrictions post in this forum.
Alex
Morning,
I implemented the above today: directly mounted the LUN inside the SCVMM server via iSCSI and put the template VHD in a library share located on that drive. I then created a new template and imported that into SSP 2.0.
I now get a different error message:
Operation failed. Exiting progress message processing. Status An error occured while processing the Create Storage. The requested name is valid, but no data of the requested type was found
The ONTapCreateVMLocked.txt script that it is utilising runs the following line, which seems to be pertinent:
$nvhd = new-clone -verbose -Server $VMHost.Name -vmmserver $VMMServer.Name -Template $templateName -JustCloneVHD -BaseVMName $VMName
I notice this is not using a -Mountpoint or any other such attribute.
What we are trying to achieve is VHD cloning. The hosts have three CSVs on separate volumes. C:\ClusterStorage\Volume1 is on the same volume as the library share's LUN and is set on the Hyper-V hosts as the default VHD location. We want to get to the stage where the config files are on Volume3 and the VHDs are cloned to Volume1.
Any help or pointers on the syntax we need would be gratefully received.
Richard
I've passed your log file on to the dev team to see what's going on there. Thanks for the file.
In the meantime, do you mind doing some troubleshooting for us?
In PowerShell, can you see how far the command gets? There are sub-commands that perform each function in turn. That will help to see where this is breaking.
Try:
New-Storage -StoragePath scceastfl1:/vol/vol_hyperv_primary/TestFromPS -Size 800gb -Mountpoint T:\
clone-file L:\TemplateVHDs\disk1.vhd T:\CloneFromPS.vhd
This will tell us if we can create a new LUN (we create an 800 GB LUN and mount it on T: in the first command) and if we can perform a sub-LUN clone. What the New-Clone command does is create a new LUN (which, from the log, looks like it succeeds) and then perform a sub-LUN clone (which doesn't seem to succeed). If the two commands above work, then we probably have a bug in the SSP script itself.
Again, thanks for your help in working out the kinks in the beta.
Alex
Hi
This is the output from trying to create storage from the SCVMM machine, which has the LUN attached via iSCSI (this is a Hyper-V VM):
PS H:\> New-Storage -verbose -StoragePath scceastfl1:/vol/vol_hyperv_primary/TestFromPS -Size 50gb -Mountpoint T:\
VERBOSE: Starting New-Storage
VERBOSE: Performing operation "New-Storage" on Target "SCCVMM01".
VERBOSE: The user confirmed the Input parameters, proceeding with New-Storage.
VERBOSE: Processing New-Storage...
VERBOSE: SCCVMM01:Starting CreateStorage operation
New-Storage : Operation failed. Exiting progress message processing. Status An error occured while processing the Creat
e Storage.
The requested name is valid, but no data of the requested type was found
At line:1 char:12
+ New-Storage <<<< -verbose -StoragePath scceastfl1:/vol/vol_hyperv_primary/TestFromPS -Size 50gb -Mountpoint T:\
+ CategoryInfo : InvalidArgument: (NetApp.SystemCenter.ScNewStorage:ScNewStorage) [New-Storage], Exceptio
n
+ FullyQualifiedErrorId : NetApp.SystemCenter.ScNewStorage
Attached log.
Cheers
Richard
Is SCVMM on a VM or physical node? What operating system is it?
OK. Thanks for the logs. Looks like we have a bug or a configuration problem with the storage creation part of the process.
If you're up for additional troubleshooting, here is what I would like to try, in order (there's a short paste-able sketch after the list):
1) Are any of the other cmdlets working on this VM? Can you run Get-Storage?
2) Is the OCPM Service running? Under what user credentials? Is this user a local admin? A domain admin?
3) What credentials do you have stored for the filer? Can you do a Get-StorageSystemCredential command? What's stored? Just the IP or the IP and name?
4) What happens if you try this against a different Volume? Is the target volume a flexvol? What version of DOT are you running?
5) What happens if you try this on a Physical host? Can you use New-Storage to create a LUN on the physical host?
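To save some typing, checks 1-3 roughly translate to the following (a sketch only; the service-name filter is a guess on my part, so confirm the actual OCPM service name in services.msc):
# 1) Do the other cmdlets work at all? List the storage the plugin can see:
Get-Storage
# 2) Check the OCPM service state and the account it runs under:
Get-WmiObject Win32_Service | Where-Object { $_.Name -match "OCPM|OnCommand" } | Select-Object Name, State, StartName
# 3) Inspect the stored filer credentials (just the IP, or IP and name?):
Get-StorageSystemCredential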
If you like, you can Private-Message me (via my user profile in communities) and we can do a 1:1 troubleshooting session via WebEx.
Alex
Thanks for the response. Unfortunately I'm over in the UK and away from the office now, so I will run through these troubleshooting steps in the morning. If you are available tomorrow morning (US time) and I have had no further success, I will take you up on the WebEx offer.
To answer a couple of the points:
One more point: the SSP infrastructure is installed on the SCVMM server.
Will confirm anything outstanding tomorrow.
Cheers
I spoke with Alex last night (UK time) and took him up on the kind offer to WebEx and look into the issue further. We installed the NetApp Data ONTAP (DOT) PowerShell Toolkit and had some problems with that. Alex took some screenshots etc. and is waiting for feedback from the dev team.
Steve Winfield is going to reach out to you. He's our local guy in the UK, so it should be easier to coordinate with someone in your timezone.
In discussions with the dev team this morning, they said that this error could be caused by timeouts from the filer.
Based on our experience with the PS toolkit, I'm wondering if there is a connectivity issue between your environment and the filer. This would be on the management network, not on the iSCSI network. Can you check the connectivity and make sure we're not having a connection issue? Perhaps check to see how heavily loaded the management NIC on the filer is, or the configuration of the network between host and filer?
Another option would be to switch over to a "local" account on the filer. I notice that the stored credential you are using is a domain account. Perhaps the filer is losing its connection to the DC?
Since the PS toolkit is shipping code, that error we saw was not normal. There's something preventing us from talking to the filer reliably.
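As a first pass, something along these lines from the SCVMM server should tell us whether the management path to the filer is healthy (a sketch only: scceastfl1 is the management hostname from your earlier commands, and the last three lines assume the DOT PowerShell Toolkit we installed imports as the DataONTAP module):
# Basic reachability/name resolution to the filer's management interface
Test-Connection scceastfl1 -Count 10
# End-to-end check through the toolkit: connect and pull the ONTAP version
Import-Module DataONTAP
Connect-NaController scceastfl1
Get-NaSystemVersion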
Alex