I'm running the following scenario at the moment:
Standard SSP 2.0 provisioning is working fine using the default scripts and templates via SCVMM.
On the SCVMM/SSP server I have the OnCommand cmdlets installed and added to the host PowerShell profile. I have taken the NetApp scripts provided and added them to the customised actions in SSP. The SCVMM server has a library share with a generalised VHD and template. All LUNs have been created on the same volume, including the LUN for the library share that hosts the VHD. For info: the library share is on a pass-through disk.
When creating a VM in SSP specifying the NetApp ONTapCreateVM.txt and ONTapCreateVMLocked.txt scripts, I get the following error:
Operation failed. Exiting progress message processing. Status TemplateCloneEngine::Clone: ValidateVHDDiskPaths failed reason Validate VHD path found that following VHD paths are not on NetApp LUN L:\TemplateVHDs\disk1.vhd
The job runs for approximately five seconds before the error is generated.
I have attached the WebService.log file.
I appreciate I may not have covered the environment in much detail, but feel free to ask and I'll provide more information.
Yes, sorry, I probably should have clarified that it is most certainly on a NetApp LUN. Perhaps worth noting that the LUN is attached to the SCVMM server as a pass-through disk and has a single share configured (permissions are OK) with the template VHDs stored there. The location I am attempting to clone to is on the same NetApp volume.
Ah. That is most likely your problem.
A pass-through disk is perceived by Windows to be a physical disk. The guest OS where you are running the cmdlets is not aware that this is a NetApp LUN, so it cannot perform the actions it needs to perform. Try mounting a guest LUN (via iSCSI) and running rapid provisioning against the guest LUN.
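As a rough sketch, connecting the guest to the filer over iSCSI with the built-in Windows initiator looks something like this. The portal address and target IQN below are placeholders; substitute your own values, and use the iSCSI Initiator control panel instead if you prefer a GUI:

```
rem Register the filer's iSCSI portal (placeholder address)
iscsicli QAddTargetPortal 192.168.10.50

rem List the targets the filer presents, then log in to the one you want
iscsicli ListTargets
iscsicli QLoginTarget iqn.1992-08.com.netapp:sn.101234567

rem After mapping the LUN to this initiator on the filer, bring the disk
rem online and assign a drive letter in Disk Management (diskmgmt.msc)
```

Once the LUN shows up as a regular disk inside the guest, the cmdlets can see it as NetApp storage rather than a generic physical disk.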
This would require the iSCSI network ports to be converted to virtual switches so that both the Hyper-V hosts and the guest virtual machines could use the ports on the data network (a vNIC placed within the host OS and vNICs added to the library server, both hanging off the virtual switch). Can you foresee any issues with this, and is it something you have seen before?
Yes, this works.
However, the recommendation from both MSFT and NetApp is to have dedicated NICs for iSCSI. This ensures you don't cause performance issues by sharing the same NIC.
For a lab environment you will be just fine. In production, however, I would prefer to add more NICs to the server if possible.
The NICs are the ones currently used just for iSCSI. We have two separate paths to each server, each with its own subnet/VLAN, patched to switches used only for Layer 2 traffic and connected directly to the filers (Server <-> Switch <-> Filer). Part of the switch is segregated for CSV/migration traffic, but that is on a separate NIC pair within the server.
So with the NICs being used just for iSCSI, the volume of traffic would be the same (just host vs. host and guests), as we would be changing from an iSCSI disk mounted on the host and passed to the VM as a pass-through disk, to a disk mounted directly via iSCSI in the guest VM; the only difference I can see would be the overhead of the vSwitch. Should this be seen as a risk in a production environment?
You may see increased latency due to the virtual switch inside of Hyper-V but that's going to be unavoidable unless you deploy a physical SCVMM server or add another dedicated NIC just for the guests to use. SCVMM won't be putting much load on the LUN if you are just using it as a cloning source. If you're doing "traditional" BITS based deployments with SCVMM you will see pretty high read loads on the LUN.
If you check the "Allow Management Operating System to share this adapter" checkbox, what actually happens is that the host gets a virtualized NIC that it can use. That NIC may have more latency than a "real" physical NIC. If all your VMs are using this NIC for their VHDs, it could reduce overall system performance.
You will be able to see this in Perfmon. If you look at the logical disk object and the average disk latency counters, take a baseline now and then compare it with the latency when running through the vNIC. That will tell you whether there is a significant impact.
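To put numbers on that, a quick way to capture the same counters from PowerShell (the counter path is the standard LogicalDisk object; the sample interval and count here are arbitrary choices):

```powershell
# Sample average disk latency (seconds per transfer) for all logical disks,
# once every 5 seconds for one minute. Run before and after the vNIC change
# and compare the two sets of samples.
Get-Counter -Counter "\LogicalDisk(*)\Avg. Disk sec/Transfer" `
            -SampleInterval 5 -MaxSamples 12
```

As a rule of thumb, latency that sits consistently above roughly 15-20 ms under load is usually worth investigating.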
Thanks for the help and advice.
We have two clusters: one using the OnCommand scripts within the SSP 2.0 portal and one using normal BITS templates via the portal. We're using this as a technology demo for management, so it would be good to start the BITS deployment at the beginning of the demo, kick off the rapid provisioning halfway through, and watch the rapid deployment still finish first.
Will implement the above solution tomorrow and let you know how it goes.
Again thanks for the help
I implemented the above today - directly mounted the LUN inside of the SCVMM server via iSCSI and put the template VHD in a library share located on that drive. I then created a new template and imported that into SSP 2.0
I now get a different error message:
Operation failed. Exiting progress message processing. Status An error occured while processing the Create Storage. The requested name is valid, but no data of the requested type was found
The ONTapCreateVMLocked.txt script that it utilises runs the following line, which seems to be pertinent:
$nvhd = new-clone -verbose -Server $VMHost.Name -vmmserver $VMMServer.Name -Template $templateName -JustCloneVHD -BaseVMName $VMName
I notice this is not using -MountPoint or any other such attribute.
What we are trying to achieve is VHD cloning. The hosts have three CSVs on separate volumes. C:\ClusterStorage\Volume1 is on the same volume as the library share's LUN and is set on the Hyper-V hosts as the default VHD location. We want to get to the stage where the config files are on Volume3 and the VHDs are cloned to Volume1.
Any help or pointers on the syntax we need would be gratefully received.
I've passed your log file on to the dev team to see what's going on there. Thanks for the file.
In the meantime, do you mind doing some troubleshooting for us?
In PowerShell, can you see how far the command gets? There are sub-commands that perform each function in turn. That will help to see where this is breaking.
New-Storage -StoragePath scceastfl1:/vol/vol_hyperv_primary/TestFromPS -Size 800gb -Mountpoint T:\
clone-file L:\TemplateVHDs\disk1.vhd T:\CloneFromPS.vhd
This will tell us whether we can create a new LUN (the first command creates an 800 GB LUN and mounts it on T:) and whether we can perform a sub-LUN clone. What the New-Clone command does is create a new LUN (which from the log looks like it succeeds) and then perform a sub-LUN clone (which doesn't seem to succeed). If the two commands above work, then we probably have a bug in the SSP script itself.
Again, thanks for your help in working out the kinks in the beta.
This is the output from trying to create storage from the SCVMM machine, which has the LUN attached via iSCSI (this is a Hyper-V VM):
PS H:\> New-Storage -verbose -StoragePath scceastfl1:/vol/vol_hyperv_primary/TestFromPS -Size 50gb -Mountpoint T:\
VERBOSE: Starting New-Storage
VERBOSE: Performing operation "New-Storage" on Target "SCCVMM01".
VERBOSE: The user confirmed the Input parameters, proceeding with New-Storage.
VERBOSE: Processing New-Storage...
VERBOSE: SCCVMM01:Starting CreateStorage operation
New-Storage : Operation failed. Exiting progress message processing. Status An error occured while processing the Creat
The requested name is valid, but no data of the requested type was found
At line:1 char:12
+ New-Storage <<<< -verbose -StoragePath scceastfl1:/vol/vol_hyperv_primary/TestFromPS -Size 50gb -Mountpoint T:\
+ CategoryInfo : InvalidArgument: (NetApp.SystemCenter.ScNewStorage:ScNewStorage) [New-Storage], Exceptio
+ FullyQualifiedErrorId : NetApp.SystemCenter.ScNewStorage
OK. Thanks for the logs. Looks like we have a bug or a configuration problem with the storage creation part of the process.
If you're up for additional troubleshooting, here is what I would like to try (in order):
1) Are any of the other cmdlets working on this VM? Can you run Get-Storage?
2) Is the OCPM Service running? Under what user credentials? Is this user a local admin? A domain admin?
3) What credentials do you have stored for the filer? Can you do a Get-StorageSystemCredential command? What's stored? Just the IP or the IP and name?
4) What happens if you try this against a different Volume? Is the target volume a flexvol? What version of DOT are you running?
5) What happens if you try this on a Physical host? Can you use New-Storage to create a LUN on the physical host?
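For reference, points 1 and 3 can be checked straight from the same PowerShell session. This assumes the parameter-less invocations mentioned above work, i.e. the cmdlets fall back to the stored credentials:

```powershell
# 1) Sanity-check that the OnCommand cmdlets can talk to the filer at all
Get-Storage -Verbose

# 3) Show which filer credentials are stored, and whether they are keyed
#    by IP address, by name, or both
Get-StorageSystemCredential
```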
If you like, you can Private-Message me (via my user profile in communities) and we can do a 1:1 troubleshooting session via WebEx.
Thanks for the response. Unfortunately I'm over in the UK and away from the office now, so I will run through these troubleshooting steps in the morning. If you are available tomorrow morning (US time) and I have had no further success, I will take you up on the WebEx offer.
To answer a couple of the points:
One more point - the SSP infrastructure is installed on the SCVMM server.
Will confirm anything outstanding tomorrow.
I spoke with Alex last night (UK time) and took him up on the kind offer to WebEx and look into the issue further. We installed the NetApp DOT PowerShell Toolkit and had some problems with that. Alex took some screenshots etc. and is waiting for feedback from the dev team.
Steve Winfield is going to reach out to you. He's our local guy in the UK, so it should be easier to coordinate with someone in your time zone.
In discussions with the dev team this morning, they said that this error could be caused by timeouts from the filer.
Based on our experience with the PS Toolkit, I'm wondering if there is a connectivity issue between your environment and the filer. This would be on the management network, not on the iSCSI network. Can you check the connectivity and make sure we're not having a connection issue? Perhaps check how heavily loaded the management NIC on the filer is, or the configuration of the network between host and filer?
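A simple way to watch for drops or latency spikes on the management path from the SCVMM box (the filer name below is taken from your New-Storage output; adjust as needed):

```powershell
# Ping the filer's management interface repeatedly and summarise latency.
# Lost packets or large ResponseTime swings would point at the network.
Test-Connection -ComputerName scceastfl1 -Count 50 |
    Measure-Object -Property ResponseTime -Average -Maximum
```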
Another option would be to switch over to a "local" account on the filer. I notice that the stored credential you are using is a domain account; perhaps the filer is losing its connection to the DC?
Since the PS toolkit is shipping code, that error we saw was not normal. There's something preventing us from talking to the filer reliably.
It appears the issue may be the filer, as you said. We have run the PS tools against another filer and they did not come back with the errors we experienced. So now we are trying to get some storage allocated on the working filer to try the scripts again.
We will keep you posted on the progress.
We now have new storage on another filer which runs the DOT PS tools fine. We are still having issues with thin/rapid provisioning. I have run New-Clone manually and the output is as follows. The error is the same if I run New-Storage; however, Clone-File is working.
Any more thoughts on this would be appreciated.
PS C:\ProgramData\Microsoft\Windows\Start Menu\Programs\Accessories\Windows PowerShell> new-clone -Server scchvcl01 -VMMServer sccvmm01 -Template 'Standard Test EFL2' -BaseVMName SCCVMTEST567 -MountPoint 'c:\ClusterStorage\primary_csv_efl2' -Verbose
VERBOSE: Starting New-Clone
VERBOSE: Processing New-Clone Inputs ...
VERBOSE: Proceeding with New-Clone. The user confirmed the Input parameters.
VERBOSE: SCCVMM01:Starting Clone operation
VERBOSE: SCCVMM01:Retrieving Template details from SCVMM server...
VERBOSE: SCCVMM01:Querying template information from the SCVMM host
VERBOSE: SCCVMM01:Template information query completed
VERBOSE: SCCVMM01:Querying VHD information from the SCVMM host
VERBOSE: SCCVMM01:VHD information query completed
VERBOSE: SCCVMM01:Retrieving Template details from SCVMM server done
VERBOSE: SCCVMM01:Validating VHD type...
VERBOSE: SCCVMM01:Validating VHD type done
VERBOSE: SCCVMM01:Retrieving VHD disk paths...
VERBOSE: SCCVMM01:Retrieving VHD disk paths done
VERBOSE: SCCVMM01:Validating storage system model...
VERBOSE: SCCVMM01:Validating storage system model done
VERBOSE: SCCVMM01:Validating storage system ontap version...
VERBOSE: SCCVMM01:Validating storage system ontap version done
VERBOSE: SCCVMM01:Validating storage system LUN OSType...
VERBOSE: SCCVMM01:Validating storage system Lun OSType done
VERBOSE: SCCVMM01:Validating existence of flex clone license...
VERBOSE: SCCVMM01:Validating existence of flex clone license done
VERBOSE: SCCVMM01:Quering the existing VM names for the target host SCCHVCL01N1
VERBOSE: SCCVMM01:VM query completed for target host SCCHVCL01N1
VERBOSE: SCCVMM01:Checking the size of destination lun(s)
VERBOSE: SCCVMM01:Creating new lun 'scceastfl2:/vol/ContentManagementEast_Vol/scchvcl01_primary_library_clone_1' of
VERBOSE: Clone operation failed to complete. Error The pipeline has been stopped.
New-Clone : Operation failed. Exiting progress message processing. Status An error occured while processing the Create
The requested name is valid, but no data of the requested type was found
At line:1 char:10
+ new-clone <<<< -Server scchvcl01 -VMMServer sccvmm01 -Template 'Standard Test EFL2' -BaseVMName SCCVMTEST567 -MountP
oint 'c:\ClusterStorage\primary_csv_efl2' -Verbose
+ CategoryInfo : InvalidArgument: (NetApp.SystemCenter.ScCloneCmdlet:ScCloneCmdlet) [New-Clone], Exceptio
+ FullyQualifiedErrorId : NetApp.SystemCenter.ScCloneCmdlet
I got a response from the dev team; can you please confirm something?
Is this a library share on a pass-through LUN? It's something that hasn't been tested and is currently not supported. Can you try using a library share in a different location that is not on a pass-through disk?