Cmdlets - SSP 2.0

Hello,

I'm running the following scenario at the moment:

  • Server Core 2008 R2 w/ Hyper-V
  • System Center Virtual Machine Manager 2008 R2 SP1
  • System Center SSP 2.0 SP1 (installed on SCVMM server)
  • System Center Operations Manager 2007 R2 Cu4
  • NetApp filer - not sure of the exact version because the storage guy has just left, but he reviewed the requirements and said it was fine


Standard SSP 2.0 provisioning is working fine using the default scripts and templates via SCVMM.

On the SCVMM/SSP server I have the OnCommand Cmdlets installed and added to the host PowerShell profile. I have used the NetApp scripts provided and added them to the customised actions in SSP. The SCVMM server has a library share with a generalised VHD and template. All LUNs have been created on the same volume, including the library share that hosts the VHD. For info - the library share is a pass-through disk.
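For reference, the profile entries look roughly like this - I'm assuming the Data ONTAP PowerShell Toolkit module name here, and 'filer01' is a placeholder for the actual controller:

    # Load the toolkit and connect to the filer so the ONTap* scripts
    # can run cmdlets against it ('filer01' is a placeholder).
    Import-Module DataONTAP
    Connect-NaController filer01 -Credential (Get-Credential)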

When creating a VM in SSP and specifying the NetApp ONTapCreateVM.txt and ONTapCreateVMLocked.txt scripts, I get the following error:

Operation failed. Exiting progress message processing. Status TemplateCloneEngine::Clone: ValidateVHDDiskPaths failed reason Validate VHD path found that following VHD paths are not on NetApp LUN L:\TemplateVHDs\disk1.vhd

The job runs for approximately 5 seconds before the error is generated.

I have attached the WebService.log file.

I appreciate I may not have covered the environment in much detail, but feel free to ask and I'll give more information.

Kind Regards

Re: Cmdlets - SSP 2.0

Hi Richard,

I was able to find a few similarities between the errors and logs and some internal burts. I will see if we can get more details on this, but it looks like it may be due to DNS resolution, IP configuration, or something of that nature. I've actually seen a similar error in the previous 2.1.1 version, which turned out to be an issue with the location of my library share on an SCVMM VM. Not sure if this is the same thing, but I will request more info and get back to you.
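In the meantime, a couple of quick checks worth running on the SCVMM/SSP box - 'filer01' below is a placeholder for your controller's hostname:

    # Confirm the controller resolves and responds from the SSP server.
    nslookup filer01
    Test-Connection filer01 -Count 2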

Thanks for the feedback

Re: Cmdlets - SSP 2.0

Thanks Watan - appreciate that. I'll be sure to keep this thread updated once the issue is resolved.

Re: Cmdlets - SSP 2.0

Is L:\ on a NetApp LUN?

Re: Cmdlets - SSP 2.0

Yes - sorry, I probably should have clarified that. It is most certainly on a NetApp LUN. Perhaps worth noting that the LUN is attached to the SCVMM server as a pass-through disk, and has a single share configured (permissions are OK) with the template VHDs stored there. The location I am attempting to clone to is in the same NetApp volume.
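For what it's worth, this is roughly how I double-checked the disk from the SCVMM server - the volume name is a placeholder, and it assumes a toolkit connection to the controller is already open:

    # Windows usually reports a NetApp LUN with "NETAPP" in the model string.
    Get-WmiObject Win32_DiskDrive | Select-Object Model, Size, DeviceID

    # Toolkit view of the LUNs on the volume in question.
    Get-NaLun | Where-Object { $_.Path -like "/vol/vm_vol/*" }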

Re: Cmdlets - SSP 2.0

I think we only support the clone process if it's a VHD, as we can't see inside the pass-through LUN, but I will get somebody to chime in. This sounds similar to the issue I had in my environment. Also, just to be sure, we had an issue in 2.1.1 where a restart of the SSP service was required - can you please confirm? A quick way to bounce the service is sketched below the bug link.

http://now.netapp.com/NOW/cgi-bin/bol?Type=Detail&Display=459525
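If a restart is needed, something like this should do it - I'm matching on the display name rather than guessing the exact service name on your install, so check the output of Get-Service first:

    # Restart the Self-Service Portal service (verify the display name first).
    Get-Service | Where-Object { $_.DisplayName -like "*Self-Service Portal*" } |
        Restart-Service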

Re: Cmdlets - SSP 2.0

Ah.  That is most likely your problem.

A pass-through disk is perceived by Windows as a physical disk. The guest OS where you are running the cmdlets is not aware that this is a NetApp LUN and cannot perform the actions it needs to perform. Try mounting a guest LUN (via iSCSI) and running rapid provisioning against the guest LUN.
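Roughly, the steps look like this - every name, IQN, and path below is a placeholder, and I'm going from memory on the toolkit parameter order, so verify with Get-Help before running anything:

    # Inside the guest: point the Microsoft iSCSI initiator at the filer.
    iscsicli QAddTargetPortal 10.0.0.50
    iscsicli ListTargets
    iscsicli QLoginTarget iqn.1992-08.com.netapp:sn.12345678

    # Against the filer (toolkit cmdlets, after Connect-NaController):
    # create an igroup for the guest's IQN and map a LUN to it.
    New-NaIgroup guest_ig iscsi windows
    Add-NaIgroupInitiator guest_ig iqn.1991-05.com.microsoft:guestvm
    Add-NaLunMap /vol/vm_vol/guest_lun guest_ig

Once the guest sees the LUN, bring it online and format it in Disk Management as usual.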

Alex

Re: Cmdlets - SSP 2.0

This would require the iSCSI network ports to be converted to virtual switches so that the Hyper-V hosts and the guest virtual machines could both use the ports on the data network (a vNIC placed within the host OS and vNICs added to the library server, both hanging off the virtual switch). Can you foresee any issues with this, and is this something that you have seen before?
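For clarity, I was picturing something along these lines with the SCVMM cmdlets - server, host, and adapter names are placeholders, and the parameter names are from memory, so treat it as a sketch rather than a tested script:

    # Bind an external virtual switch to the iSCSI NIC so the parent
    # partition and the guests share the data network.
    Add-PSSnapin Microsoft.SystemCenter.VirtualMachineManager
    Get-VMMServer -ComputerName "scvmm01"
    $vmHost = Get-VMHost -ComputerName "hyperv01"
    $nic = Get-VMHostNetworkAdapter -VMHost $vmHost |
               Where-Object { $_.Name -like "*iSCSI*" }
    New-VirtualNetwork -Name "iSCSI-vSwitch" -VMHost $vmHost -VMHostNetworkAdapters $nic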

Re: Cmdlets - SSP 2.0

Yes, this works. 

However, the recommendation from MSFT and NetApp is to have dedicated NICs for iSCSI. This is to ensure that you don't cause performance issues by sharing the same NIC.

For a lab environment you will be just fine. However, in production I would prefer to add more NICs to the server if possible.

Alex

Re: Cmdlets - SSP 2.0

The NICs are the ones currently used just for iSCSI; we have two separate paths to each server, each with its own subnet/VLAN, patched to switches used just for Layer 2 traffic and connected directly to the filers (Server<->Switch<->Filer). Part of the switch is segregated for CSV/Migration, but that is on a separate NIC pair within the server.

So with the NICs being used just for iSCSI the volume traffic would be the same (Just host vs. Host and Guests) as we would be changing an iSCSI host mount disk passed to the VM as a pass through disk, to a disk directed mounted via iSCSI in the guest VM; the only difference I can see would be the overhead of the vSwitch. Should this be seem as a risk within a production environment?