VMware Solutions Discussions

Feedback requested for Virtual Storage Console 2.0 Users

robertim
35,197 Views

We're looking for feedback from customers who have downloaded Virtual Storage Console and are using it. Any and all comments and feedback are welcome.

128 REPLIES

fishmannn
6,684 Views

Does anyone know why you cannot provision storage with VSC on ESX hosts that are in maintenance mode? We just updated to 2.0.1 P1 and still see this. I see no reason why you shouldn't be able to add storage to a host in MM.

The more I use VSC, the more I don't think it is ready for prime time, at least the provisioning piece. It is difficult to keep what it sees for LUNs in sync with the filers. Many times I hit UPDATE and it goes into an endless DISCOVERING NETWORK COMPONENTS. How do you stop one of these runaway updates? Even when it does work, the Storage Details - SAN view is not accurate with what is on the filers. I like the IDEA of being able to provision datastores to an entire cluster, and the tool is supposed to put all the ESX initiators into a group on the filer; in practice I'm not able to get this to work. Why can I not give a new volume a separate name from a new LUN in VSC? Note we are using all FCP-based storage, which isn't as full-featured as NFS, unfortunately.

martialalmac1
6,684 Views

I'm having the same error messages in event viewer every 30 minutes. Did anyone else get a resolution?

Incidentally, the backups seem to be working fine, and apart from having to register the install a couple of times to get it working, I didn't have too many problems with the VSC install.

Source:        SMVI
Date:          25/01/2011 06:00:00
Event ID:      4096
Task Category: Error
Level:         Error
Keywords:      Classic
User:          N/A
Computer:      vcentre.canmore-housing.org.uk
Description:
335966317 [DefaultQuartzScheduler_Worker-1] ERROR com.netapp.smvi.security.authentication.vsphere.VSphereAuthenticator - Failed to authenticate vSphere sessionId %s

cgrossman
6,256 Views

Installed VSC 2.0.1 and SnapDrive 6.3 on the virtual W2K8 machines.

VSC could be great, but it causes major hangups. It installs OK and runs OK, but it messes things up when it takes a snapshot. Here's the scenario.

W2k8 server running SnapDrive 6.3.  System partition (Drive C) is a VMDK inside an NFS datastore. Data partition (Drive E) is an iSCSI LUN, mapped using SnapDrive 6.3 and using the ESX iSCSI adapter.  This creates an RDM (Raw Device Mapping) in ESX.  However, in order for RDM to work, it needs a mapping vmdk file on a VMFS datastore (not NFS).  So, I create a VMFS datastore called ESX_LUN_Mappings, and use it to store the mapping vmdk files for the RDM.  The virtual machine works fine in this configuration.

When VSC goes to take a snapshot, the VMware VSS provider quiesces the snapshot, but then hangs, as VSC proceeds to ADD another virtual hard drive with the same mapping as the E drive, except to the wrong datastore. In my case, it's creating/mapping it to the local-storage datastore on the ESX host. It seems to keep doing this every day, compounding the number of invalid virtual drives with incorrect mappings. Is there something wrong with VSC, or am I configuring it wrong? I opened a support case with NetApp months ago, but am not getting much response from them.

Thanks for reading.

watan
6,256 Views

Do you have a support ticket open for this issue?   Please open a case so they can start requesting logs and have the issue documented.

If you can msg or email me the case #, I can get our SnapDrive dev/QA team to take a look at this issue.

cgrossman
6,256 Views

Hi,

The case number is 2001874207.  I keep waiting for someone to get back to me.

Carlo Grossman

Senior I.T. Analyst, MCSE, CCNA

City of Turlock


keitha
6,256 Views

Carlo,

I am afraid this is a known issue with the SnapDrive VSS writer and the VMware VSS writer colliding. The workaround is to either uninstall the VMware VSS writer or not use VMware snapshots when doing VSC backups. This should not be a major problem, though, since the data needing quiescing should be in the RDM, which will be backed up with SnapDrive and SnapManager X. In fact, I don't recommend using VMware snapshots very often; for most VMs they are not needed. Not using them eliminates many of the VSS hangups that can occur.

You did know, though, that rather than an RDM you can simply use an NFS mount with a VMDK in it with SnapDrive 6.3? That way you don't need iSCSI or a VMFS volume; it can all be done with VMDKs and NFS (unless you are backing up Exchange, I guess).
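If it helps anyone hitting this, you can see whether both writers are registered inside the guest before uninstalling anything. A quick check from an elevated Windows prompt (a sketch only; "vmvss" is the usual service name for the VMware Snapshot Provider, but treat that as an assumption for your Tools version):

```
vssadmin list writers
sc query vmvss

rem To try the workaround without uninstalling VMware Tools (assumption - verify first):
sc config vmvss start= disabled
net stop vmvss
```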

Keith


cgrossman
5,992 Views

Thank you, keitha.

Being a known issue, does this mean it will soon be fixed? Isn't one of the main features of SMVI the ability to take a quiesced snapshot of a set of virtual machines? If all of the VMs have the VMware VSS provider uninstalled, or VSC/SMVI is set not to take VMware snapshots, and we're stuck taking non-quiesced snapshots, then isn't this the same as just scheduling the filer to take scheduled snapshots?

If I put the data in a VMDK in an NFS datastore, then won't VSC/SMVI back up that data as well?  I thought that I had seen that behavior, and won't that cause extra snapshots with my SD snapshot schedule?

What it's sounding like is that SD and VSC/SMVI are mutually exclusive.  I could take snapshots of everything with SD (even the VMDKs now with 6.3) and the SD VSS provider will quiesce the data and give me clean snapshots.  Or, I could use VSC/SMVI to take snapshots, and as long as I don't have SD installed, it will give me clean, quiesced snapshots.

SnapDrive Pros:

Control over which volumes get snapshotted, with a different schedule possible for each.

Possible license cost savings over VSC/SMVI

Scriptable snapshots can mount the snapshot and record all data

All of the SD tools to resize NTFS volumes, reclaim space, create new drives/luns, etc.

SnapDrive Cons:

Cannot seem to mount a snapshot without a FlexClone license

Scheduling done by scripts and scheduled timers

Snapshotting a VMDK on an NFS share snapshots the entire volume, including non-quiesced VMDKs in the same volume

VSC/SMVI Pros:

Quiesces all of the vmdks in the volume before taking a volume snapshot

Scheduling provided by a licensed product

Single-file restore possible, though a little quirky in my opinion

VSC/SMVI Cons:

Loss of SD means all of SD functions must be done manually

SFR complexity

License costs

Snapshots seem to include everything, so scheduling different data at different times is not possible.
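For what it's worth, the "scriptable snapshots" pro above can be as simple as a cron-driven wrapper around the Data ONTAP CLI. A minimal sketch, assuming a 7-Mode filer reachable over SSH; the filer name "filer1" and volume name "vol_sql_data" are hypothetical:

```shell
#!/bin/sh
# Sketch: scheduled, scripted snapshot via the Data ONTAP 7-Mode CLI.
# "filer1" and "vol_sql_data" are placeholders; adjust for your environment.

# Build the 7-Mode command string: snap create <volume> <snapshot-name>
build_snap_cmd() {
    vol="$1"
    name="$2"
    echo "snap create $vol $name"
}

# Date-stamped snapshot name, e.g. nightly_20110126
SNAP_NAME="nightly_$(date +%Y%m%d)"
CMD=$(build_snap_cmd vol_sql_data "$SNAP_NAME")

# In a real script you would run it remotely, e.g.:
#   ssh root@filer1 "$CMD"
echo "$CMD"
```

Scheduling it from cron (or a Windows scheduled task calling plink) is then just a matter of picking different volumes and times per job.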

ryan_benigno
5,992 Views

The primary reason to use SnapDrive in the guest (and RDMs in general) is to allow application-aware backup products to manage snapshots for specific applications (SnapManager for Exchange, SQL, etc.). Since SnapManager for the application is managing the snapshots, it is not necessary to perform VM-consistent snaps of those guests with SM-VI. Guests that don't manage their own snapshots can still benefit from SM-VI's VM-consistent snaps.

So you end up with two schedules in SM-VI: one that does not take VM-consistent snaps (covering VMs with RDMs or in-guest iSCSI) and one that does (covering your other VMs).

acistmedical
5,992 Views

ryan_benigno wrote:

The primary reason to use SnapDrive in the guest (and RDMs in general) is to allow application-aware backup products to manage snapshots for specific applications (SnapManager for Exchange, SQL, etc.). Since SnapManager for the application is managing the snapshots, it is not necessary to perform VM-consistent snaps of those guests with SM-VI. Guests that don't manage their own snapshots can still benefit from SM-VI's VM-consistent snaps.

So you end up with two schedules in SM-VI: one that does not take VM-consistent snaps (covering VMs with RDMs or in-guest iSCSI) and one that does (covering your other VMs).

So what should I do if I back up using SMVI per datastore, not per VM? Is there a way to exclude VMs with RDMs from that SMVI backup, or do I have to stop doing datastore backups and set up per-VM schedules?

I'm asking because, according to the SMVI Best Practices, a datastore schedule is preferable to a per-VM one.

ryan_benigno
5,992 Views

Unfortunately, individual VMs cannot be excluded in the current SMVI version. Several of us mentioned this shortcoming elsewhere in the thread - hopefully NetApp is listening!

I use a separate datastore for guests with SnapDrive so that we can schedule SM-VI at the datastore level.

acistmedical
5,946 Views

So what are the worst implications if I just turn off the VM snapshot option on this datastore, which will impact all VMs on it?

ryan_benigno
5,946 Views

During a restore, the OS volumes of these VMs would be crash-consistent - like pulling the power cord on a server. The data volumes for the VMs would be managed with SnapManager for SQL/Exchange/etc. (or scripted with SnapDrive), so you will still have app-consistent backups. Worst case is a VM that won't boot, or is otherwise corrupt, but the app data is okay. This, of course, assumes that the in-guest snapshot manager was set up properly and is doing its job. IMHO, the risk of corruption to the OS volume is pretty low when app data is on separate drives...

acistmedical
5,946 Views

But according to NetApp TR-3737, SnapManager 2.0 for Virtual Infrastructure Best Practices:

"Another recommendation for these environments is to use physical mode RDM LUNs, instead of Microsoft iSCSI Software Initiator LUNs, when provisioning storage in order to get the maximum protection level from the combined SMVI and SDW/SM solution: guest file system consistency for OS images using VSS-assisted SMVI backups, and application-consistent backups and fine-grained recovery for application data using the SnapManager applications."

So if I use a physical RDM for data, I can use SMVI for the OS and leave the VM snapshot on, so that should be OK?

BTW, this is for Exchange 2010. The OS will be on NFS VMFS and the data drives on physical RDM.

ryan_benigno
5,945 Views

Sounds great, except several people in this thread have had issues trying to do it that way.

acistmedical
5,946 Views

I will give it a try and post an update.

Thanks

amritad
5,946 Views

Hi

Is there a case filed for RDM support that I can go take a look at?

Regards

Amrita

cgrossman
5,946 Views

@acistmedical - That was my thinking; however, it isn't working for me.

@amritad - My case number is 2001874207

cgrossman
5,946 Views

What about this script?

http://communities.netapp.com/message/7148#41976

Will it get executed when SMVI commands the snapshot freeze/thaw operation?
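In case it helps, a freeze/thaw hook along those lines usually boils down to branching on the phase the backup tool passes in. A minimal sketch; the BACKUP_PHASE variable and its PRE_BACKUP/POST_BACKUP values are assumptions about SMVI's script contract, so check the docs for your version before relying on them:

```shell
#!/bin/sh
# Hypothetical SMVI backup hook (sketch only).
# BACKUP_PHASE and its values are assumptions, not confirmed SMVI behavior.

handle_phase() {
    case "$1" in
        PRE_BACKUP)
            # Quiesce the application here (e.g. pause a service) before the snapshot.
            echo "pre-backup: quiescing application" ;;
        POST_BACKUP)
            # Thaw/resume the application after the snapshot completes.
            echo "post-backup: resuming application" ;;
        *)
            echo "unknown phase: $1" >&2
            return 1 ;;
    esac
}

handle_phase "${BACKUP_PHASE:-PRE_BACKUP}"
```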

amritad
7,232 Views

Following up on case 2001874207. Will let you all know once I hear back.

Regards

Amrita

amritad
7,114 Views

I'm a little confused after reading the case. The case says you are trying to take a backup of RDM LUNs using SMVI. SMVI does not support RDM LUNs; SME should support RDM.

http://now.netapp.com/NOW/cgi-bin/bol?Type=Detail&Display=364605

Regards

Amrita
