VMware Solutions Discussions

VSC created datastore not available for backup

KUNKELMA_CUZ
24,533 Views

I created two NFS datastores using VSC and migrated servers to them, but when I went to VSC to create a backup job for them, the new datastores are not available in the entities list. When looking at the NFS datastores in the "Storage Details - NAS" view, the Storage Status shows as "Offline" (although the servers running on the datastores are working just fine). Under "Capacity" in that same view, it also reads "Volume is OFFLINE."

I've tried refreshing the volume scan a couple times, but it doesn't change the status and they still don't show up in the entities drop-down.

Any recommendations are greatly appreciated.

1 ACCEPTED SOLUTION

KUNKELMA_CUZ
24,532 Views

The issue was tied to the volume name. It was renamed on the filer, but there were no warnings about the NFS mount change on the VMware side. The NFS connection continued to work even though the old /vol/asdf.. path was no longer valid, but VSC couldn't resolve the path the way VMware was still talking to it. The VMs running on the datastore were shut down and removed from inventory, the datastore was unmounted and remounted with the current /vol/asdf... path, and the machines were added back into inventory. Now they show up as available for backup.
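For anyone checking for the same mismatch, a rough sketch of the verify-and-remount steps from the ESX service console and the filer CLI follows. The controller name "filer01", the datastore label "nfs_ds01", and the volume paths are placeholders, not values from this thread:

    # On the ESX host: list NFS mounts and note the share path in use
    esxcfg-nas -l

    # On the filer: confirm the volume's current name and its export
    vol status
    exportfs

    # Power off the VMs and remove them from inventory, then remount
    # the datastore against the current volume path:
    esxcfg-nas -d nfs_ds01
    esxcfg-nas -a -o filer01 -s /vol/current_volume nfs_ds01

After adding the VMs back into inventory, rerun the storage discovery in VSC so the datastore shows up in the backup entities list.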


14 REPLIES

thearle
24,511 Views

Have you set up the controller configuration in the (SMVI) backup area? You have to do this separately for the SMVI component of VSC. Also, if you are using vFilers, you need to talk to the vFiler directly.

KUNKELMA_CUZ
24,511 Views

Yes, we have been doing SMVI backups for several months now.

keitha
24,511 Views

I can see how thearle was confused. What does SMVI (the Backup and Recovery plug-in for VSC) show as the status of the datastore? Any update on this?

Keith

KUNKELMA_CUZ
24,512 Views

No, the problem persists. As I said in the original post, the status of the datastore shows as "offline," although it is fully functional as far as vSphere is concerned.    

keitha
24,512 Views

OK, but you didn't answer the question about SMVI. I think the issue is with the VSC portion of the plug-in, but I wanted to confirm whether the SMVI portion also thinks there is a problem with the volume or whether it sees it as OK.

Keith



yadav516361
24,511 Views

Hi,

Does anybody have an idea about the error below? I get it when I try to create an NFS datastore using VSC:

    DatastoreSpec datastoreSpec = new DatastoreSpec();
    // Tried with both an aggregate name and a volume path:
    datastoreSpec.setAggrOrVolName("aggr0");
    // or: datastoreSpec.setAggrOrVolName("/vol/voltest_003");
    datastoreSpec.setSizeInMB(2048L);
    datastoreSpec.setThinProvision(true);
    datastoreSpec.setVolAutoGrow(true);
    datastoreSpec.setVolAutoGrowInc(1024L);
    datastoreSpec.setVolAutoGrowMax(4096L);
    datastoreSpec.setProtocol("NFS");
    datastoreSpec.getDatastoreNames().add("NewNFSDatastore");
    datastoreSpec.setController(controllerSpec);
    datastoreSpec.setTargetMor(targetMoref);

Output:

There has been a SOAP error. Please check the log. java.lang.IllegalArgumentException: A volume, aggregate, or storage service name is required.

RCU API returned null

Please reply ASAP.

Thanks in advance.

costea
24,511 Views

This question about the API should probably be posted in a separate thread. It looks like you are using a later version of the API but have not updated your client-side source. In the latest version, the setAggrOrVolName method has been changed to setContainerName.
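For reference, the snippet from the earlier post rewritten against the renamed method would look roughly like the following. This is a sketch only, assuming VSC 2.1 generated client classes; controllerSpec and targetMoref are the same placeholders as in the original post:

    DatastoreSpec datastoreSpec = new DatastoreSpec();
    // setAggrOrVolName was renamed to setContainerName in the newer API
    datastoreSpec.setContainerName("aggr0");
    datastoreSpec.setSizeInMB(2048L);
    datastoreSpec.setThinProvision(true);
    datastoreSpec.setVolAutoGrow(true);
    datastoreSpec.setVolAutoGrowInc(1024L);
    datastoreSpec.setVolAutoGrowMax(4096L);
    datastoreSpec.setProtocol("NFS");
    datastoreSpec.getDatastoreNames().add("NewNFSDatastore");
    datastoreSpec.setController(controllerSpec);
    datastoreSpec.setTargetMor(targetMoref);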

yadav516361
24,511 Views

Hi Costea,

I have already posted this question in a separate discussion, but I did not get any response to it. Sorry if I made any mistake; this is a very urgent requirement for me.

Where can I get the latest API that supports the setContainerName() method for creating a datastore? Can you please help me out with this?

costea
23,150 Views

What version of VSC are you running?  If you upgrade to VSC 2.1 and re-generate your client side classes, you will see this method change.
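In case it helps, client classes for a SOAP API like this are normally regenerated from the service WSDL, for example with the JDK's wsimport tool. This is only a sketch; the endpoint URL and package name below are placeholders, so point wsimport at whichever WSDL your existing classes were generated from:

    wsimport -keep -p com.example.vsc.client https://<vsc-host>:8143/kamino/public/api?wsdl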

yadav516361
23,150 Views

I am using RCU v3.0. Is this method not available in that version?

costea
23,150 Views

The version you claim to be using does not match the error message you reported. RCU 3.0 throws the following error message: "A volume or aggregate name is required." This was changed in VSC 2.0.1 to "A volume, aggregate, or storage service name is required.", which is what you posted above. The other change was the modification to the DatastoreSpec object to replace setAggrOrVolName with setContainerName. If you regenerate your client-side classes, you will see the method name change.

yadav516361
23,150 Views

Yes, I got the setContainerName() method. But when I executed the API, it threw an exception: Error: Exception in resize. Invalid volume name: '' (errno=13044). What is the reason for this exception?

costea
23,150 Views

How did you set it?  You have to specify it like so: spec.setContainerName("aggrName")
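Given the '' in the error text above, one likely cause is that the container name reached the backend as an empty string. For comparison, the two forms that appeared earlier in this thread, using the same placeholder names:

    // Fails with errno=13044: container name left unset or empty
    // datastoreSpec.setContainerName("");

    // An aggregate name (presumably for RCU to create the volume in):
    datastoreSpec.setContainerName("aggr0");

    // Or the path of an existing volume:
    datastoreSpec.setContainerName("/vol/voltest_003");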
