Active IQ Unified Manager Discussions
My customer is using Provisioning Manager (via Ops Mgr 3.8) to provision storage in a multistore environment. They have a dataset created that contains a single volume and a single qtree within that volume. When they try to add more qtrees to this volume via the Provision Storage option, Provisioning Manager creates a new volume by the name of <dataset>_1 instead of just adding the qtree to the existing volume named <dataset>.
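To make that concrete, here is what we expect versus what we actually get (the qtree name below is just a placeholder):
/vol/nv_dochaload2p_n01ora1_nosnap/<new_qtree>     <- expected: qtree added to the existing volume
/vol/nv_dochaload2p_n01ora1_nosnap_1/<new_qtree>   <- actual: a brand new volume is created instead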
Here are the dataset specifics:
dfpm dataset list -x nv_dochaload2p_n01ora1_nosnap
Id: 131661
Name: nv_dochaload2p_n01ora1_nosnap
Policy:
Provisioning Policy: ThickNAS_nosnap
Resource Pools: eg-nasprod-e02
vFiler: prod-ecom-e0057.westlan.com
Description: PP06584
Owner:
Contact:
Volume Qtree Name Prefix:
DR Capable: No
Requires Non Disruptive Restore: No
Export Protocol: nfs
NFS Protocol Version: v3
Disable setuid: 1
Anonymous Access UID: 0
Read-only Hosts: None
Read-write Hosts: 1.1.1.1
Root-access Hosts: None
Security Flavors: sys
This is what we see in the conformance log file when we try to provision a new qtree for this dataset:
Apr 21 10:21:01 [dfmserver:DEBUG]: Thread 0xe77f2ba0: ProvisioningChecker: 1 provisioning job(s) found.
Apr 21 10:21:01 [dfmserver:DEBUG]: Thread 0xe77f2ba0: Processing New Storage Request...
Apr 21 10:21:01 [dfmserver:ERROR]: Thread 0xe77f2ba0: Failed to select volume during provisioning
Apr 21 10:21:01 [dfmserver: INFO]: Thread 0xe77f2ba0: Dry run: action: Selected aggregate eg-nasprod-e02:aggr2 for provisioning new volume.
I see the "Failed to select volume during provisioning" message, but no details as to why it failed to select the existing volume for this qtree to be created in. Any ideas? Thanks!
Mike
Hi
Can you kindly provide the job details output for the provisioning request where a new volume was created?
dfpm job detail <job-id>
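If you do not have the job ID handy, something like the following should surface it first (please verify the exact syntax with dfpm help job on your DFM 3.8 install; the job ID is just a placeholder):
dfpm job list                # lists recent jobs, including provisioning jobs - syntax may vary by release
dfpm job detail <job-id>     # full details for the job that created the new volume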
Regards
Sharaf
Here you go:
dfpm job detail 36090
Job Id: 36090
Job State: completed
Job Description: Provision a storage container 'n01oradata1' of size 20.0 GB.
Job Type: provision_member
Job Status: success
Dataset Name: nv_dochaload2p_n01ora1_nosnap
Dataset Id: 131661
Object Name: nv_dochaload2p_n01ora1_nosnap
Object Id: 131661
Started Timestamp: 20 Apr 2010 09:40:30
Abort Requested Timestamp:
Completed Timestamp: 20 Apr 2010 09:41:44
Submitted By: u0019666
Provisioning Policy Id: 96158
Provisioning Policy Name: ThickNAS_nosnap
Provisioned Container Name: prod-ecom-e0057:/nv_dochaload2p_n01ora1_nosnap_1/n01oradata1
Provisioned Container Id: 131671
NFS Export: prod-ecom-e0057.westlan.com:/vol/nv_dochaload2p_n01ora1_nosnap_1/n01oradata1
Job progress messages:
Event Id: 7738
Event Status: normal
Event Type: job-start
Job Id: 36090
Timestamp: 20 Apr 2010 09:40:30
Message:
Error Message:
Event Id: 7739
Event Status: normal
Event Type: volume-create
Job Id: 36090
Timestamp: 20 Apr 2010 09:40:40
Message: Provision flexible volume of size 20.0 GB with space guarantee set to "volume"
Error Message:
Volume Id: 131668
Volume Name: eg-nasprod-e02:/nv_dochaload2p_n01ora1_nosnap_1
Event Id: 7740
Event Status: normal
Event Type: vfiler-storage-add
Job Id: 36090
Timestamp: 20 Apr 2010 09:40:54
Message: Successfully added flexible volume 'prod-ecom-e0057:/nv_dochaload2p_n01ora1_nosnap_1'(131668) to vFiler unit 'prod-ecom-e0057.westlan.com'(131626)
Error Message:
vFiler Name: prod-ecom-e0057
vFiler Id: 131626
Volume Name: prod-ecom-e0057:/nv_dochaload2p_n01ora1_nosnap_1
Volume Id: 131668
Event Id: 7741
Event Status: normal
Event Type: qtree-create
Job Id: 36090
Timestamp: 20 Apr 2010 09:41:02
Message: Successfully provisioned qtree n01oradata1 on volume 'prod-ecom-e0057:/nv_dochaload2p_n01ora1_nosnap_1'(131668)
Error Message:
Qtree Id: 131671
Volume Id: 131668
Volume Name: prod-ecom-e0057:/nv_dochaload2p_n01ora1_nosnap_1
Qtree Name: prod-ecom-e0057:/nv_dochaload2p_n01ora1_nosnap_1/n01oradata1
Event Id: 7742
Event Status: normal
Event Type: volume-option-set
Job Id: 36090
Timestamp: 20 Apr 2010 09:41:02
Message: Successfully set volume options.
Error Message:
Volume Id: 131668
Volume Name: nv_dochaload2p_n01ora1_nosnap_1
Volume Options: convert_ucode=on,create_ucode=on
Event Id: 7743
Event Status: normal
Event Type: snapshot-reserve-resize
Job Id: 36090
Timestamp: 20 Apr 2010 09:41:20
Message: Successfully set snap reserve for volume 'prod-ecom-e0057:/nv_dochaload2p_n01ora1_nosnap_1'(131668) to 0 percent.
Error Message:
Volume Id: 131668
Volume Name: prod-ecom-e0057:/nv_dochaload2p_n01ora1_nosnap_1
Event Id: 7744
Event Status: normal
Event Type: quota-set
Job Id: 36090
Timestamp: 20 Apr 2010 09:41:25
Message: Set Tree Quota = 20.0 GB
Error Message:
Qtree Id: 131671
Qtree Name: prod-ecom-e0057:/nv_dochaload2p_n01ora1_nosnap_1/n01oradata1
Tree Quota: 20.0 GB
Event Id: 7745
Event Status: normal
Event Type: nfsexport-create
Job Id: 36090
Timestamp: 20 Apr 2010 09:41:44
Message: Export qtree QtreeToBeProvision-nv_dochaload2p_n01ora1_nosnap over NFS.
Error Message:
Qtree Id: 131671
Qtree Name: prod-ecom-e0057:/nv_dochaload2p_n01ora1_nosnap_1/n01oradata1
NFS Export Path: prod-ecom-e0057.westlan.com:/vol/nv_dochaload2p_n01ora1_nosnap_1/n01oradata1
Event Id: 7746
Event Status: normal
Event Type: job-end
Job Id: 36090
Timestamp: 20 Apr 2010 09:41:44
Message: Provisioned volume prod-ecom-e0057:/nv_dochaload2p_n01ora1_nosnap_1 (131668). Provisioned qtree prod-ecom-e0057:/nv_dochaload2p_n01ora1_nosnap_1/n01oradata1 (131671). Created NFS export prod-ecom-e0057.westlan.com:/vol/nv_dochaload2p_n01ora1_nosnap_1/n01oradata1.
Error Message:
Hi
Thanks for the job details
My previous post did not include all of the information I meant to post.
- From other posts in this thread, I just wanted to confirm that the first volume in the dataset was created by Provisioning Manager. If it was not, then as discussed there is an RFE for provisioning on imported volumes.
The following information would be helpful if it is not an imported volume.
- If the first volume was created by Provisioning Manager, were the job details you provided for that first provisioning job, or for the second job where a new volume was created instead of a new qtree in the existing volume? I could not see any failures or retry messages in the job details provided; the job details we actually want are for the second provisioning job.
- Was the new volume (for the second provisioning job) created on a different aggregate than the previous volume? If so, there might have been a problem resizing the earlier volume when provisioning the new qtree.
- Another point: was the previous volume taken offline or restricted, or were any parameters changed so that it no longer conforms to the policy? Is the dataset conformant?
You can also get the output of "dfpm dataset conform -D <dataset>".
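Either the dataset name or its numeric ID should work, for example with the dataset from your earlier output:
dfpm dataset conform -D nv_dochaload2p_n01ora1_nosnap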
- You had also mentioned that there was a crash at the end of the provisioning job. Can you please file a BURT for it?
Regards
Sharaf
Sharaf,
Sorry for my delayed response on this. The job details I provided were for the 2nd provisioning job; evidence of this is that the job creates a volume with "_1" appended to the name. The new volume was not created on a different aggregate - it was created in the same aggregate as the initial volume.
I'm not getting any valid output from dfpm dataset conform -D:
dfpm dataset conform -D 131661
Dataset dry run results
----------------------------------
No dry run results available.
dfpm dataset conform -D nv_dochaload2p_n01ora1_nosnap
Dataset dry run results
----------------------------------
No dry run results available.
We already have a burt open for the issue that is causing crashes - burt 399012.
The conformance log tells us that it "Failed to select volume during provisioning". It would be nice if Prov Mgr actually logged the reason for the failure, instead of just noting that it failed! Thanks,
Mike
I'm pretty sure Prov Mgr can't use volumes that were created outside of it (e.g. pre-existing volumes imported into a dataset)... it can only provision into the volumes it creates itself.
This could be due to one of the reasons mentioned below.
1. If this volume was imported into the dataset, then further provisioning will not use it, since it was not created by Provisioning Manager. Provisioning Manager will still apply all of the dataset's provisioning policy settings to it, such as snap autodelete, volume autogrow, and enabling dedupe.
2. If there is a name conflict between qtrees. For example, say the first provisioning request is to create a qtree named qt1 in the dataset ds1; Provisioning Manager will create a volume named ds1 and create the qtree qt1 inside it. If a second provisioning request asks for a qtree named qt1 again, then to work around the ONTAP restriction that two qtrees in the same volume cannot share a name, Provisioning Manager will create another volume named ds1_1 and create the qtree qt1 there, as sketched below.
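To sketch case 2 with made-up names, the two requests end up like this:
/vol/ds1/qt1       <- first request: volume ds1 is created with qtree qt1 inside it
/vol/ds1_1/qt1     <- second request for qt1: a new volume ds1_1 is created to avoid the qtree name clash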
Regards
adai
This particular volume was created from within Prov Mgr, but my customer has many volumes that were not created within Prov Mgr and need to be imported with the ability to provision future qtrees in that same volume. Is there some kind of workaround to get Prov Mgr to recognize these imported volumes such that they can be used for future provisioning? If not, there needs to be! Thanks,
Mike
Can you get the output for the following command?
dfpm dataset get <dataset-name>
Is there some kind of workaround to get Prov Mgr to recognize these imported volumes such that they can be used for future provisioning? If not, there needs to be!
The answer is no today, but there are requests from customers to do so.
Please add your customer to the already existing RFE for the same.
Now coming back to your question: if it was not case 1, was it case 2 as I mentioned in my previous post?
Regards
adai
dfpm dataset get nv_dochaload2p_n01ora1_nosnap
Allow custom volume settings on provisioned volumes: No
Enable periodic write guarantee checks on SAN datasets: Yes
What is the RFE number for the ability to provision on imported volumes?
Case #2 does not apply here. I'm told by the customer though that Operations Manager crashed near the end of the initial dataset creation, so it is possible that something in that process did not finish correctly even though it appeared to complete.
Thanks,
Mike
The only RFE I know of is the one that I submitted a few weeks ago: 407474
Case #2 does not apply here. I'm told by the customer though that Operations Manager crashed near the end of the initial dataset creation, so it is possible that something in that process did not finish correctly even though it appeared to complete.
After some logging and investigation we found that this is due to the server crash.
Provisioning Manager did not record that this volume was provisioned by it.
So this case comes down to Prov Mgr thinking that this volume is not managed by it.
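One way to cross-check which members Provisioning Manager believes it manages is to list the dataset members; if I remember the syntax right it is the following (please confirm with dfpm help dataset on your install):
dfpm dataset list -m nv_dochaload2p_n01ora1_nosnap     # -m should list the member volumes/qtrees - flag is from memory, not verified here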
Regards
adai