The only way to do this is to provision a new volume in 7-Mode and use a client-side copy tool (robocopy, rsync) to copy the data back.
I would recommend doing everything you can to stay on cDOT at this point.
... View more
Appreciate the response, but we need NetApp to address these issues.
We attempted to use the REST API to dump DAR files of the workflows by UUID, planning to upload them to our Git repository, but the REST API didn't work at all. We may open a case and escalate the issue.
... View more
We are just trying to streamline automation of DAR file exports for backup, because of the zombie workflow we encountered (see the other thread).
The REST API curl commands don't return any data.
curl -X GET --header 'Accept: text/plain' 'https://server/rest/dars'
^ doesn't work at all
curl -X GET --header 'Accept: application/x-7z-compressed' 'https://server/rest/dars/846fceb9-5e06-4848-bfa5-3389f24c0d48'
^^ UUID provided
The error is 'no match for accept header'.
Has anyone worked on scripting DAR file exports with the REST API?
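For context, this is the rough shape of the export script we have in mind, as a hedged sketch only: the host is a placeholder, authentication is omitted, and the endpoints are just the ones tried above, none of it verified against a working WFA instance.

```python
# Rough sketch of scripting DAR exports over the WFA REST API. The BASE host
# is a placeholder, auth handling is omitted, and nothing here is verified
# against a real WFA server -- it only mirrors the curl attempts above.
import urllib.request

BASE = "https://server/rest/dars"  # placeholder WFA host


def dar_url(uuid):
    """Per-workflow DAR download URL, matching the curl attempt above."""
    return f"{BASE}/{uuid}"


def download_dar(uuid, dest):
    """Fetch one workflow's DAR archive to a local file (e.g. to commit to Git)."""
    req = urllib.request.Request(
        dar_url(uuid),
        headers={"Accept": "application/x-7z-compressed"},
    )
    with urllib.request.urlopen(req) as resp, open(dest, "wb") as out:
        out.write(resp.read())


# Intended usage (not run here, since the endpoint is currently failing):
# download_dar("846fceb9-5e06-4848-bfa5-3389f24c0d48", "workflow.dar")
```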
... View more
We are still seeing the 'get workflow' failure after editing a workflow, and now the workflow has become completely 'zombied'.
We cannot open or edit it, and basically can't get it back.
Anyone having these issues?
... View more
OK, here is the query our team has written; we could use some advice on it.
SELECT
cv.id,
cv.name AS 'name',
cv.used_size_mb AS 'used',
cv.size_mb AS 'total_size',
vserver.name AS 'vserver.name',
cluster.primary_address AS 'vserver.cluster.primary_address',
(((SELECT
sum(qtree.disk_limit_mb)
FROM
cm_storage.qtree
WHERE
qtree.volume_id = cv.id )/cv.size_mb)*.5 + (cv.used_size_mb/cv.size_mb)) AS 'Averaged'
FROM
cm_storage.volume cv,
cm_storage.vserver,
cm_storage.cluster
WHERE
cv.security_style = '${security_style}'
AND cv.used_size_mb / cv.size_mb < .90
AND cv.name LIKE '${VolPrefix}%'
AND vserver.id = cv.vserver_id
AND cluster.id = vserver.cluster_id
AND vserver.name NOT LIKE '%_dr'
AND (
cv.size_mb * 2 > (
SELECT
sum(qtree.disk_limit_mb)
FROM
cm_storage.qtree
WHERE
qtree.volume_id = cv.id
)
OR cv.used_size_mb = 0
)
ORDER BY
Averaged ASC
We have to run the same subquery twice to reference it.
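One way to avoid running that subquery twice is to aggregate the qtree limits once in a derived table and join to it. Here is a minimal sketch of that shape, checked against SQLite with a simplified stand-in schema; the real cm_storage tables and the security_style/name/vserver filters are omitted.

```python
# Sketch: compute the qtree-limit sum once in a derived table instead of
# repeating the correlated subquery. Simplified stand-in schema; the real
# cm_storage filters (security_style, name prefix, vserver join) are omitted.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE volume (id INTEGER, name TEXT, used_size_mb REAL, size_mb REAL);
CREATE TABLE qtree  (volume_id INTEGER, disk_limit_mb REAL);
INSERT INTO volume VALUES (1, 'util_vol1', 2000, 10000);
INSERT INTO qtree  VALUES (1, 3000), (1, 1000);
""")

rows = con.execute("""
SELECT
    v.name,
    (q.committed_mb / v.size_mb) * 0.5
      + (v.used_size_mb / v.size_mb)         AS averaged
FROM volume v
JOIN (SELECT volume_id, SUM(disk_limit_mb) AS committed_mb
      FROM qtree GROUP BY volume_id) q
  ON q.volume_id = v.id
WHERE v.used_size_mb / v.size_mb < 0.90
  AND (v.size_mb * 2 > q.committed_mb OR v.used_size_mb = 0)
ORDER BY averaged
""").fetchall()

print(rows)
```

One caveat: the inner join drops volumes with no qtrees at all, unlike the correlated subquery; a LEFT JOIN with COALESCE(q.committed_mb, 0) would keep them, closer to the OR used_size_mb = 0 branch of the original.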
For reservations, we went into the commands and defined these parameters:
[parameter(Mandatory=$false, HelpMessage="Disk Limit")]
[long]$disk_limit_mb,
[parameter(Mandatory=$false, HelpMessage="Disk Soft Limit")]
[long]$disk_soft_limit_mb
We added the parameter definitions and parameter mappings, and then in the reservation tab
added this:
'${disk_limit_mb}' AS disk_limit_mb, -- Default value for disk limit, soft disk limit and filelimit is 0
'${disk_soft_limit_mb}' AS disk_soft_limit_mb,
Now that updates the reservation table.
Thanks for checking our query
... View more
Joele,
Yes, that would also be helpful. We will share ours shortly and see what optimizations can be done.
We have figured out the reservation cache as well; we just need to adjust our query.
... View more
^^
Nice PowerShell. Yeah, PowerShell is always easier, and we mocked up code in PowerShell pretty easily. We need to use WFA because we plan on sending REST calls from a ServiceNow front end to our customers.
For your size conversion, you can use NetApp's built-in ConvertTo-FormattedNumber cmdlet. That might do the trick as well.
... View more
^^
That would be great. We have working queries, but we are not happy with the time sync between WFA and OCUM, and with reservation data that isn't updating, so we are hoping to have a query that updates the reservation cache.
... View more
We noticed the same thing when we were testing 4.2RC1. We then tried starting the services with a domain user and it went super fast.
Can you test that scenario as well to confirm?
... View more
Nick,
I should have been clearer; I apologize if it was ambiguous.
So, we are deploying what we are terming "general purpose" requests under 3 TB to a dedicated utility volume.
For example, a cluster will have four utility volumes. As a request comes in, WFA calculates all the committed quotas and the volume size used, and then decides which volume gets the new qtree and quota. That is where we are heading. Our dev guys have made a lot of progress on this in the past couple of days, via two methods: Java or SQL.
The challenge we are running into now is with the reservation database.
Anything over 3 TB gets a dedicated volume. We chose this design because of cluster volume limits, and because we realized our provisioning followed an 80/20 rule, meaning 80% of our volumes were under 3 TB. Our utility volumes start at 10 TB with an autogrow max of 20 TB.
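As a rough illustration only, the placement decision described above might look something like this; the 2x oversubscription cap, the scoring weights, and all names are assumptions for the sketch, not the actual WFA command logic.

```python
# Hypothetical sketch of the utility-volume placement described above: pick
# the least-loaded utility volume whose committed quotas can still absorb the
# new qtree's quota. The 2x oversubscription cap and the scoring weights are
# illustrative assumptions, not the actual WFA logic.
def pick_utility_volume(volumes, request_mb, oversub_factor=2.0):
    """volumes: dicts with name, size_mb, used_mb, committed_quota_mb."""
    candidates = []
    for v in volumes:
        if v["committed_quota_mb"] + request_mb <= v["size_mb"] * oversub_factor:
            # Weight committed quota at 0.5 alongside the actual used ratio,
            # mirroring the 'Averaged' column in the SQL query above.
            score = ((v["committed_quota_mb"] / v["size_mb"]) * 0.5
                     + v["used_mb"] / v["size_mb"])
            candidates.append((score, v["name"]))
    if not candidates:
        return None  # fall back to provisioning a dedicated volume
    return min(candidates)[1]


vols = [
    {"name": "util1", "size_mb": 10_485_760, "used_mb": 4_000_000, "committed_quota_mb": 18_000_000},
    {"name": "util2", "size_mb": 10_485_760, "used_mb": 1_000_000, "committed_quota_mb": 5_000_000},
]
print(pick_utility_volume(vols, 3 * 1024 * 1024))  # a 3 TB request
```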
We also feel this will help with SVM-DR and mirrored volume limits.
I will gladly take feedback, and any info on forcing our commands into the reservation database would be great.
Thanks
... View more
Sorry, I read it fast and it came across as "please write these for me."
Depending how much you provision, WFA might not be worth it, and you can do whatever you want in PowerShell.
... View more
This is a community where we try to assist others, but we cannot write scripts for people, especially a complete script. The user needs to have some skin in the game for me.
I can get you started, but you have to put in some effort yourself.
What you're asking for is very easy to do in PowerShell, and even easier with WFA.
You want a CSV file as input with your header fields.
Start by looking at Get-NcHelp or Show-NcHelp.
I will get you started with volume name
name junction aggr size
vol1 /vol1 aggr1 10g
$vols will represent the rows above:
$vols = import-csv createvols.csv
$vols | % {
New-NcVol -Name $_.name -Aggregate $_.aggr -JunctionPath $_.junction -Size $_.size -SpaceReserve none
}
That's a foreach loop iterating through your array.
You can start with that. PowerShell is not intimidating once you get the basics.
... View more
Andrew -
I agree with your statement, and I'm aware of the security login approach for sure.
So, is it safe to assume that all tools (OCUM, PS) default to HTTPS?
... View more
We are working on workflows where the end-state goal is to have WFA compile the quotas on our utility volumes, check for oversubscription, and then choose the proper volume.
Has anyone done something similar? If so, please share your experiences.
The thought process is: pull each volume's used size, total size, and committed quotas, and go from there.
Any insight would be great.
Thanks
... View more
I'm just trying to understand whether we need to enable the HTTP servers (system services web show),
or whether we can just give the privileged account access via security login with ontapi.
Any documentation link on this would be great.
Thanks
... View more
I don't think I'm following, because we do this without issue. If you are running in DR, you need to stop the vserver in DR and then fail back. Maybe you can post your CLI error here.
... View more
Physical servers would be great, but cost-prohibitive. We have a very large file-services environment, in the petabytes, and all our vscan pods are virtual.
... View more