But in Nick's write-up (http://datacenterdude.com/netapp/vsc-42-beta/), under VSC Bug Fixes, it mentions:

"Upgraded to VDDK 5.1 to support W2K12. VDDK 5.1 has support for newer versions of Windows operating systems, including Windows Server 2008 R2 SP1 and Windows Server 2012. P&C and O&M both have a dependency on the VDDK, so they have both been upgraded to the newest version in order for VSC to support the latest Microsoft OSs."
Hi,

We are running the following: vSphere 5.1 Update 1, with SSO, vCenter and SQL all on a Windows Server 2012 Datacenter server, and Data ONTAP 8.2 Cluster-Mode.

I've installed VSC and registered the plugin as normal, entered credentials into Monitoring and Host Configuration, and everything works fine except Backup and Recovery. When I click on Backup and Recovery - Backup or Restore, I'm presented with this error:

If I look in the SMVI.log file I can see that this particular API call is failing: "[ERROR] Failure in jobList API call:"

The SAN credentials being used currently have full admin access. I've tried re-installing, I've tried running VSC on another server, and I've removed IPv6 as per the documentation. Any ideas?
Excellent, thanks Vinith, that last one worked with the additional ?{$_."Volume Property Value" -eq "on"}

You might guess my next question: I want to set all the volumes that have fs_size_fixed with a value of on to off. Will the following work?

$volumes = Get-NaVol | select @{l='VolumeName';e={$_.name}},@{l='Volume Property Name';e={Get-NaVolOption $_.name | Where-Object {$_.name -eq "fs_size_fixed" -and $_.value -eq "on"} | select -ExpandProperty name}},@{l='Volume Property Value';e={Get-NaVolOption $_.name | Where-Object {$_.name -eq "fs_size_fixed" -and $_.value -eq "on"} | select -ExpandProperty value}} | ?{$_."Volume Property Value" -eq "on"}
$volumes | Set-NaVolOption fs_size_fixed off
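I'm not 100% sure that piping those custom objects into Set-NaVolOption will bind the volume name, so a plain loop might be safer. This is only a rough sketch, assuming Set-NaVolOption accepts the volume name, option key and value as -Name/-Key/-Value like the other Toolkit cmdlets; test it on a single volume first:

# Sketch: turn fs_size_fixed off only on volumes where it is currently on
Get-NaVol | ForEach-Object {
    $volName = $_.Name
    $fixed = Get-NaVolOption $volName | Where-Object { $_.Name -eq "fs_size_fixed" -and $_.Value -eq "on" }
    if ($fixed) { Set-NaVolOption -Name $volName -Key "fs_size_fixed" -Value "off" }
}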
Hi Vinith, thanks for that, we are almost there, but the output ends up listing both on and off values. If you set one of your demo volumes' fs_size_fixed to on and then run the same command, you will see that the volumes that are set to on will display <> under the Volume Property Name and Volume Property Value columns. If possible, I just want to show volumes with the on value?
Hi,

I've created the following PowerShell command to sort through our volumes and their options and produce a list of volumes that have the fs_size_fixed option set to on:

Get-NaVol | Get-NaVolOption | where {$_.Name -eq "fs_size_fixed" -and $_.Value -eq "on"}

This currently works and produces the list below, however is there a way that I can list the volume name in one of the columns?

Name                   Value
----                   -----
fs_size_fixed          on
fs_size_fixed          on
fs_size_fixed          on
fs_size_fixed          on
fs_size_fixed          on
fs_size_fixed          on
Hi Irving, I haven't really seen any posts on here where a user upgraded to a specific version and their problems went away. I read through the long list of bug fixes for 8.1.2RC2, and all the resolved issues for system panics are a little concerning for those of us on earlier releases. I'd also like to hear from anyone who experienced system problems after upgrading to 8.1.x but has since upgraded to 8.1.2RC2.
I wouldn't worry about downgrading, it's too late now I think, unless you want to involve NetApp, especially if your RLW_Upgrading process is complete. What do you mean by "you know the LUN is not aligned even though you created it with the -t vmware option"?
Thanks Peter, you are right. And actually, if you want to specify 0, you can use -RetainUtmBackups -1 or NoUtmRestore; both of these work as well.
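For reference, the failing scheduled job with only the -RetainUtmBackups value changed would then look something like this:

new-backup -Server 'EXCHANGE' -ManagementGroup 'Weekly' -BackupTruncatedLogs $False -RetainBackups 4 -RetainUtmBackups -1 -StorageGroup 'DB1','DB2','PF' -Verify -VerificationServer 'Exchange' -UseMountPoint -RemoteAdditionalCopyBackup $False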
Hi midvillanueva, we were in the exact same position. We had to switch all dedupe jobs to manual (i.e. not run them at all) until we got our alignment, aggregate/raid group layout, etc. corrected. We now only run dedupe on about 3 volumes. Any time dedupe kicks in we see a CPU spike, but latency does not seem to be that bad.
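In case it helps, a minimal sketch of what we mean by turning the schedule off on a volume (assuming 7-Mode; the volume name is a placeholder, and clearing the schedule with "-" means dedupe only runs when you start it manually):

sis config -s - /vol/<volname>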
Hi Radek, I totally agree with you. Apart from some of the responses from users on this thread where NetApp did identify one or two issues that were being hit, and their recommendation to upgrade to 8.1.1GA, we don't have much more from them beyond "follow best practices".
I'm kind of leaning towards the conclusion that any ONTAP version before 8.1 handled a "non-optimized" SAN much better than 8.1 does. 8.1 really lets you know if you have some issues, and I'm still not completely ruling out that there could be some bugs in 8.1.x.

In regards to item 5 in your list above, I've heard that with 8.1.1 this doesn't make any difference. However, we still isolate SAS and SATA to different controllers. It's also better to have one large aggregate with multiple raid groups than many smaller aggregates with only 1 raid group, because the controller can read from multiple raid groups at the same time.

Remember that when you add new disks you need to run reallocate -f -p /vol/volname, which will spread out the data across the new disks. This is run in priv set advanced mode.
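Roughly, the sequence on the controller looks something like this (a sketch only; the volume name is a placeholder, and in advanced mode the full command is reallocate start):

priv set advanced
reallocate start -f -p /vol/volname
reallocate status
priv set admin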
Have you logged a fault with NetApp to grab some perfstats, etc.? (The first thing they will ask is whether all your VMs and disks are aligned.) If so, let us know what they say. Are you using your SAN for virtual machines, dedicated LUNs, CIFS shares, etc.? How many aggregates do you have, and how many raid groups per aggregate?
Hi,

I've come across an error today while using SnapManager for Exchange 6.0.4 on Windows 2008 R2 with Exchange 2010, which results in an error as soon as the backup tries to run. The SnapDrive version is 6.4.1. The user account on the services has local admin access as well as admin access to the filer, and is the same user account running the task.

The Event Viewer produces this error log entry:

Job: new-backup -Server 'EXCHANGE' -ManagementGroup 'Weekly' -BackupTruncatedLogs $False -RetainBackups 4 -RetainUtmBackups 0 -StorageGroup 'DB1','DB2','PF' -Verify -VerificationServer 'Exchange' -UseMountPoint -RemoteAdditionalCopyBackup $False

The operation executed with the following results.
Details: An Unexpected Error occurred while executing new-backup.
Details: Cannot invoke this function because the current host does not implement it.

Stack Trace:
at System.Management.Automation.Internal.Host.InternalHostRawUserInterface.ThrowNotInteractive()
at System.Management.Automation.Internal.Host.InternalHostUserInterface.PromptForChoice(String caption, String message, Collection`1 choices, Int32 defaultChoice)
at System.Management.Automation.MshCommandRuntime.InquireHelper(String inquireMessage, String inquireCaption, Boolean allowYesToAll, Boolean allowNoToAll, Boolean replaceNoWithHalt)
at System.Management.Automation.MshCommandRuntime.DoShouldContinue(String query, String caption, Boolean supportsToAllOptions, Boolean& yesToAll, Boolean& noToAll)
at System.Management.Automation.MshCommandRuntime.ShouldContinue(String query, String caption)
at System.Management.Automation.Cmdlet.ShouldContinue(String query, String caption)
at SMEPSSnapin.SMEBackup.ProcessRecord()

Stack Trace:
at System.Management.Automation.Internal.PipelineProcessor.SynchronousExecuteEnumerate(Object input, Hashtable errorResults, Boolean enumerate)
at System.Management.Automation.PipelineNode.Execute(Array input, Pipe outputPipe, ArrayList& resultList, ExecutionContext context)
at System.Management.Automation.StatementListNode.ExecuteStatement(ParseTreeNode statement, Array input, Pipe outputPipe, ArrayList& resultList, ExecutionContext context)

Has anyone seen this before?
Hi, you can force the scrub to continue in your off hours until a full scrub has completed:

aggr scrub start <aggr_name>

Also, how are your aggregates set up? i.e. what is your current system, what drives do you have in which aggregate, and how many raid groups do you currently have per aggregate? What does your storage contain, virtual machines, iSCSI/FCP LUNs, CIFS shares, etc.? Lastly, have you sent any perfstats over to NetApp for analysis?
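For reference, a rough sketch of the related scrub commands (assuming 7-Mode; the aggregate name is a placeholder): check progress first, then resume or suspend as needed.

aggr scrub status -v
aggr scrub resume <aggr_name>
aggr scrub suspend <aggr_name>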
Hi Johny, I usually use SnapMirror to copy the root vol from one aggregate to another. Once the SnapMirror transfer is finished, I quiesce it, break it, remove the baseline snapshots, and mark the new vol as root, which then marks the new aggregate as root. Then either reboot the controller or issue a cf takeover and cf giveback.
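As a rough sketch of that sequence in 7-Mode (volume names and size are placeholders; the new volume has to be created on the new aggregate and restricted before the initialize):

vol create new_root <new_aggr> <size>
vol restrict new_root
snapmirror initialize -S <filer>:vol0 <filer>:new_root
snapmirror quiesce new_root
snapmirror break new_root
snap list new_root          (then snap delete the SnapMirror baseline snapshots)
vol options new_root root
reboot                      (or cf takeover / cf giveback from the partner)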
Because with RAID-DP data is written in stripes, and the stripes are written across the disks in a raid group. Writing a stripe across a 24-disk raid group takes more time than writing the stripe across a 16-disk raid group.
It depends what you want your storage for. To maximize storage space you would go with a large raid group size, but you will definitely suffer on performance. If you are after performance you would go with a smaller raid group size, with the idea of creating multiple raid groups, though this comes at a price in usable space due to the double parity. A raid group size of 16, for example, makes the system work less hard because it only needs to pass through 16 disks, as opposed to a raid group size of 24. We use a raid group size of 16 now because we run a 3240 with 600GB SAS, and we have seen improvement in latency and throughput since dropping from RG 24.
Yeah, it's definitely a massive thread, but there is very valuable information from everyone. There are optimal raid group sizes depending on your system and the disks you run in it; I have them for the 3240 and 2040 if you need them, covering SAS and SATA disks. If the raid group size is too big, the system has to work harder to read through all the disks. For example, we had a raid group size of 24 and now have a raid group size of 16 (actually, at this moment we have an aggregate with 2 x 16-disk raid groups; once I migrate the last machines off a previous aggregate we'll end up with 3 x 16-disk raid groups). We then run a reallocate job on each volume to spread the data across all raid groups. I don't think NetApp recommends having an aggregate with different raid group sizes; even though it can be done, it's not optimal.
Actually, I just checked all the releases where this bug is fixed, and they also include ONTAP 8.1.1RC1, which is what we are running. I hope the upgrade fixes your issue though. Do you know if you have optimal raid group sizes and all your disks/VMs aligned correctly?
Hi Craig, thanks for posting the update. This is definitely a long process to resolve this issue. We are currently stepping through each VM and making sure it is aligned, and shortening our raid group sizes by creating new aggregates, migrating VMs and LUNs, destroying the existing aggregates, and re-adding the disks to an aggregate with correctly sized raid groups. We are currently down to 2 aggregates: one fully optimized, with the correct raid group size and every VM aligned correctly; the other not at the optimal raid group size and with a few VMs still to be aligned. It has taken about 3 weeks to get to this level, and we still experience high CPU. We are getting quite good throughput and lower latencies now, but CPU still jumps high, especially at night when all the backups kick in. We are also still not running any dedupe jobs. When are you scheduled to upgrade to 8.1.1GA? I'm eager to know if the GA release will fix your issue.
Hi Craig, thanks for your update. Have you applied this workaround, and did it solve the issue? We are running 8.1.1RC1, and the burt says the issue has been resolved in 8.1.1RC1 and 8.1.1GA.