This is a filed problem under BURT 1029694. Use the direct path to access the docs in the downloaded files: /Downloads/NetApp_Manageability_SDK_56_Reference_Manual_for/doc/ontapi/ontapi_1.110/Vserver/index.html or /Downloads/NetApp_Manageability_SDK_56_Reference_Manual_for/doc/ontapi/ontapi_1.110/Cluster-Mode/index.html
Hi Ruben, I have no real OpenStack know-how, but the call looks like a 7-Mode call to me, and you are saying you run cDOT. Anything you can change there?

<netapp xmlns="http://www.netapp.com/filer/admin" version="1.31" vfiler="vsiscsi"><lun-map><path>/vol/openstack_vol01/volume-babd5700-2ebb-48c6-ae27-667ba167b209</path><initiator-group>openstack-573d9b2b-ae32-457f-8227-119707531793</initiator-group></lun-map></netapp>

The response that we get is: NetApp API failed. Reason - 13003: Insufficient privileges.

Regards Christoph
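For comparison, here is a minimal sketch of what the equivalent call could look like when issued the clustered Data ONTAP way via the Data ONTAP PowerShell Toolkit instead of a 7-Mode-style vfiler-tunneled request. This is only an illustration under my assumptions: the SVM management address, the credentials, and the use of Connect-NcController/Add-NcLunMap are assumed here, not taken from the OpenStack driver.

# Sketch only (assumptions marked): connect to the SVM directly instead of
# tunneling via the 7-Mode "vfiler" attribute, then create the LUN mapping.
Import-Module DataONTAP
$cred = Get-Credential                                                    # assumed: an account with sufficient SVM privileges
Connect-NcController -Name vsiscsi-mgmt.example.com -Credential $cred    # hypothetical SVM management LIF
Add-NcLunMap -Path '/vol/openstack_vol01/volume-babd5700-2ebb-48c6-ae27-667ba167b209' `
             -InitiatorGroup 'openstack-573d9b2b-ae32-457f-8227-119707531793'

If the mapping succeeds this way but not through the driver, that would point to the privileges of the API user rather than the path or igroup.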
Perfect, Jeremy. I just changed the script to call Get-WfaCredentials instead of Get-NaCredentials and the data source was working again. For others with the same problem: go to the Designer, click Data Source Types, right-click your data source, and choose Edit. Then adjust the script in question (a rough sketch of the change is below).
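For reference, a minimal sketch of the change in the acquisition script. The surrounding Connect-NaController call, the variable name, and the exact Get-WfaCredentials parameters are assumptions; only the cmdlet swap itself comes from this thread.

# Old (no longer available on WFA 2.1):
#   $creds = Get-NaCredentials $arrayIp
# New (assumed usage; keep whatever parameters your script already passes):
$creds = Get-WfaCredentials -Host $arrayIp
Connect-NaController -Name $arrayIp -Credential $creds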
Hi all, I upgraded my WFA to 2.1RC and restored the database. I had a PowerShell script to cache cDOT data that was working with WFA 2.0. Now I keep getting the following error on data acquisition:

Error getting data source credentials. Please see the log file for more details

When I check the log file I find:

23:19:49,453 ERROR [com.netapp.wfa.command.execution.instance.impl.ExecutionInstanceDaoImpl] (http-executor-threads - 15) cDOT James:Error getting data source credentials: System.Management.Automation.CommandNotFoundException: The term 'Get-NaCredentials' is not recognized as the name of a cmdlet, function, script file, or operable program. Check the spelling of the name, or if a path was included, verify that the path is correct and try again.

PowerShell 2.0 is running on the WFA server:

PS H:\> $PSVersionTable.PSVersion
Major  Minor  Build  Revision
-----  -----  -----  --------
2      0      -1     -1

I also removed and re-added the source credentials of the controller in question. Anybody?
To give some feedback on my investigations: due to the variable block size in NBU, the savings mentioned are about what you can expect. A smarter combination would be Syncsort/FAS, CommVault/E-Series, or SnapProtect/FAS.
Hi all, I have a customer who is using a NetApp FAS as a disk target for Symantec NetBackup. He is saving about 20% by dedupe and about 40% by compression, both on the FAS. The use case is to avoid the expensive dedupe license in NBU. Are there any tunings to maximize the dedupe space savings, like adjusting the fragment size (it is already divisible by 4KB) on the NetBackup storage pool? Thanks Christoph
Hi Jeremy, This is great information! Do the new versions have extended vFiler functionality? I mean, with the drop of proper vFiler support in System Manager, this could be the solution. For example, I changed your workflow so that vFilers and shares are populated in drop-down fields. I can provide the SQL snippets if necessary. Thanks a lot Christoph
Hi Yaron, My problem is not upgrading WFA. My problem is importing "old" workflows (like the Pirate Pack for NAS) into a brand-new installation of WFA 2.0. However, with the workaround of using a WFA 1.1.1 installation, I can import and re-export the workflows in order to finally import them into 2.0. Thanks Christoph
OK, here is what I did. I imported the workflows (available in v1 only) into WFA 1.1.1 and exported the required workflows from there again. That version was fine to import into WFA 2.0. Thanks
Hi all, is it possible to do that? I'm getting the following error:

Incompatible DAR - no upgrade path were found. DAR version '1.0.0.3.10', installed version '2.0.0.391.2'

Thanks Christoph
Hi all, Please open support cases for this. I believe this is BURT 498695 and it has not a single support case attached to it. If there are no support cases, it is obviously not a problem and does not need fixing; I believe otherwise. Thanks Christoph
I'm fully aware that the dedupe savings will be gone after the QSM. We will dedupe it on the destination too. I'm not able to follow your logic in point 2. I believe the dedupe limit is based on the fingerprint database, which cannot hold changes beyond a certain amount, and not on the internal volume structure. I believe this question needs to be answered by product ops.
Thanks for your prompt replies. I think I have to rephrase my question. Here is what my customer is trying to do: 1. QSM a 14TB volume to a 64-bit aggregate. 2. Dedupe (and potentially also compress) the new volume. Now the question: what happens if the volume needs to grow beyond 16TB (on DOT 8.0.x)? Can we disable dedupe/compression and just expand it to, let's say, 20TB? (A rough sketch of those steps is below.)
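If the volume does have to be grown with efficiency switched off, the 7-Mode steps I have in mind look roughly like the sketch below (Data ONTAP PowerShell Toolkit). The controller name, volume name, and exact cmdlet usage are assumptions for illustration only, and whether the existing savings survive the grow is exactly the open question here.

# Sketch under assumptions: turn dedupe off on the transferred volume, then grow it.
Connect-NaController -Name filer01 -Credential (Get-Credential)   # hypothetical 7-Mode controller
Disable-NaSis -Path /vol/big_vol01          # stops new dedupe; on its own it should not undo existing sharing
Set-NaVolSize -Name big_vol01 -NewSize 20t  # the step in question: grow past 16TB on DOT 8.0.x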
For DOT 8.1 the maximum volume size for dedupe/compression equals the maximum volume size, so the problem goes away. My question is about DOT 8.0.2.x only.
Hi all, If I have a volume as big as 16TB which is deduped, can I transfer that volume to a 64-bit aggregate (with dedupe enabled) and grow that volume later beyond 16TB (with dedupe disabled) without undoing the deduplication? Thanks Christoph
Hi all, Is there an "official" WFA logo? We had a joint presentation with Cisco that included WFA. The presenter from Cisco requested a WFA logo. I did not find one and created the following. Hope you like it. Christoph
Hi Bernhard, you are certainly not the only MetroCluster customer; there are thousands of such clusters running worldwide. You don't mention the controller model you are using. From my experience a single-mode VIF gives you more availability and more stability, as no special switch configuration is necessary (and to be honest, that is where we see a lot of issues). On top of that, depending on your hosts and network, LACP might not offer the load balancing you expect. Also, in most cases one 10Gbit pipe is enough in terms of performance. Controller failover on a front-end network failure works quite reliably. However, I usually don't recommend it to customers: most of them don't want to fail over a storage cluster "just" because of a broken network link. This should be handled by redundancy on the network side (e.g. a single-mode VIF). Regards Christoph
Some news here: somebody filed a bug at opensolaris.org that got closed with the reason "not a bug". However, it seems obvious that ZFS won't perform well on any SAN storage. Check: http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6978878 In addition, NetApp Engineering is working on the topic; the outcome will be posted on the following page. However, it seems that NetApp cannot do anything about it. Check: http://now.netapp.com/NOW/cgi-bin/bol?Type=Detail&Display=407376 Here is my personal view: ZFS is built for physical disks, so don't use it on top of SAN storage. Besides that, our Snap* products don't support it.
Guys, this should be possible with this: http://now.netapp.com/Knowledgebase/solutionarea.asp?id=kb9540 I haven't had the time to try this in the lab yet. This could be automated using the PowerShell plug-in. Christoph