Hi Adai

Yes, dedupe is enabled, and the aggregate was a 64-bit aggregate from the beginning. Which value do I have to set to disable the hidden option pmAutomaticSecondaryVolMaxSizeMb (disable/off/0)? I want to test with a temporary dataset without dedupe to see what happens in that case.

Regards
Thomas
Hi Matt

First, thanks for your response. You are right, the value of -svip is placeholder text; I don't want to publish the customer's names or IPs in a public forum :-). I see this behavior at two cluster sites with VSC 4.1 and sv-smvi 3.0.3.

Script:
"C:\Program Files\NetApp\Virtual Storage Console\smvi\svsmvi\sv-smvi.exe" -svip 10.5.9.89 -svuser vscadmin -svcryptpasswd 53616c7465645f5fcf897643fa6fe4a76acaa1c188176b6b06cc174b90e620a3 -verbose -report -reportdir C:\Temp\SV-SMVI_reports\sv-smvi_STV0001

Path:
C:\Program Files\NetApp\Virtual Storage Console\smvi\server\scripts

We have to use -svip. We tried the -dnslist parameter, but that doesn't work, because we use IP addresses and not DNS names for mounting the ESX datastores. To say it again: this problem has no functional impact. It is only a cosmetic issue, but the Windows admins don't like warnings in the event log :-)

Regards
Thomas
Hi Adai

I have now tested dfpm reslimit set, but it doesn't work with this setting (the dataset still can't conform):

U:\>dfpm reslimit set 1063 maxDedupeSizeInGB=81920
Modified resource limit (1063).
U:\>dfpm reslimit get 1063
Id 1063
ONTAP Version 8.1.1
Product Model FAS3160
Availability None
Maximum number of FlexVols per storage controller
Maximum CPU utilization threshold of storage controller
Maximum Disk utilization threshold of an aggregate
Maximum Deduplication size of a storage system model and ONTAP version (in GB) 81920

With the hidden option it works:

U:\>dfm options set pmAutomaticSecondaryVolMaxSizeMb=41943040
Changed auto-provisioned secondary volume max size, in megabytes to 41943040.

I don't think the dedupe limit is the problem, because the nonconformant event says: "select a size of at most 44.9TB for the new Volume". That is the size of the aggregate before the disks were added, not the dedupe limit of 50TB (defined with dfpm reslimit set). I think I will leave the hidden option set, and the next step is upgrading to 5.1.

Thanks for your help and regards
Thomas
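As a quick sanity check on the value used above (a sketch; the assumption that the option is interpreted in binary megabytes is mine):

```python
# Sanity check for pmAutomaticSecondaryVolMaxSizeMb=41943040, assuming the
# value is interpreted in binary megabytes (MiB).
value_mb = 41943040
value_tib = value_mb / 1024 / 1024  # MiB -> GiB -> TiB
print(value_tib)  # 40.0, i.e. a 40 TiB cap on auto-provisioned secondary volumes
```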
Which do you think is the more proper workaround: setting the dedupe limit to 80TB, setting pmAutomaticSecondaryVolMaxSizeMb to 40TB, or doesn't it matter? IMHO, my preferred workaround is the option pmAutomaticSecondaryVolMaxSizeMb.

Regards
Thomas
Hi Adai

We will upgrade to 5.1 soon. We have to test ha-config-check.exe before the upgrade, because it is no longer integrated in DFM and Config Advisor is not yet supported with MetroCluster 😞

Yes, these aggregates are 64-bit. Following the workaround:

1) Identify the resource limit Id for the ONTAP version and platform:
# dfpm reslimit get
2) From the above output, identify the reslimit Id and update the 64-bit dedupe resource limit for that particular platform and ONTAP version:
# dfpm reslimit set <reslimit> maxDedupeSizeInGB=<value>

But I think I have already done this:

U:\>dfpm reslimit get 1063
Id 1063
ONTAP Version 8.1.1
Product Model FAS3160
Availability None
Maximum number of FlexVols per storage controller
Maximum CPU utilization threshold of storage controller
Maximum Disk utilization threshold of an aggregate
Maximum Deduplication size of a storage system model and ONTAP version (in GB) 51200

This corresponds with our NearStore system:

NetApp Release 8.1.1P1 7-Mode: Tue Aug 21 16:54:32 PDT 2012
Model Name: FAS3160

And the dedupe limit from the documentation is set correctly: FAS3160, ONTAP 8.1.x, 50TiB.

I don't understand what to do now. Do I have to increase the dedupe limit in dfpm reslimit to 74.5TB (the aggregate size)? I have found another hidden option called pmAutomaticSecondaryVolMaxSizeMb. Can I set this temporarily to e.g. 40TB until we upgrade to 5.1?

Greetings and thanks
Thomas
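A cross-check that the reslimit value already matches the documented limit (simple arithmetic; the binary-gigabyte interpretation of the reslimit value is my assumption):

```python
# The reslimit reports 51200 (in GB); the documentation states 50TiB for a
# FAS3160 on ONTAP 8.1.x. Assuming the reslimit value is binary gigabytes:
reslimit_gb = 51200
print(reslimit_gb / 1024)  # 50.0 -> matches the documented 50 TiB limit
```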
Hi All

The following problem:

- Two aggregates (aggr3/aggr4) were grown by adding disks.
- These two aggregates (aggr3/aggr4) are members of a resource pool.
- Today we put a new qtree into a dataset that has this resource pool attached. While provisioning tries to create a mirror volume, the following "Nonconformant" error occurs.
- When I remove the two grown aggregates (aggr3/aggr4) from the resource pool, provisioning works without problems.

On this NearStore system I have to add a dfpm reslimit manually after an ONTAP upgrade (8.1.1):

U:\>dfpm reslimit get 1063
Id 1063
ONTAP Version 8.1.1
Product Model FAS3160
Availability None
Maximum number of FlexVols per storage controller
Maximum CPU utilization threshold of storage controller
Maximum Disk utilization threshold of an aggregate
Maximum Deduplication size of a storage system model and ONTAP version (in GB) 51200

Can someone explain this behaviour to me?

Thanks a lot and regards
Thomas
Hi All

I have the following "little" problem (a cosmetic, not a functional issue) with VSC 4.1 and sv-smvi 3.0.3.

Environment:
Windows 2008 R2 SP1
VSC 4.1
vCenter Server 5.1
sv-smvi 3.0.3

- The VSC backup job runs without errors.
- The sv-smvi script runs without errors:

LOG REPORT FOR SV-SMVI
-----------------------------------------------------
[21:01:06] SV-SMVI Version: 3.0.3
[21:01:06] Log Filename: C:\Temp\Reports\SV-SMVI_20121126_210106.log
[21:01:06] Start Time: Mon Nov 26 21:01:06 2012
[21:01:06] Using -svschednames, only taking SnapVault snapshots on specific schedule names ...
[21:01:06] Saving SnapVault schedule name sv_smvi ...
[21:01:06] Use of -svip overrides IP address(es) in SMVI post-backup output. Continuing.
[21:01:06] Found backup: HOST = 'IP Adress', VOLUME = 'DataVol', SNAPSHOT = 'smvi__VMware Daily_recent' ...
[21:01:06] Use of -svip: Changing HOST = 'IP Adress' to HOST = 'Override IP Adress' ...
[21:01:06] Preserving SMVI backup with storage controller Override IP Adress, volume DataVol, snapshot named smvi__VMware Daily_recent ...
[21:01:06] Found backup: HOST = 'IP Adress', VOLUME = 'OsVol', SNAPSHOT = 'smvi__VMware Daily_recent' ...
[21:01:06] Use of -svip: Changing HOST = 'IP Adress' to HOST = 'Override IP Adress' ...
[21:01:06] Preserving SMVI backup with storage controller Override IP Adress, volume OsVol, snapshot named smvi__VMware Daily_recent ...
[21:01:06] Initializing connectivity to storage controller ...
[21:01:06] Attempting to ping storage controller Override IP Adress ...
[21:01:06] Ping of storage controller Override IP Adress successful.
[21:01:06] Logging into storage controller Override IP Adress ...
[21:01:06] Setting username and password for storage controller Override IP Adress ...
[21:01:06] Testing login by ONTAP version from storage controller Override IP Adress ...
[21:01:06] ONTAP version: NetApp Release 8.0.2P6 7-Mode: Fri Jan 27 14:48:08 PST 2012
[21:01:06] Storage appliance login successful.
[21:01:06] Looking for snapshot smvi__VMware Daily_recent on controller Override IP Adress ...
[21:01:06] Snapshot smvi__VMware Daily_recent was found in volume OsVol.
[21:01:07] Looking for snapshot smvi__VMware Daily_recent on controller Override IP Adress ...
[21:01:07] Snapshot smvi__VMware Daily_recent was found in volume DataVol.
[21:01:07] Running ZAPI snapvault-primary-relationship-status-list-iter-start on storage controller Override IP Adress ...
[21:01:07] Running ZAPI snapvault-primary-relationship-status-list-iter-next on Override IP Adress ...
[21:01:07] SnapVault relationship found (first) (primary = Filer:/vol/OsVol/qtree_os, secondary = nearstore:/vol/backup/os) ...
[21:01:07] Running ZAPI snapvault-primary-relationship-status-list-iter-end on Override IP Adress ...
[21:01:07] Running ZAPI snapvault-primary-relationship-status-list-iter-start on storage controller Override IP Adress ...
[21:01:07] Running ZAPI snapvault-primary-relationship-status-list-iter-next on Override IP Adress ...
[21:01:07] SnapVault relationship found (first) (primary = Filer:/vol/DataVol/qtree_data, secondary = nearstore:/vol/backup/data) ...
[21:01:07] Running ZAPI snapvault-primary-relationship-status-list-iter-end on Override IP Adress ...
[21:01:07] Initializing connectivity to storage controller ...
[21:01:07] Attempting to ping storage controller nearstore ...
[21:01:07] Ping of storage controller nearstore successful.
[21:01:07] Logging into storage controller nearstore ...
[21:01:07] Setting username and password for storage controller nearstore ...
[21:01:07] Testing login by ONTAP version from storage controller nearstore ...
[21:01:07] ONTAP version: NetApp Release 8.0.2P6 7-Mode: Fri Jan 27 14:46:25 PST 2012
[21:01:07] Storage appliance login successful.
[21:01:07] Running ZAPI snapvault-secondary-initiate-incremental-transfer on storage controller nearstore, snapshot smvi__VMware Daily_recent, secondary path /vol/backup/os ...
[21:01:09] SnapVault incremental transfer started successfully.
[21:01:09] Running ZAPI snapvault-secondary-initiate-incremental-transfer on storage controller nearstore, snapshot smvi__VMware Daily_recent, secondary path /vol/backup/data ...
[21:01:10] SnapVault incremental transfer started successfully.
[21:01:10] Running ZAPI snapvault-secondary-get-relationship-status on storage controller nearstore, path /vol/backup/os ...
[21:01:10] Relationship for path /vol/backup/os is still running ...
[21:01:10] Running ZAPI snapvault-secondary-get-relationship-status on storage controller nearstore, path /vol/backup/data ...
[21:01:10] Relationship for path /vol/backup/data is still running ...
[21:01:10] More relationships need to be updated, sleeping for 30 seconds ...
[21:01:40] Running ZAPI snapvault-secondary-get-relationship-status on storage controller nearstore, path /vol/backup/os ...
[21:01:41] Relationship for path /vol/backup/os is idle, removing from the list to check.
[21:01:41] Running ZAPI snapvault-secondary-get-relationship-status on storage controller nearstore, path /vol/backup/data ...
[21:01:41] Relationship for path /vol/backup/data is idle, removing from the list to check.
[21:01:41] All relationships updated.
[21:01:41] Duplicate SnapVault secondary controller/volume found (nearstore:/vol//vol/backup), removing duplicate from list.
[21:01:41] SnapVault secondary snapshot(s) to be taken on nearstore:/vol/backup/os.
[21:01:41] Creating a SnapVault secondary snapshot for volume /vol/backup using schedule sv_smvi ...
[21:01:41] SnapVault secondary snapshot created successfully.
[21:01:41] A total of 2 SnapVault relationship update(s) and 1 SnapVault snapshot creation(s) successful.
[21:01:41] Command completed successfully.
[21:01:41] End Time: Mon Nov 26 21:01:41 2012
-----------------------------------------------------
Exiting with return code: 0

- But every time sv-smvi runs as part of the VSC backup, I get this error in the Windows event log:

390716018 [backup4 1d4f1e1fbe753ae60afbaf02e7f12cb4] ERROR com.netapp.common.flow.JDBCPersistenceManager - FLOW-10209: Error logging operation message to database: A truncation error was encountered trying to shrink VARCHAR 'Script sv-smvi.cmd completed with output: C:\Program Files&' to length 2048.
java.sql.SQLDataException: A truncation error was encountered trying to shrink VARCHAR 'Script sv-smvi.cmd completed with output:C:\Program Files&' to length 2048.
at org.apache.derby.impl.jdbc.SQLExceptionFactory40.getSQLException(Unknown Source)
at org.apache.derby.impl.jdbc.Util.generateCsSQLException(Unknown Source)
at org.apache.derby.impl.jdbc.TransactionResourceImpl.wrapInSQLException(Unknown Source)
at org.apache.derby.impl.jdbc.TransactionResourceImpl.handleException(Unknown Source)
at org.apache.derby.impl.jdbc.EmbedConnection.handleException(Unknown Source)
at org.apache.derby.impl.jdbc.ConnectionChild.handleException(Unknown Source)
at org.apache.derby.impl.jdbc.EmbedStatement.executeStatement(Unknown Source)
at org.apache.derby.impl.jdbc.EmbedPreparedStatement.executeStatement(Unknown Source)
at org.apache.derby.impl.jdbc.EmbedPreparedStatement.executeUpdate(Unknown Source)
at com.netapp.common.flow.JDBCPersistenceManager.addMessage(JDBCPersistenceManager.java:845)
at com.netapp.common.flow.OperationLogListener.append(OperationLogListener.java:38)
at sun.reflect.GeneratedMethodAccessor27.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
at java.lang.reflect.Method.invoke(Unknown Source)
at com.netapp.common.util.MulticastProxy.invoke(MulticastProxy.java:49)
at $Proxy8.append(Unknown Source)
at com.netapp.common.logging.NALogger.notifyListeners(NALogger.java:98)
at com.netapp.common.logging.NALogger.log(NALogger.java:149)
at com.netapp.common.logging.NALogger.log(NALogger.java:175)
at com.netapp.common.logging.NALogger.info(NALogger.java:228)
at com.netapp.smvi.task.scripting.ScriptExecutionTask.execute(ScriptExecutionTask.java:151)
at com.netapp.common.flow.TaskInstanceTemplate.execute(TaskInstanceTemplate.java:324)
at com.netapp.common.flow.ForLoopTemplate.execute(ForLoopTemplate.java:136)
at com.netapp.common.flow.Operation.executeCurrentStack(Operation.java:133)
at com.netapp.common.flow.Operation.execute(Operation.java:59)
at com.netapp.common.flow.Threadpool$OperationThread.run(Threadpool.java:254)
Caused by: java.sql.SQLException: A truncation error was encountered trying to shrink VARCHAR 'Script sv-smvi.cmd completed with output:C:\Program Files&' to length 2048.
at org.apache.derby.impl.jdbc.SQLExceptionFactory.getSQLException(Unknown Source)
at org.apache.derby.impl.jdbc.SQLExceptionFactory40.wrapArgsForTransportAcrossDRDA(Unknown Source)
... 26 more
Caused by: ERROR 22001: A truncation error was encountered trying to shrink VARCHAR 'Script sv-smvi.cmd completed with output:C:\Program Files&' to length 2048.
at org.apache.derby.iapi.error.StandardException.newException(Unknown Source)
at org.apache.derby.iapi.types.SQLChar.hasNonBlankChars(Unknown Source)
at org.apache.derby.iapi.types.SQLVarchar.normalize(Unknown Source)
at org.apache.derby.iapi.types.SQLVarchar.normalize(Unknown Source)
at org.apache.derby.iapi.types.DataTypeDescriptor.normalize(Unknown Source)
at org.apache.derby.impl.sql.execute.NormalizeResultSet.normalizeColumn(Unknown Source)
at org.apache.derby.impl.sql.execute.NormalizeResultSet.normalizeRow(Unknown Source)
at org.apache.derby.impl.sql.execute.NormalizeResultSet.getNextRowCore(Unknown Source)
at org.apache.derby.impl.sql.execute.DMLWriteResultSet.getNextRowCore(Unknown Source)
at org.apache.derby.impl.sql.execute.InsertResultSet.open(Unknown Source)
at org.apache.derby.impl.sql.GenericPreparedStatement.executeStmt(Unknown Source)
at org.apache.derby.impl.sql.GenericPreparedStatement.execute(Unknown Source)
... 20 more

Does someone have the same issue and/or a solution for this?

TIA
Thomas
Hi Keith

I know about 3.6P1, but we had the following problem after the upgrade to 3.6 (thread link: 23803). We will open a case about that, but at the moment we are migrating all our Oracle DBs to NetApp/Snap Creator, so we don't have time to investigate this problem. But thanks anyway; you will be the first we inform when we are running 3.6P1 :-)

Regards
Thomas
Hi All

Is the following statement correct: the SC 3.5 scheduler runs a maximum of three jobs simultaneously (as seen in the Job Monitor)?

If the statement is correct: can I increase a value somewhere so that more than three jobs run simultaneously?

Regards and thanks
Thomas
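The behavior described above looks like a bounded worker pool: jobs beyond the limit queue until a slot frees up. As a generic illustration of that pattern (not Snap Creator's actual code):

```python
# Generic illustration (not Snap Creator's implementation): a scheduler with
# a bounded worker pool runs at most N jobs simultaneously, queuing the rest.
import threading
import time
from concurrent.futures import ThreadPoolExecutor

MAX_SIMULTANEOUS = 3  # the limit observed in the SC 3.5 Job Monitor

running = 0
peak = 0
lock = threading.Lock()

def job(_):
    global running, peak
    with lock:
        running += 1
        peak = max(peak, running)
    time.sleep(0.05)  # simulate backup work
    with lock:
        running -= 1

with ThreadPoolExecutor(max_workers=MAX_SIMULTANEOUS) as pool:
    list(pool.map(job, range(10)))  # 10 jobs submitted, at most 3 run at once

print(peak)  # never exceeds 3
```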
Hi

I have received the following info by mail:

Dear Test Candidates,

You are currently registered for NetApp Certification Exam 154 at Insight, and I encourage you to take Exam 155 instead for these reasons:

- Exam 154 is two years old, and its content is out of date.
- Enrollment for Exam 154 is set to expire soon, and might not be available for retakes if you do not pass it the first time.
- Exam 155 is the refreshed version of Exam 154, and contains two Data ONTAP 7-Mode 8.1.1 topics: describe Virtual Storage Tiering and Flash Pool, and their performance value (which make up <5% of the exam, so the old study materials are still sufficient).

Regards
Thomas
Thanks for your answers, but we had problems after the upgrade to 3.6 (see the following thread). At the moment we have installed 3.5 again. So I have to open a case for the "SC 3.6 and PM integration" problem, and after that is fixed, upgrade to 3.6P1 to resolve the scheduler issue.

Thanks
Thomas
Hi Valéry

We have the same issue with jobs (some schedules appear active but never launch the job until we modify them). Have you found a solution for this in the meantime?

TIA
Thomas
FYI: we have solved the problem without upgrading to 3.6 :-). The only thing we changed: we started scAgent multithreaded, and the problem went away. I don't know the deeper reason why, but it works now.

Thomas
Hi Andreas

Thanks for your quick response. At the moment we can't upgrade to SC 3.6, because the Protection Manager integration does not work in our environment with 3.6 :-(. I have to wait for 3.6P1 and hope that our Protection Manager problem is solved in the P1 version.

Regards
Thomas
Hi Andreas

We have the same issue on one server (Oracle on Linux with SC). We have SC Agent 3.5.0.1 installed. Have you fixed this problem in your environment? If yes, how (e.g. by upgrading to 3.6)?

TIA
Thomas
Hi Mike

In our environment we configure every partner interface with an IP, so we always know which interface will properly take over the partner interface. I recommend configuring every HA environment balanced, meaning all VLANs/IPs are active on both controllers. Adapted to your environment:

netapp1> ifconfig NFS-972 10.72.12.12 netmask 255.255.254.0 mtusize 1500 -wins partner NFS-972
netapp2> ifconfig NFS-972 10.72.12.13 netmask 255.255.254.0 mtusize 1500 -wins partner NFS-972

Hope this helps
Thomas
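To make such interface settings survive a reboot, the same commands would typically go into /etc/rc on each controller (a sketch based on common 7-Mode /etc/rc practice; the interface name and IPs are taken from the example above):

```
# /etc/rc on netapp1 (sketch)
ifconfig NFS-972 10.72.12.12 netmask 255.255.254.0 mtusize 1500 -wins partner NFS-972

# /etc/rc on netapp2 (sketch)
ifconfig NFS-972 10.72.12.13 netmask 255.255.254.0 mtusize 1500 -wins partner NFS-972
```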
Hi Keith

Should I open a case about this problem? 3.5 has been running fine for a few days now, so I think it is a problem in the 3.6 version.

Regards
Thomas
...And the timeframe differs. For example, the time from "########## Checking Protection Manager dataset snapcreator_AllDBs ##########" until the ZAPI error was 32s in one run and 3m17s in another. I think I will now go back to 3.5 to test whether it runs without problems.
Hi Keith

Thanks for your quick response. We have increased NTAP_TIMEOUT=1800, but it is not working. This ZAPI error occurs at different steps. Sometimes at step "########## Creating Protection Manager Backup Version ##########" (in this case the job in PM doesn't start and SC goes red), and sometimes at step "########## Getting Protection Manager backup progress ##########" (in this case the job in PM runs successfully, but SC still goes red). It could be a timeout, but it is not NTAP_TIMEOUT. The config that no longer works is the biggest in our environment (62 primary volumes).

Regards
Thomas
Hi All

Yesterday we updated Snap Creator from 3.5 to 3.6. Since this update, the connection between SC and PM sometimes fails with this error:

[Fri Aug 31 09:50:39 2012] TRACE: ZAPI RESULT <results status="failed" reason="in Zapi::invoke, cannot connect to socket" errno="13001"></results>
[Fri Aug 31 09:50:39 2012] ZAPI: (code = )

I don't think it is a configuration issue, because under 3.5 the jobs run without problems, and under 3.6 you can see that the communication/backup starts:

########## Getting Protection Manager backup progress ##########
[Fri Aug 31 09:47:37 2012] INFO: Getting Protection Manager backup progress for job-id 47756
[Fri Aug 31 09:47:37 2012] INFO: Protection Manager backup progress get for job-id 47756 completed successfully
[Fri Aug 31 09:47:37 2012] INFO: Protection Manager backup for job-id 47756 is running, Sleeping 1 minute
[Fri Aug 31 09:48:37 2012] INFO: Getting Protection Manager backup progress for job-id 47756
[Fri Aug 31 09:48:38 2012] INFO: Protection Manager backup progress get for job-id 47756 completed successfully
[Fri Aug 31 09:48:38 2012] INFO: Protection Manager backup for job-id 47756 is running, Sleeping 1 minute
[Fri Aug 31 09:49:38 2012] INFO: Getting Protection Manager backup progress for job-id 47756
[Fri Aug 31 09:49:38 2012] INFO: Protection Manager backup progress get for job-id 47756 completed successfully
[Fri Aug 31 09:49:38 2012] INFO: Protection Manager backup for job-id 47756 is running, Sleeping 1 minute
[Fri Aug 31 09:50:38 2012] INFO: Getting Protection Manager backup progress for job-id 47756
[Fri Aug 31 09:50:39 2012] ZAPI: (code = )

Does someone know about this problem or has someone had the same issue with 3.6?

TIA
Thomas
Hi Keith

> Currently using the SC scheduler you can only create a schedule for one config, not multiple configs.

Okay.

> If you have multiple configs running at the same time and both hit the same agent, that isn't good. The agent will queue; it can only do one thing at a time in its default mode. On Unix only, you can start the agent with --start-multithreaded-agent, which allows parallel processing, but again only on Unix.

The Snap Creator agent runs on a Solaris system, so we can run multiple configs at the same time with --start-multithreaded-agent.

> If you have multiple DBs on the same host and need to back them up at the same time, why not just back them up together with the same config? In the config you can set ORACLE_DATABASES=db1:oracle;db2:oracle etc., so you can have multiple DBs in there. You would probably want to back things up together if the DBs share the same volumes. If the DBs don't share the same volumes, a restore will be single-file only (you will never want to recover the volume), so you lose this capability if you go to a shared backup. If the DBs don't share the same Oracle home, you can set ORACLE_HOME_SID, e.g. ORACLE_HOME_DB1=/path/to/orahome/db1 and ORACLE_HOME_DB2=/path/to/orahome/db2.

At this time our Snap Creator environment runs the way you describe above. Our problem in the future is the following: we have an Oracle cluster, and the DBs can be on both nodes of the cluster. If I have one config file with all DBs in it (as we do today) and a DB is moved from one node to the other, I have to edit the config file, otherwise the backup of the moved DB fails. If I contact each DB over its cluster resource address, it doesn't matter on which node the DB runs, and the backup works without editing anything. The other nice effect of contacting every DB individually is that Protection Manager creates a dataset for every DB, and all SnapVault relationships of one DB (redo1/redo2/bin/dbf) end up on one NearStore volume (fully automated).
> In addition, SC offers different schedules, so under NTAP_SNAPSHOT_RETENTIONS you can have daily:4,weekly:5,monthly:6. Then you can run the same job or config at different times with different retentions.

We already do it this way today.

Regards and thanks
Thomas
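The per-database approach discussed above could be sketched as one small config per DB (a hypothetical fragment; ORACLE_DATABASES and NTAP_SNAPSHOT_RETENTIONS are the parameters named in this thread, the database name is illustrative, and the agent would be addressed via the DB's cluster resource address so node moves need no edits):

```
# db1.conf - one Snap Creator config per database (sketch)
ORACLE_DATABASES=db1:oracle
NTAP_SNAPSHOT_RETENTIONS=daily:4,weekly:5,monthly:6
```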