This is not possible with OM directly, but it can be achieved using the NHA (NetApp Host Agent), which reports details such as when files were last accessed. Once this report is generated using the NHA, use dfm and the NMSDK to delete those files from the volume. Regards, adai
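For the deletion step, here is a rough sketch of how the NMSDK could be used; it is an illustration, not the only way to do it. It assumes a 7-Mode controller, the SDK's Python bindings (NaServer/NaElement) on the import path, and a list of stale paths already pulled out of the NHA report. The filer name, credentials, SDK path and file paths are all placeholders, and as far as I remember the ONTAPI call for removing a single file is file-delete-file, so verify against the API documentation shipped with your SDK version first.

import sys
sys.path.append("/opt/netapp-manageability-sdk/lib/python/NetApp")  # assumed SDK install location

from NaServer import NaServer
from NaElement import NaElement

FILER = "filer1.example.com"                    # placeholder controller name
STALE_FILES = ["/vol/vol1/reports/old1.txt"]    # paths taken from the NHA report

s = NaServer(FILER, 1, 9)            # ONTAPI major/minor version
s.set_style("LOGIN")
s.set_admin_user("root", "secret")   # placeholder credentials
s.set_transport_type("HTTPS")

for path in STALE_FILES:
    # file-delete-file removes one file from the volume
    req = NaElement("file-delete-file")
    req.child_add_string("path", path)
    res = s.invoke_elem(req)
    if res.results_status() != "passed":
        print("could not delete %s: %s" % (path, res.results_reason()))

Test it against a scratch volume before letting it loose on production data.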
Having both the SV Primary and SV Secondary licenses on a single node is only supported starting with ONTAP 7.3 or later. ONTAP doesn't allow the source and destination qtrees to be within the same volume when snapvaulting within the same controller. And to add to Shiva's point, PM doesn't support snapvaulting within the same aggregate when snapvaulting within a controller, because that isn't a complete backup solution: if a couple of disks in the aggregate fail, both the source and the backup data are lost. Regards, adai
Adaikkappan Arumugam wrote: Even if you create your SnapMirror destination volume with a guarantee of none, after the snapmirror initialize it will have the same guarantee settings as the source volume.

If you break the mirror in order to start using the SnapMirror destination as a live volume, won't the guarantee become 'volume' if the SnapMirror source volume is also set to 'volume'? No.

Adaikkappan Arumugam wrote: So you can't have different guarantees for the source and destination of a snapmirror.

Yes you can, depending on the version: https://kb.netapp.com/support/index?page=content&id=2011568 Thanks, I learnt something new there.

Adaikkappan Arumugam wrote: Also, aggregate overcommitment does not take the volume guarantee into consideration.

Yes it does; this is why having SnapMirror destination volumes with a guarantee of 'none' within the aggregate clouds the issue. Aggregate overcommitment is the prediction of space required within the aggregate for the volumes it contains when those volumes are guaranteed as 'volume', i.e. fully fat provisioned. I sound pretty confident here, but I'm happy to be corrected. This is a confusing issue and I haven't been able to find any clarity with any of the NetApp tools. My aggregates contain live volumes and SnapMirror destination volumes to cater for a DR scenario. I want to ensure I have the space available within the aggregate when the time arises that I need to use these SnapMirror destination volumes in a DR situation. Other than manually going through the volumes and totting up the fully guaranteed space, I haven't found a NetApp tool that will help me. Cheers

Aggregate overcommitment basically works like this. First example: I have an aggregate of 100G; if I create two volumes of 100G with guarantee none, my aggregate is overcommitted by 100%. Second example: the same 100G aggregate with two 100G volumes, one with guarantee volume and one with guarantee none, is also overcommitted by 100%. Third example: in the same 100G aggregate, two 100G volumes with guarantee volume cannot both be created, as there isn't space in the aggregate; only one volume with the volume guarantee can be created, so there is no question of overcommitment. Have you taken a look at the reports provided by Operations Manager on overcommitment? Regards, adai
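To make the arithmetic in those examples concrete, here is a minimal sketch of the calculation as described above (this is just an illustration, not a NetApp tool; the volume names and sizes are made up): committed space is the sum of all volume sizes in the aggregate, irrespective of each volume's guarantee, and the overcommitment is whatever exceeds the aggregate size.

# Illustration only: overcommitment as described in the examples above.
aggr_size_gb = 100
volumes = [
    {"name": "vol_a", "size_gb": 100, "guarantee": "volume"},  # hypothetical volume
    {"name": "vol_b", "size_gb": 100, "guarantee": "none"},    # hypothetical volume
]

committed_gb = sum(v["size_gb"] for v in volumes)
overcommit_pct = max(0, (committed_gb - aggr_size_gb) * 100.0 / aggr_size_gb)
print("committed %dG of %dG -> overcommitted by %d%%"
      % (committed_gb, aggr_size_gb, overcommit_pct))

Both of the first two examples come out at 100% because the guarantee does not change the committed figure; it only changes how much of that space is reserved up front.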
Hi. By "Protection Manager without Provisioning Manager" do you mean primary provisioning or secondary provisioning? Protection Manager works either way, with or without Provisioning Manager. As Pete said, if you allow Protection Manager to create its own secondary volumes (we call this secondary provisioning), then you don't have the hassle of resizing the destination volumes when the primary volumes are resized, thereby reducing the number of backup failures. When Protection Manager resizes a volume, it takes into account whether the volume is dedupe enabled and, if so, the filer model, the ONTAP version, and the maximum supported dedupe volume size. With all that said, if I read your post correctly, you want to do a whole-volume SnapVault, i.e. the entire source volume into a qtree in the secondary volume, like filer1:/src_vol---SV--->filer2:/dst_vol/src_vol_qt? If so, Protection Manager doesn't support whole-volume SnapVault; we neither create nor discover such relationships. Protection Manager only creates or discovers qtree-to-qtree SnapVault relationships, like this: filer1:/src_vol/qt1---SV--->filer2:/dst_vol/qt1. Regards, adai
Even if you create your SnapMirror destination volume with a guarantee of none, after the snapmirror initialize it will have the same guarantee settings as the source volume. So you can't have different guarantees for the source and destination of a snapmirror. Also, aggregate overcommitment does not take the volume guarantee into consideration. Regards, adai
I think I see what you mean, but then I would have a kind of DR policy. I just want to retain 10 days of my virtual machines locally and then retain a year on my secondary (backup) node. Is there any way to do that using SnapVault?

Have different retention settings on the primary node and the secondary node, for example:

[root@lnx~]# dfpm policy node get 61
Node Id: 1
Node Name: Primary data
Hourly Retention Count: 2
Hourly Retention Duration: 86400
Daily Retention Count: 2
Daily Retention Duration: 604800
Weekly Retention Count: 1
Weekly Retention Duration: 1209600
Monthly Retention Count: 0
Monthly Retention Duration: 0
Backup Script Path:
Backup Script Run As:
Failover Script Path:
Failover Script Run As:
Snapshot Schedule Id: 46
Snapshot Schedule Name: Sunday at midnight with daily and hourly
Warning Lag Enabled: Yes
Warning Lag Threshold: 129600
Error Lag Enabled: Yes
Error Lag Threshold: 172800

Node Id: 2
Node Name: Backup
Hourly Retention Count: 0
Hourly Retention Duration: 0
Daily Retention Count: 2
Daily Retention Duration: 1209600
Weekly Retention Count: 2
Weekly Retention Duration: 4838400
Monthly Retention Count: 1
Monthly Retention Duration: 8467200
[root@ln~]#

If I'm using SnapVault integrated with SMVI I can have consistent snapshots of my virtual machines; if I don't use SMVI, the snapshots taken by PM will not be consistent, because I have to call SMVI to take the snapshots before the replication starts.

You can use the NetApp Manageability SDK to register the snapshots taken by SMVI as backup versions in PM, and also create an application dataset in PM, so that PM will only transfer the registered consistent snapshots to the secondary instead of taking its own snapshots. Regards, adai
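As a side note on reading that output: the Retention Duration fields are expressed in seconds, so the 10-days-local / 1-year-on-secondary scheme you describe maps to values along these lines (a minimal sketch; only the unit conversion is the point, the counts and schedule you pick are up to your policy).

DAY = 86400                      # seconds per day, the unit used by the dfpm retention durations

primary_days = 10                # keep roughly 10 days of backups on the primary node
secondary_days = 365             # keep roughly a year of backups on the backup node

print("primary retention duration:", primary_days * DAY)      # 864000
print("secondary retention duration:", secondary_days * DAY)  # 31536000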
Yes. You will have to remove the shares, run dfm host discover, and then try the migrate. Please note that the share will not be re-created by SSM; you will have to recreate it manually. Please upgrade to 4.0.1. Regards, adai
Hi. We have a feature in DFM 4.0 called Secondary Space Management (SSM), which migrates individual volumes that are part of a dataset. The conditions a volume must meet to be migration capable are the following; SSM does not migrate:
1. The root volume of a filer or vFiler.
2. Volumes with client-facing protocols such as CIFS, NFS, iSCSI, or FCP.
3. Volumes that are parents of FlexClones.
4. Volumes that have unmanaged relationships.
I think only item 2 applies to you. If you remove the client-facing protocols you will be able to migrate using SSM, but they must stay removed for the entire duration of the initial baseline from the old primary to the new primary. After the initial baseline, SSM modifies the relationships so that you don't have to rebaseline the downstream relationships and won't lose any already-registered backups. Regards, adai
So by default ... we display only up to a year in the Web UI; beyond 1 year, use the CLI.
-- Do we purge after a year? Yearly data is kept in the Operations Manager database forever.
-- Is there a way to adjust this? (Is it global or per system?) There are no options to control or adjust it.

Adai ... I think using the CSV option could work. Do I use syntax like: dfm report view ... 2y?

Use the dfm graph CLI:

D:\>dfm graph help
NAME
    graph -- create graphs of data over time
SYNOPSIS
    dfm graph [ <options> ] <graph-name> <name-or-id-to-graph>
DESCRIPTION
    The graph command generates data over time for a particular item in the database. Use 'dfm graph' with no arguments to get the list of graphs.
    The options are
        -s <start-date>
        -e <end-date>
        -D <date-format>
        -F <output-format>
        -h <height>
        -w <width>
    The <date-format> is a time format string as defined by the strftime library routine. The default date format is the one appropriate for your locale.
    The <start-date> is the number of seconds in the past that the graph should start; the <end-date> is the number of seconds in the future that the graph should end. Use a negative value for <end-date> if the graph should stop in the past.
    The <height> and <width> are the height and width of the graph image in pixels. These options are applicable to image formats only, i.e. when <output-format> is specified as png or gif.
    Use a suffix after the <graph-name> to select a different range of valid dates; use suffix
        -1d (the default) for data going back 24-48 hours
        -1w for data going back 1-2 weeks
        -1m for data going back 1-2 months
        -3m for data going back 3-6 months
        -1y for data going back indefinitely
    The values above are ranges because the amount of data still available depends on whether the current time is near the beginning or the end of a day, week, month, or quarter. For example, to see the last 60 days of CPU usage for a particular storage system, use the "-3m" suffix, because "-1m" may not have all the data you request. The command is
        $ dfm graph -s 5184000 -e 0 cpu-3m system1
    The <output-format> is the type of format in which the output will be generated. Supported output formats are text, html, csv, xls, png and gif. The output for html, png and gif is in binary format and needs to be redirected to a file with the proper extension to open it. For example, to see the output of the graph 'volume-usage' in gif format, issue a command like the one below
        $ dfm graph -F gif volume-usage > graph.gif
D:\>

Regards, adai
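Putting that help text together for the two-year question above: -s is given in seconds and the -1y suffix is the one that holds the long-term data, so the invocation should look something like $ dfm graph -F csv -s 63072000 -e 0 cpu-1y system1 > cpu_2y.csv, where 63072000 is 2 x 365 x 86400 seconds and system1 is a placeholder for your storage system. I'm extrapolating from the help output above rather than quoting a documented example, so list the available graph names with a plain dfm graph first.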
Yes, you should be able to use multi-select in the NMC UI. Also, after setting this up for one host, you can copy the data collection template and apply it to multiple hosts. Below are the CLIs for doing the same.

D:\>dfm perf data describe help
NAME
    describe -- Describe a counter group
SYNOPSIS
    dfm perf data describe [-v] <counter-group-name> <host-name-or-id>
DESCRIPTION
    Describe the details of the counter group, like file name, instances, storage space details, etc. If the -v flag is specified, verbose information is displayed, including the counters and object instance details for which data is being collected.
D:\>

D:\>dfm perf data modify help
NAME
    modify -- Modify the interval and collection details for a counter group
SYNOPSIS
    dfm perf data modify [ -f ] -G <counter-group-name> [ -o <host-name-or-id> ] [ -s <sample-rate> ] [ -r <retention-period> ]
DESCRIPTION
    Modify the interval and retention details for a counter group. Specify a positive number, optionally followed by a time period suffix to indicate seconds, minutes, hours, days, or weeks. The default time period suffix is seconds. 'host-name-or-id' is mandatory for default counter groups. Use the -f option to force the change of sample rate and retention period when the number of records for the view decreases.
D:\>

Regards, adai
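For example, going by the synopsis above, changing a counter group on one host to a 60-second sample rate with a one-week retention would be something like dfm perf data modify -G <counter-group-name> -o host1 -s 60 -r 1w; host1 and the values are placeholders, and the "1w" suffix form is my reading of the help text, so run dfm perf data describe on the group first to confirm what you currently have.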
Are you asking for the graph data in OM or the perf data? If it's the graph data, use the dfm graph CLI with -F csv to get data beyond 1 year. Regards, adai
To list an OSSV host you must use dfbm primary host list. I will check with my setup and update you. Thanks for your time and patience. Regards, adai
Having the roles you listed will allow you to do backup and restore, but for all hosts in your DFM. If you want to restrict it to a specific group, create a role with the same capabilities that Global Backup and Global Restore have, scoped to the group you want. Regards, adai
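If it helps, the DFM RBAC CLI can do this; from memory (so treat the exact operation names as placeholders to be checked against dfm role help on your server), it is along the lines of dfm role create backup_operators followed by dfm role add backup_operators <backup-capability> <your-group> for each capability, and then assigning that role to the user.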
Hi, I will let you know once I have tried it out and figured out the problem in house. BTW, can you get the output of the below command? dfm detail 75. I ask because I see this error: Failed to run ping monitor on host opsmgr.ontapsim.com (75). Regards, adai
AFAIK, when the entire host is added, PM protects it completely, i.e. all drives. It should ideally honor the exclusion list specified in OSSV, though I am not sure why it doesn't. What I gave was a way to exclude from the PM side, irrespective of the OSSV exclusion list. Regards, adai
AFAIK the exclusion list is applicable only if you use OSSV itself to create the SnapVault relationships. When you use PM, it doesn't honor that list. Please use the below CLI to exclude a primary directory from being backed up: dfbm primary dir ignore <dirname/id>. Regards, adai
Hi, are they doing anything with the Host Agent, or planning to? Can you also get the installcheck output from the OSSV host? Yes, upgrades are seamless and no data is lost in the process. For added safety you can take a backup of the DFM database before the upgrade and keep it; in the event of an unexpected failure you can restore this database and come back online. Regards, adai
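For the database backup itself, the dfm backup CLI is the usual way; from memory it is along the lines of dfm backup create to take the backup before the upgrade and dfm backup restore to roll back to it afterwards, but check dfm backup help on your server for the exact syntax in your version.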