Did you know that…
You can easily test the SVM-DR destination disaster recovery without breaking the SnapMirror relationship?
Note: In an existing SnapMirror relationship, these steps will help you test the disaster recovery solution without breaking the SnapMirror relationship.
In an existing SnapMirror relationship, identify the volumes to be tested. destination::> snapmirror show -expand
Create a default subtype vserver on the destination cluster. destination::> vserver create -vserver <clone_vserver> -subtype default Note: Create LIFs and export policies to enable NFS traffic.
Create a clone on the destination cluster. Select the parent volume and parent Snapshot copy from the SnapMirror destination vserver as the parent vserver. destination::> vol clone create -vserver <clone_vserver> -flexclone <flex_clone_volume> -type RW -parent-vserver <SVM-DR Destination vserver> -parent-volume <parent_volume> -junction-active true -foreground true -parent-snapshot <parent_snapshot>
Mount the FlexClone volumes on the junction path. destination::> vol mount -vserver <clone_vserver> -volume <flex_clone_volume> -junction-path /<flex_clone_volume_mountpath>
Mount through NFS on the desired test client. FlexClone volumes should be visible on the client.
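Putting the steps above together, the whole test sequence looks roughly like this (the SVM, volume, and snapshot names are placeholder examples, not values from your environment):

```
destination::> snapmirror show -expand
destination::> vserver create -vserver clone_svm -subtype default
destination::> vol clone create -vserver clone_svm -flexclone vol1_clone -type RW -parent-vserver svm_dr_dest -parent-volume vol1 -parent-snapshot daily.2020-01-01_0010 -junction-active true -foreground true
destination::> vol mount -vserver clone_svm -volume vol1_clone -junction-path /vol1_clone
```

Because the FlexClone volumes are read-write copies backed by a destination Snapshot copy, test clients can write to them freely, and removing the clones and the clone vserver after testing leaves the SnapMirror relationship untouched.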
For more information about ONTAP, see the ONTAP 9 Documentation Center.
Did you know you can…?
Easily convert a synchronous SnapMirror relationship to asynchronous
Quiesce the synchronous SnapMirror relationship: destination::> snapmirror quiesce -destination-path vs1:vol1
Delete the synchronous SnapMirror relationship: destination::> snapmirror delete -destination-path vs1:vol1
Release the synchronous SnapMirror relationship on the source side with the option to retain the Snapshot copies: source::> snapmirror release -destination-path vs1:vol1 -relationship-info-only true Note: The default release operation (without setting relationship-info-only to true) deletes the Snapshot copies created by this relationship. This process does not allow a resync operation for a new relationship created for these volumes.
Create the asynchronous SnapMirror relationship by specifying the required policy: destination::> snapmirror create -source-path vs0:vol1 -destination-path vs1:vol1 -policy MirrorAllSnapshots
Resync the asynchronous SnapMirror relationship: destination::> snapmirror resync -destination-path vs1:vol1
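Run end to end, the conversion from synchronous to asynchronous looks like this, using the example paths above (note which commands run at the source prompt and which at the destination):

```
destination::> snapmirror quiesce -destination-path vs1:vol1
destination::> snapmirror delete -destination-path vs1:vol1
source::> snapmirror release -destination-path vs1:vol1 -relationship-info-only true
destination::> snapmirror create -source-path vs0:vol1 -destination-path vs1:vol1 -policy MirrorAllSnapshots
destination::> snapmirror resync -destination-path vs1:vol1
```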
For more information about ONTAP, see the ONTAP 9 Documentation Center.
Did you know you can…?
Easily convert a SnapMirror relationship to zero recovery point objective (RPO)
Verify the network round-trip time (RTT) between the two nodes is less than 10ms.
Add a SnapMirror synchronous license to the source node.
Delete the asynchronous SnapMirror relationship: destination::> snapmirror delete -destination-path vs1:vol1
Release the asynchronous SnapMirror relationship on the source side with the option to retain the Snapshot copies: source::> snapmirror release -destination-path vs1:vol1 -relationship-info-only true Note: The default release operation (without setting relationship-info-only to true) deletes the Snapshot copies created by this relationship. This process does not allow a resync operation for a new relationship created for these volumes.
Create the synchronous SnapMirror relationship by specifying a policy of type sync-mirror or strict-sync-mirror: destination::> snapmirror create -source-path vs0:vol1 -destination-path vs1:vol1 -policy Sync Note: To create a strict synchronous relationship, use the StrictSync policy instead.
Resync the synchronous SnapMirror relationship: destination::> snapmirror resync -destination-path vs1:vol1
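As with the asynchronous conversion above, the full sequence using the example paths is:

```
destination::> snapmirror delete -destination-path vs1:vol1
source::> snapmirror release -destination-path vs1:vol1 -relationship-info-only true
destination::> snapmirror create -source-path vs0:vol1 -destination-path vs1:vol1 -policy Sync
destination::> snapmirror resync -destination-path vs1:vol1
```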
For more information about ONTAP, see the ONTAP 9 Documentation Center.
Did you know you can…?
Calculate your round-trip time (RTT) for SnapMirror Synchronous
Run the cluster peer ping command, where the originating-node is the node that hosts the source volume of the relationship and the destination-node is the node that hosts the destination volume of the relationship.
cluster::> cluster peer ping -originating-node sti8080-475 -destination-node sti8080-477
Node: sti8080-475 Destination Cluster: C2_sti8080-477_cluster
Destination Node IP Address Count TTL RTT(ms) Status
---------------- ----------- ----- ---- ------- -------------------------
sti8080-477 172.26.145.10 1 64 0.452 interface_reachable
sti8080-477 172.26.145.12 1 64 0.352 interface_reachable
For intracluster relationships, you can run the network ping command in a similar way. cluster::> network ping -node sti8080-477 -destination sti8080-475 -show-detail
For more information about ONTAP, see the ONTAP 9 Documentation Center.
Did you know...
You can use MetroCluster switchback functionality after a MetroCluster switchover operation?
NetApp MetroCluster provides synchronous replication for NetApp ONTAP customers and consists of an ONTAP cluster at each customer site. When a site goes down, the switchover functionality is used to move the customer workload to the remote site.
To restore a workload to the disaster-stricken site after a MetroCluster switchover operation, use the MetroCluster switchback functionality:
1. Verify that the disaster-stricken cluster meets the following requirements:
a. Power and connectivity have been restored to switches, intersite links, and disk shelves.
b. Nodes are rebooted (if necessary).
c. Aggregates are resynchronized through the healing functionality: metrocluster heal -phase aggregates, followed by metrocluster heal -phase root-aggregates.
2. Run the metrocluster switchback command.
3. Run the metrocluster operation show command to view the status of the switchback operation.
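A minimal sketch of the command sequence, run from the surviving cluster (the cluster name is a placeholder):

```
cluster_A::> metrocluster heal -phase aggregates
cluster_A::> metrocluster heal -phase root-aggregates
cluster_A::> metrocluster switchback
cluster_A::> metrocluster operation show
```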
For more information about MetroCluster, see the MetroCluster Management and Disaster Recovery Guide.
For more information about ONTAP, see the ONTAP 9 Documentation Center.
Did you know...
You can use MetroCluster switchover functionality for synchronous replication?
NetApp MetroCluster provides synchronous replication for NetApp ONTAP customers and consists of an ONTAP cluster at each customer site. When a site goes down, the switchover functionality is used to move the customer workload to the remote site.
Planned outage:
If you are planning an outage due to disaster recovery testing or a planned power outage, run the metrocluster switchover command while both sites are up and functional. Run the command from the site that will survive; the remote site is shut down in an orderly fashion.
Unplanned outage:
When the remote cluster is already down, force a switchover to start switchover processing:
Make sure the remote site is down. Note: This step might include halting nodes.
Run the metrocluster switchover -forced-on-disaster true command. When prompted to continue with the switchover, enter yes.
For either type of switchover, run the metrocluster operation show command to view the status of the switchover operation.
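In CLI terms, the two cases described above look like this, run from the site that will survive (the cluster name is a placeholder; the first command is for a planned outage, the second for an unplanned one):

```
cluster_A::> metrocluster switchover

cluster_A::> metrocluster switchover -forced-on-disaster true

cluster_A::> metrocluster operation show
```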
For more information about MetroCluster, see the MetroCluster Management and Disaster Recovery Guide.
For more information about ONTAP, see the ONTAP 9 Documentation Center.
Did you know that...
There is a simple, consolidated ONTAP command to display a system health summary and alerts, if any?
For example:
cluster_1::> system health subsystem show
Subsystem Health
----------------- ------------------
SAS-connect ok
Environment ok
Memory ok
Service-Processor ok
Switch-Health ok
CIFS-NDO ok
Motherboard ok
IO ok
MetroCluster ok
MetroCluster_Node ok
FHM-Switch ok
FHM-Bridge ok
SAS-connect_Cluster ok
13 entries were displayed.
cluster_1::> system health alert show
This table is currently empty.
cluster_1::> system health config show
Node   Monitor        Subsystem                                                  Health
------ -------------- ---------------------------------------------------------- ------
node1  node-connect   SAS-connect, CIFS-NDO, MetroCluster_Node                   ok
node1  system-connect SAS-connect_Cluster, MetroCluster, FHM-Switch, FHM-Bridge  ok
node1  system         -                                                          ok
node1  controller     Environment, Memory, Service-Processor, Motherboard, IO    ok
node1  chassis        Environment                                                ok
node1  cluster-switch Switch-Health                                              ok
node2  node-connect   SAS-connect, CIFS-NDO, MetroCluster_Node                   ok
node2  controller     Environment, Memory, Service-Processor, Motherboard, IO    ok
8 entries were displayed.
For more information, visit the ONTAP 9 Documentation Center.
Did you know that… You can start the object store profiler through the ONTAP CLI to test latency and throughput performance?
NetApp recommends validating the latency and throughput of your specific network environment to determine the impact it has on FabricPool performance.
Starting in ONTAP 9.4, an object store profiler is available through the ONTAP CLI. You can test latency and throughput performance of object stores before you attach them to FabricPool aggregates.
Note: The external capacity tier must be added to ONTAP before it can be used with the object store profiler.
Start the object store profiler (advanced privilege level required). storage aggregate object-store profiler start -object-store-name <name> -node <name>
View the results. storage aggregate object-store profiler show
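For example, the profiler run looks like this (the object store and node names are placeholders):

```
cluster1::> set -privilege advanced
cluster1::*> storage aggregate object-store profiler start -object-store-name my_s3_store -node node1
cluster1::*> storage aggregate object-store profiler show
```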
Note: Do not expect external capacity tiers to provide performance similar to that of the performance tier (which typically delivers gigabytes per second of throughput).
For more information about FabricPool, see TR-4598: FabricPool Best Practices.
For more information about ONTAP, see the ONTAP 9 Documentation Center.
Did you know…
You can use inactive data reporting on non-FabricPool aggregates?
First available in ONTAP 9.4, inactive data reporting (IDR) is an excellent tool for determining the amount of existing inactive (cold) data that can be tiered from a high-performance SSD aggregate to low-cost object storage.
IDR uses a 31-day cooling period to determine which data is considered inactive.
IDR is displayed on the Storage Tiers page in OnCommand System Manager.
IDR is enabled by default on FabricPool aggregates, but more importantly, especially for those considering taking advantage of FabricPool for the first time, you can enable IDR on non-FabricPool aggregates.
Note: IDR cannot be enabled on aggregates where FabricPool cannot be enabled (for example, root, HDD aggregates, NetApp MetroCluster, and so on).
From the ONTAP CLI, complete the following steps:
To enable IDR on a non-FabricPool aggregate, run the following command: storage aggregate modify -aggregate <name> -is-inactive-data-reporting-enabled true
As an alternative to viewing the amount of inactive data on an aggregate in OnCommand System Manager, you can view the amount of inactive data by running the following command: storage aggregate show-space -fields performance-tier-inactive-user-data, performance-tier-inactive-user-data-percent
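Together, an example run might look like this (the aggregate name is a placeholder):

```
cluster1::> storage aggregate modify -aggregate ssd_aggr1 -is-inactive-data-reporting-enabled true
cluster1::> storage aggregate show-space -fields performance-tier-inactive-user-data,performance-tier-inactive-user-data-percent
```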
For more information about FabricPool, see TR-4598: FabricPool Best Practices.
For more information about ONTAP, see the ONTAP 9 Documentation Center.
Did you know…
You can create a FabricPool aggregate by using OnCommand System Manager?
FabricPool works by associating an object store (such as Amazon S3, Microsoft Azure Blob Storage, or StorageGRID Webscale) with an aggregate in ONTAP, creating a composite aggregate: a FabricPool aggregate.
Note: FabricPool requires a capacity-based license when attaching third-party object storage providers (such as Amazon S3) as capacity tiers for AFF and FAS hybrid flash systems. A FabricPool license is not required when using StorageGRID Webscale.
To create a FabricPool aggregate, complete this two-part process:
Part 1: Add the object store to ONTAP.
ONTAP must be able to communicate with the object store before the object store can be attached to an aggregate.
Before you add the object store to ONTAP, you need to identify the following information:
Server name (FQDN; for example, amazonaws.com)
Access key ID
Secret key
Container name (bucket name)
To add the object store to ONTAP, complete the following steps:
Launch OnCommand System Manager.
Click Storage.
Click Aggregates & Disks.
Click External Capacity Tiers.
Select an object store provider and click Add.
Complete the text fields as required for your object store provider.
Click Save and Attach Aggregates.
Part 2: Attach the object store to an aggregate.
You can complete this task by using OnCommand System Manager.
More than one object store can be connected to a cluster, but only one type of object store can be attached to each aggregate. For example, one aggregate can use StorageGRID Webscale, and another aggregate can use Amazon S3, but one aggregate cannot be attached to both.
Note: Attaching an external capacity tier to an aggregate is a permanent action. After being attached, an external capacity tier cannot be removed.
To attach the object store to an aggregate, complete the following steps:
Launch OnCommand System Manager.
Click Applications and Tiers.
Click Storage Tiers.
Click an aggregate.
Click Actions and select Attach External Capacity Tier.
Select an external capacity tier.
View and update the tiering policies for the volumes on the aggregate (optional). By default, volume tiering policies are set as Snapshot-Only.
Click Save.
For more information about FabricPool, see TR-4598: FabricPool Best Practices.
For more information about ONTAP, see the ONTAP 9 Documentation Center.
Did you know that…
You can set FabricPool volume tiering policies by using OnCommand System Manager?
After volume creation, you can change the FabricPool volume tiering policy to one of four policies by completing the following steps using OnCommand System Manager:
Launch OnCommand System Manager.
Select a volume.
Click Actions and select Change Tiering Policy.
Select the tiering policy you want to apply to the volume. By default, volume tiering policies are set to Snapshot-Only.
Click Save.
For more information about FabricPool, see TR-4598: FabricPool Best Practices.
For more information about ONTAP, visit the ONTAP 9 Documentation Center.
Did you know that…
You can create a nonshared quality-of-service (QoS) policy?
Customers who want to individually control workloads (volumes, LUNs, or files) with QoS previously had to create a QoS policy per workload. This results in a proliferation of QoS policies, even though just a few policies, such as gold, silver, and bronze, would meet the customer's needs. With the release of NetApp ONTAP 9.4, a nonshared policy can be created in which the policy's throughput limit applies to each assigned workload individually.
For example, a test application requires three volumes with a QoS ceiling of 1,000 IOPS per volume. Prior to ONTAP 9.4, three QoS policies would be required to achieve this. Starting with ONTAP 9.4, you can create a single nonshared QoS policy to use with all three volumes assigned to the policy.
To create a nonshared QoS policy, complete the following steps in the ONTAP CLI:
1. Create a nonshared QoS policy with a ceiling of 1,000 IOPS.
::> qos policy-group create -policy-group bronze -max-throughput 1000IOPS -is-shared false
2. Assign each volume to the QoS policy.
::> volume modify -volume test_vol1 -qos-policy-group bronze
[Job 40] Job succeeded: volume modify succeeded
::> volume modify -volume test_vol2 -qos-policy-group bronze
[Job 41] Job succeeded: volume modify succeeded
::> volume modify -volume test_vol3 -qos-policy-group bronze
[Job 42] Job succeeded: volume modify succeeded
3. Verify that each volume is limited to a maximum of 1,000 IOPS.
::> qos statistics workload performance show
Workload IOPS Throughput
------------ ---- ----------
test_vol1 1000 4000KB/s
test_vol2 1000 4000KB/s
test_vol3 1000 4000KB/s
For more information about ONTAP, visit the ONTAP 9 Documentation Center.
Did you know that…
You can easily assign a LUN or file to an adaptive quality-of-service (QoS) policy?
Complete the following step in the NetApp ONTAP CLI:
During LUN creation, assign an adaptive QoS policy: lun create -volume vol1 -lun lun1 -size 1TB -ostype windows -qos-adaptive-policy-group extreme
For existing LUNs, you can modify or assign an adaptive QoS policy: lun modify -volume vol1 -lun lun1 -qos-adaptive-policy-group performance
For existing files, you can modify or assign an adaptive QoS policy: file modify -vserver vs0 -volume vol1 -file file1 -qos-adaptive-policy-group value
There are three default adaptive QoS policies in ONTAP as well as the ability to create custom policies:
::> qos adaptive-policy-group show
Name        Vserver Wklds Expected IOPS Peak IOPS
----------- ------- ----- ------------- ------------
extreme     cluster 0     6144IOPS/TB   12288IOPS/TB
performance cluster 0     2048IOPS/TB   4096IOPS/TB
value       cluster 0     128IOPS/TB    512IOPS/TB
Note: When a LUN or file is assigned to an adaptive QoS policy, the IOPS allocated is set based on the size and the amount of space used.
For a 1TB LUN assigned to the performance policy, the floor and ceiling start at 2,048 IOPS. As more space is used in the LUN, the ceiling increases automatically to a maximum of 4,096 IOPS.
For a small LUN, a fixed absolute minimum number of IOPS is allocated. For example, if a 20GB LUN is assigned to the performance policy, the floor and ceiling are set to 500 IOPS.
The three default adaptive QoS policies provide the following absolute minimum number of IOPS:
Extreme: 1,000 IOPS
Performance: 500 IOPS
Value: 75 IOPS
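Beyond the three defaults, a custom adaptive policy can be created from the CLI. A sketch with hypothetical names and values (parameter availability can vary by ONTAP version):

```
cluster1::> qos adaptive-policy-group create -policy-group custom_aqos -vserver vs0 -expected-iops 1000IOPS/TB -peak-iops 2000IOPS/TB -absolute-min-iops 250IOPS
```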
For more information about ONTAP, visit the ONTAP 9 Documentation Center.
Did you know that...
You can capture first failure data using NetApp AutoSupport On Demand performance archive?
You can send up to six hours of performance archive data using a single command. It’s that easy! To send performance archives covering more than six hours, you can issue back-to-back commands. You also have the flexibility to send performance archives that are collected before, during, and after the issue under investigation occurred.
Run the following CLI command to send a performance archive as well as a message and corresponding case number. In this example, we collected the archive files from each node in the cluster starting on 12/03/2017 at 8:00 a.m. for a duration of six hours.
cluster1::> system node autosupport invoke-performance-archive -node * -start-date 12/03/2017 8:00:00 -duration 6h -message "Some perf data for 12/03" -case-number 2006123456
Note: For further analysis, contact NetApp support before or after you upload the performance archives.
For more information about using and setting up AutoSupport On Demand, see:
Uploading Performance Archive Files
TR-4444: ONTAP AutoSupport and AutoSupport On Demand
For more information about ONTAP, see the ONTAP 9 Documentation Center.
Did you know that...
You can automate storage provisioning workflows using NetApp OnCommand Workflow Automation (WFA) for MongoDB?
The WFA template for MongoDB provides a guided, automated methodology for provisioning MongoDB storage using the MongoDB vocabulary. The workflow template converts the MongoDB architectural specifications into storage requirements and then provisions that storage.
1. Download and install OnCommand Workflow Automation for Linux from the software downloads section of http://mysupport.netapp.com/.
2. Follow the prompts to download the software and installation guide, and then follow the instructions to install the software and template for MongoDB.
3. Start the workflow to add a MongoDB host:
a. Add a new data center group or choose an existing one from the drop-down menu.
b. Enter the requested information into the GUI, and then click Execute.
4. Start the Storage Controller Details workflow, choose a data center group, enter the requested data, and then click Execute.
5. Start the Provision Storage for MongoDB workflow:
a. Choose the desired MongoDB Scaling Technology (ReplicaSet or Sharding).
b. Enter the requested information into the GUI, and then click Execute.
Repeat this process as needed to provision the desired MongoDB deployment.
For more information about end-to-end storage provisioning for MongoDB, see TR-4674: End-to-End Storage Provisioning for MongoDB.
For more information about ONTAP, visit the ONTAP 9 Documentation Center.
Did you know you can…?
Identify top clients and top files by either IOPS or Throughput.
To identify top clients and files, follow these steps in OnCommand System Manager:
1. Select Dashboard.
2. In Applications and Objects, select Objects.
3. Choose Top Clients or Top Files.
4. Choose by IOPS or by Throughput.
5. A sorted list of the top 20 clients or top 20 files will be displayed.
For top clients, the IP address of each client is displayed along with the IOPS or throughput processed by ONTAP.
For top files, the file path is displayed along with the IOPS or throughput processed by ONTAP.
There is no performance impact when top clients or top files monitoring is enabled in ONTAP.
Top clients and top files monitoring is supported for both CIFS and NFS protocols.
For CLI users, top clients and top files are available through the statistics top client show and statistics top file show commands.
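For example, from the CLI (the sampling parameters shown are illustrative; check your ONTAP version for supported options):

```
cluster1::> statistics top client show -interval 5 -iterations 1 -max 20
cluster1::> statistics top file show -interval 5 -iterations 1 -max 20
```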
For more information, see the ONTAP 9 Documentation Center.
Did you know you can....
Use QoS to prioritize business-critical applications?
This recipe describes how to use the QoS floor feature to guarantee that a business-critical application gets its required service level of 40,000 IOPS, regardless of interference from test and dev applications.
1. In System Manager, select the business-critical volume, then click Actions > Storage QoS.
2. Set the minimum throughput to 40,000 IOPS.
Check the performance dashboard to verify that the service-level guarantee of 40,000 IOPS is met and that latency is reduced. Note that the test and dev workloads might be affected, seeing fewer IOPS and higher latencies.
For more information, visit the ONTAP 9 Documentation Center.
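The same floor can be set from the CLI. A minimal sketch, assuming a volume named biz_vol on an SVM named vs0 (both names are placeholders):

```
cluster1::> qos policy-group create -policy-group biz_critical -vserver vs0 -min-throughput 40000IOPS
cluster1::> volume modify -vserver vs0 -volume biz_vol -qos-policy-group biz_critical
```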
Did you know you can...
Configure NFS Connector with Apache Tez in HDInsight?
This recipe describes how to configure Apache Tez with the NetApp NFS Connector. It is specific to HDInsight clusters in Microsoft Azure, but the steps can be generalized to any Hadoop cluster running Tez.
1. Configure the user “hive” and the group “hadoop” across all workers in your Hadoop cluster: sudo /netappnfs/usergroup.py -u hive -g hadoop -t /netappnfs
2. Copy the NetApp NFS Connector jar files to the fs.defaultFS as configured in Ambari:
hdfs dfs -mkdir /{path to desired location}/aux-jars
hdfs dfs -copyFromLocal /netappnfs/hadoop-nfs-2.7.1.jar /{path to desired location}/aux-jars
hdfs dfs -copyFromLocal /netappnfs/hadoop-nfs-connector-2.0.0.jar /{path to desired location}/aux-jars
3. Add the new auxiliary jar location to the tez-site configuration (reference), either by adding the auxiliary jar path using Ambari or by manually editing tez-site.conf to set the tez.aux.uris property to /{path to desired location}/aux-jars.
4. Restart the Tez, Hive, Oozie, and related services.
Did you know you can...
Configure Oracle ASM with multiple LUNs per disk group?
Oracle ASM allows disk groups to be created that span multiple disks or LUNs. ONTAP LUNs appear as disk paths in the ASM Configuration Assistant. Multiple LUNs can be assigned to ASM disk groups to improve performance when using ONTAP 9.3. This recipe describes how to assign multiple LUNs to an ASM disk group:
1. In the ASM Configuration Assistant, click the Disk Groups tab.
2. On the Disk Groups tab, select Create to launch the Create Disk Group window.
3. In the Create Disk Group window, enter a name for the new disk group in the Disk Group Name field. In our case, we assigned 32 LUNs to the disk group named ORA_DATA.
4. In the Redundancy pane, select the desired Oracle redundancy. In this case, we selected External (None) because extra redundancy was not needed for this database.
5. In the Select Member Disks pane, check the box for each disk path you want to use. (The disk paths displayed are your previously defined LUNs visible from your Oracle host.)
6. Click the Advanced button at the bottom of the Create Disk Group window to display the Disk Group Attributes.
7. In the Disk Group Attributes pane, select 64 as the Allocation Unit Size (MB).
8. Click OK to create the disk group. A window pops up with the message “Disk Group created successfully.” Click OK.
See the related recipe to create an Oracle RAC application on SAN: https://community.netapp.com/t5/Data-ONTAP-Discussions/ONTAP-Recipes-Easily-create-an-Oracle-RAC-Application-on-SAN/m-p/132293
ONTAP Recipes: Did you know you can…?
Easily create a NAS Application Container
To create a NAS application container for use over NFS without compromising application or overall system performance, follow these steps in OnCommand System Manager:
1. Select the SVM.
2. Click Applications & Tiers.
3. Click Applications.
4. Click Add an Application.
5. In the “General Applications” Add NAS Container page, specify the application name, size, and storage service level. The floor and ceiling IOPS values adjust automatically based on the space capacity used by the application. For a 1TB application with “Value” specified, the floor and ceiling start at 128 IOPS. As more space is used by the application, the ceiling increases to a maximum of 512 IOPS.
6. Select NFS as the protocol used to access the application.
7. Set the host IP addresses that will access the application.
After creation, details of the application components are displayed in the System Manager summary.
For more information, visit the ONTAP 9 Documentation Center.
Did you know you can…?
Use FlexGroup QoS ceilings to limit the impact of test workloads
This recipe shows how to use ONTAP QoS ceilings to limit a test workload in both IOPS and throughput in ONTAP 9.3.
1. Create a QoS policy with a ceiling of 500 IOPS and 2MB/s (4K block size): ::> qos policy-group create -policy-group test_policy -max-throughput 500IOPS,2MB
2. Assign the test FlexGroup to the QoS policy: ::> volume modify -volume test_flexgroup -qos-policy-group test_policy
[Job 40] Job succeeded: volume modify succeeded
3. Check OnCommand System Manager to verify that the test FlexGroup is limited by QoS and is not impacting the production workload.
For more information, visit the ONTAP 9 Documentation Center.
ONTAP Recipes: Did you know you can…?
Easily enable SAML authentication for OCSM in ONTAP 9.3
Security Assertion Markup Language (SAML) 2.0 is a widely adopted industry standard that allows any third-party SAML-compliant identity provider (IdP) to perform multifactor authentication (MFA) using mechanisms unique to the IdP of the enterprise’s choosing, and to act as a source of single sign-on (SSO). There are three roles defined in the SAML specification: the principal, the IdP, and the service provider (SP). In the ONTAP 9.3 implementation, a principal is the cluster administrator gaining access to ONTAP through OnCommand System Manager (OCSM) or OnCommand Unified Manager (OCUM). The IdP is third-party IdP software from an organization such as Microsoft Active Directory Federation Services (ADFS) or the open-source Shibboleth IdP. The SP is the SAML capability built into ONTAP that is used by OCSM or the OCUM web application.
To enable SAML authentication for OCSM in ONTAP 9.3:
1. Open System Manager using the cluster management interface (DNS name or IP address): https://cluster-mgmt-LIF
2. Authenticate using administrator credentials, and then click Configuration > Authentication.
3. Select the Enable SAML Authentication checkbox.
4. Configure System Manager to use IdP authentication: enter the URI of the IdP and the DNS name or IP address of the host system. Optional: If required, change the host system certificate to a CA-signed certificate.
5. Click Retrieve Host Metadata to retrieve the host URI and host metadata information.
6. Copy the host URI or host metadata details.
7. Click Save.
8. Click Save and Confirm. Ensure that you have copied the host URI or metadata to the IdP and completed the trust configuration on the IdP server (refer to your IdP documentation). The IdP login window is displayed.
9. Log in to System Manager by using the IdP login window. (You might see a prompt from the IdP stating that you are about to share specific attributes with the ONTAP cluster. You must allow sharing to occur for successful login.)
After the SAML IdP authentication succeeds, the session has a lifetime configured in the IdP. For other service providers (SPs) that use the same IdP, this allows the authentication to exist within the session lifetime period. If OCUM is one of the SPs that uses the same IdP, access to OCUM is allowed without additional authentication. Thus, single sign-on (SSO) is enabled.
To enable SAML authentication for OCUM 7.3:
1. Ensure that you have network connectivity between OCUM, the IdP, and OCUM web clients.
2. Launch the OCUM web GUI.
3. Authenticate using maintenance user credentials.
4. In the upper-right toolbar, click the gear icon and select Authentication in the left Setup menu.
5. If you haven’t enabled remote authentication, you must do so for SAML IdP users to have access to OCUM:
a. Select the Enable Remote Authentication checkbox.
b. Set the authentication service to Active Directory or OpenLDAP (Microsoft Lightweight Directory Services is not supported).
c. Enter the administrator name and password. For AD, specify the Base Distinguished Name; for LDAP, specify the Bind Distinguished Name, Bind Password, and Base Distinguished Name.
d. In the Authentication Servers section, enter the authentication server’s DNS name or IP address.
e. Use Test Authentication to ensure that the remote authentication settings are operational.
f. Navigate to the Settings > Management > Users page and add users of type remote user or remote group with the OnCommand administrator role.
6. Navigate to the Settings > Setup > Authentication > SAML Authentication page.
7. Click View Host Metadata, copy the metadata into a file, and save it. This file will be used to configure OCUM in the IdP.
8. Select the Enable SAML Authentication checkbox, enter the IdP URL, and click Fetch IdP Metadata to populate OCUM with the IdP data.
9. Click Save, and then click Yes in the warning dialog box.
10. Wait 5 minutes for the OCUM services to restart.
11. Configure the IdP (refer to your IdP documentation):
a. Populate the IdP with the OCUM metadata from step 7.
b. Add OCUM as a relying party.
c. Add claim rules. Set Name to urn:oid:0.9.2342.19200300.100.1.1 and Unqualified Name to urn:oid:1.3.6.1.4.1.5923.1.5.1.1.
12. Launch the OCUM web GUI and get redirected to the IdP for authentication.
13. Authenticate using a remote user defined in step 5 above.
As in the OCSM section, after the SAML IdP authentication succeeds, the session has a lifetime configured in the IdP. For other SPs that use the same IdP, this allows the authentication to exist within the session lifetime period. If OCSM is one of the SPs that uses the same IdP, access to OCSM is allowed without additional authentication after a successful OCUM authentication.
For more information, see the ONTAP 9 Documentation Center and the OCUM 7.3 documentation.
ONTAP Recipes: Did you know you can…?
Easily protect your FlexGroup volume in ONTAP 9.3
This recipe shows how to create a SnapMirror (or SnapVault/MirrorVault) relationship for FlexGroup volumes in ONTAP 9.3.
1. Open OnCommand System Manager and check whether the volume is unprotected under "Volumes." The volume we're protecting is Tech_ONTAP (the Tech ONTAP podcast volume). Clicking the + sign shows more details.
2. If you haven't already peered the source and destination, click "Configuration" in the left menu, then "SVM Peers" (for local SnapMirror) or "Cluster Peers" (for intercluster SnapMirror). In this example, we're peering SVMs for a local SnapMirror.
3. Peer the SVM or cluster.
SVM peering: Click "Create" and choose your SVMs, then click "Initiate SVM peering." Within a few seconds, you should see "SVM peering successful." Click "Done."
Cluster peering: Cluster peering is needed if you plan to implement an intercluster SnapMirror. In OnCommand System Manager for ONTAP 9.3, this is done under "Cluster Peers," which also lets you peer SVMs in the same configuration steps.
4. Click "Protection -> Relationships" in the left menu, then click "Create." From here, switch to the CLI: System Manager doesn't currently support creating SnapMirror relationships for FlexGroup volumes. Create the destination volume. It must:
- Be type DP
- Have the same number of member volumes as the source FlexGroup
- Be the same size as or larger than the source FlexGroup
The Tech_ONTAP FlexGroup has 8 member volumes and is 10TB in size. If the destination volume doesn't match the source's member volume count, or is smaller than the source, a "geometry" error is returned. In this example, the FlexGroup spans a single aggregate and uses a multiplier of 8 to create 8 member volumes per aggregate.
5. Decide on a SnapMirror policy. You can apply a variety of policies to the mirror. For a DR SnapMirror, use "MirrorAllSnapshots" (this is the default if none is specified). For SnapVault/MirrorVault, use "MirrorAndVault."
6. Create the SnapMirror relationship.
7. Initialize the SnapMirror. This will take some time, depending on the amount of data to transfer and the speed of your connection. While you can't currently manage a FlexGroup SnapMirror from System Manager, you can view it there.
8. To make the mirror a SnapVault relationship (keeping more Snapshot copies than just the ones on the source), modify the relationship policy to MirrorAndVault.
9. Consider also using a Snapshot policy and label for your volume. Note: Labels cannot be used with the async-mirror policy type. The policy rules support only two combinations of these labels: either just "sm_created," or both "sm_created" and "all_source_snapshots." The label defines the set of Snapshot copies that you want backed up to the version-flexible SnapMirror secondary volume; other Snapshot copies on the primary volume are ignored by the version-flexible SnapMirror relationship. Then, modify the volume to use the new policy.
10. Create a schedule using "job schedule create" and apply it to your SnapMirror relationship. Once a volume is successfully SnapMirrored, System Manager shows it as protected.
If using SnapMirror restore, keep in mind that the entire FlexGroup volume is restored, not just individual files or single member volumes.
For more information, see the ONTAP 9 documentation center.
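Because System Manager can't yet create FlexGroup SnapMirror relationships, the CLI commands in steps 4-7 have to be typed by hand. A minimal sketch of assembling that command sequence (Python; the SVM, volume, and aggregate names are placeholders, not from the recipe):

```python
def flexgroup_mirror_commands(src_svm, dst_svm, volume, aggr, members, size,
                              policy="MirrorAllSnapshots"):
    """Build the ONTAP CLI commands for mirroring a FlexGroup volume.

    Per step 4 above, the destination volume must be type DP, match the
    source's member volume count, and be at least as large as the source.
    """
    dst_vol = f"{volume}_dst"
    return [
        # Destination FlexGroup: same member count via -aggr-list-multiplier
        f"volume create -vserver {dst_svm} -volume {dst_vol} "
        f"-aggr-list {aggr} -aggr-list-multiplier {members} -size {size} -type DP",
        # Relationship with the chosen policy (MirrorAllSnapshots = DR default)
        f"snapmirror create -source-path {src_svm}:{volume} "
        f"-destination-path {dst_svm}:{dst_vol} -policy {policy}",
        # Baseline transfer
        f"snapmirror initialize -destination-path {dst_svm}:{dst_vol}",
    ]

# Example with placeholder names (svm1, svm1_dr, aggr1 are assumptions):
for cmd in flexgroup_mirror_commands("svm1", "svm1_dr", "Tech_ONTAP", "aggr1", 8, "10TB"):
    print(cmd)
```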
ONTAP Recipes: Did you know you can…?
Easily convert a plaintext volume to an encrypted volume in place in ONTAP 9.3
1. If the cluster is configured with an onboard or external key manager and Volume Encryption (VE) is licensed, use the following command to convert an existing plaintext volume to encrypted:
::> volume encryption conversion start -vserver <vserver> -volume <volume>
This starts an encryption scan on the volume and converts all existing data to encrypted form. New incoming data is also written in encrypted form.
2. Check the status of the conversion using:
::> volume encryption conversion status -vserver <vserver> -volume <volume>
For more information, see the ONTAP 9 documentation center.
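The two commands above differ only in the verb, so a small helper can build both, for example when scripting the conversion across many volumes (Python; the vserver and volume names in the example are placeholders):

```python
def encryption_conversion_cmd(action, vserver, volume):
    """Build the ONTAP CLI command to start or check an in-place
    plaintext-to-encrypted volume conversion (requires a configured
    key manager and a VE license, per step 1 above)."""
    if action not in ("start", "status"):
        raise ValueError("action must be 'start' or 'status'")
    return f"volume encryption conversion {action} -vserver {vserver} -volume {volume}"

# Placeholder names for illustration:
print(encryption_conversion_cmd("start", "svm1", "vol1"))
print(encryption_conversion_cmd("status", "svm1", "vol1"))
```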
ONTAP Recipes: Did you know you can…?
Easily create a NAS application container in ONTAP 9.3
To create a NAS application container for use over NFS without compromising application or overall system performance, follow these steps in OnCommand System Manager:
1. Select the SVM.
2. Click Applications & Tiers.
3. Click Applications.
4. Click Add an Application.
5. In the "General Applications" Add NAS Container page, specify the following:
- The application name
- The size
- The storage service level
6. In ONTAP 9.3 with Adaptive QoS, the floor and ceiling IOPS values adjust automatically based on the space capacity used by the application. There are three default Adaptive QoS policies in ONTAP (Extreme, Performance, Value), along with the ability to create custom policies:
::> qos adaptive-policy-group show
Name         Vserver   Wklds   Expected IOPS   Peak IOPS
-----------  --------  ------  --------------  ------------
extreme      cluster   0       6144IOPS/TB     12288IOPS/TB
performance  cluster   0       2048IOPS/TB     4096IOPS/TB
value        cluster   0       128IOPS/TB      512IOPS/TB
For example, for a 1TB application with "Value" specified, the floor and ceiling start at 128 IOPS. As the application uses more space, the ceiling increases to a maximum of 512 IOPS.
7. Select NFS as the protocol used to access the application.
8. Set the host IP addresses that will access the application.
After creation, details of the application components are displayed in the System Manager summary.
For more information, see the ONTAP 9 documentation center.
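The scaling in step 6 can be illustrated with a quick calculation (Python; the per-TB rates come from the policy table above, while the linear used-space scaling is a simplified model of Adaptive QoS behavior, not the exact ONTAP algorithm):

```python
# Per-TB (expected, peak) IOPS rates for the three default Adaptive QoS policies
ADAPTIVE_QOS = {
    "extreme":     (6144, 12288),
    "performance": (2048, 4096),
    "value":       (128, 512),
}

def qos_limits(policy, allocated_tb, used_tb):
    """Return (floor, ceiling) IOPS under a simplified linear model:
    the floor tracks allocated space at the expected rate, and the
    ceiling grows with used space up to the peak rate."""
    expected_per_tb, peak_per_tb = ADAPTIVE_QOS[policy]
    floor = expected_per_tb * allocated_tb
    ceiling = max(floor, peak_per_tb * used_tb)
    return floor, ceiling
```

With the "value" policy and a 1TB application, this reproduces the example above: floor and ceiling both start at 128 IOPS, and the ceiling reaches 512 IOPS once the full 1TB is used.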