ONTAP Discussions

Need some help/suggestions setting up ONTAP S3

Stormont

We have licensing for ONTAP S3. We have two clusters (both running 9.8P2, each consisting of 2 x AFF8040s and 2 x FAS8200s). I have been told that we can tier cold data from the AFF nodes to the FAS nodes and was directed to https://docs.netapp.com/ontap-9/topic/com.netapp.doc.pow-s3-cg/S3%20configuration.pdf and https://www.netapp.com/pdf.html?item=/media/17219-tr4814pdf.pdf

 

We only have one SVM on each cluster. The configuration guide in the first link above says that a new SVM isn't needed, but also that "you might choose to create a new SVM if one of the following is true: You are enabling S3 on a cluster for the first time". Given that we are enabling S3 for the first time, should we really create a new SVM for it?

 

For the features that we want to use, is a dedicated LIF needed? If not, and we decide to use the same SVM, do we just need to do the following (rough CLI equivalent sketched after the list)?

1. Create a certificate signing request and have it signed by a CA
2. Go to Storage VMs, click on our existing SVM, then click the gear icon in the S3 section
3. Give the S3 server a name
4. Paste in the certificate and private key
5. Select "Use the same network interface that is already configured for the SMB/CIFS and NFS protocols" and then pick an interface
6. Create a bucket and tier to it?
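
Here is roughly what I think the CLI equivalent of those System Manager steps would be, assuming we reuse the existing SVM (the SVM, certificate, server, and bucket names below are just placeholders); please correct me if any of this is off:

# 1. Generate a CSR, have it signed by our CA, then install the signed cert on the SVM
security certificate generate-csr -common-name s3.example.com -size 2048 -hash-function sha256
security certificate install -vserver svm_nas -type server

# 2-4. Enable the S3 server on the existing SVM, referencing the installed certificate
vserver object-store-server create -vserver svm_nas -object-store-server s3.example.com -certificate-name s3_cert -is-https-enabled true -is-http-enabled false

# 5. Reuse an existing data LIF; its service policy may need the data-s3-server service added
network interface service-policy add-service -vserver svm_nas -policy default-data-files -service data-s3-server

# 6. Create a bucket to tier into
vserver object-store-server bucket create -vserver svm_nas -bucket tier-bucket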

 

Another thing we want to do is tier snapshots to AWS so that we have a copy far away from us. Do we then need an Amazon S3 or Cloud Tiering Service license? Given that goal, plus the potential of using other S3 features locally, would we be hindering ourselves by adding S3 to an existing interface as mentioned above (i.e., should we really just create a dedicated interface for S3)?

3 REPLIES

scottgelb

I would create a new S3 SVM with a dedicated data LIF for S3. You can share an existing SVM; the S3 bucket (a FlexGroup volume) is created separately from the NAS volumes, and there is no multi-protocol access between NAS and S3. For the AFF connection, the intercluster LIFs are used to connect to the target S3 LIF. I have to check the docs, but up to 300 TB of ONTAP S3 is supported as a FabricPool target, and you don't need any license to use the feature. If you need more than 300 TB, StorageGRID is recommended instead, also with no FabricPool license required since it is a NetApp destination.
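
A minimal CLI sketch of that layout in case it helps (SVM, LIF, certificate, and bucket names are placeholders; adjust aggregates, ports, and addresses for your environment):

# Dedicated S3 SVM, hosted on the FAS aggregates
vserver create -vserver svm_s3 -subtype default -rootvolume svm_s3_root -aggregate fas_aggr1 -rootvolume-security-style unix

# Service policy and data LIF for S3 traffic
network interface service-policy create -vserver svm_s3 -policy s3-data -services data-core,data-s3-server
network interface create -vserver svm_s3 -lif s3_data1 -service-policy s3-data -home-node fas-01 -home-port e0c -address 10.0.0.50 -netmask 255.255.255.0

# S3 server, a user for access/secret keys, and a bucket for FabricPool
vserver object-store-server create -vserver svm_s3 -object-store-server s3.example.com -certificate-name s3_cert -is-https-enabled true
vserver object-store-server user create -vserver svm_s3 -user fabricpool_user
vserver object-store-server bucket create -vserver svm_s3 -bucket fabricpool-bucket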

 

Similarly, if you also want to tier to AWS, the intercluster LIFs are used. To use AWS as a FabricPool destination, you need a FabricPool license, which is licensed per TB. You could also set up the FabricPool mirror feature to use both the ONTAP S3 and AWS S3 buckets as targets.
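
Roughly, from the AFF side it would look something like this (store, bucket, and aggregate names are placeholders, and your regional endpoint will differ):

# AWS S3 as a FabricPool target (this is where the per-TB FabricPool license applies)
storage aggregate object-store config create -object-store-name aws_store -provider-type AWS_S3 -server s3.amazonaws.com -container-name my-fabricpool-bucket -access-key <key> -secret-password <secret> -ssl-enabled true -port 443

# Either attach it to an aggregate as the primary cloud tier...
storage aggregate object-store attach -aggregate aff_aggr1 -object-store-name aws_store

# ...or, if the aggregate already tiers to ONTAP S3, add it as a FabricPool mirror
storage aggregate object-store mirror -aggregate aff_aggr1 -name aws_store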

 

There is also the Cloud Backup Service (CBS), which replicates a volume (including its snapshots) to an S3 destination with a catalog. Cloud Manager (CM) fully integrates the on-prem and cloud components. This gives you a true backup to the cloud, whereas AWS S3 used for FabricPool (or Cloud Tiering) is not a backup of the data.

 

So many great options to check out. If I had seen this post a year ago, I would not recognize the new NetApp here. Please reply back with your business requirements or any questions and the community will weigh in on the different options. For example, you could use your ONTAP S3 for tiering from SSD to HDD, then use CBS to back up to AWS S3 (a sketch of the tiering side is below).
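
For the tiering piece, the AFF side of that would be along these lines (again, names are placeholders):

# Register the on-prem ONTAP S3 bucket as the object store (no FabricPool license needed for a NetApp destination)
storage aggregate object-store config create -object-store-name ontap_s3_store -provider-type ONTAP_S3 -server s3.example.com -container-name fabricpool-bucket -access-key <key> -secret-password <secret> -ssl-enabled true

# Attach it to the SSD aggregate and tier cold snapshot blocks from a volume
storage aggregate object-store attach -aggregate aff_aggr1 -object-store-name ontap_s3_store
volume modify -vserver svm_nas -volume vol1 -tiering-policy snapshot-only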

scottgelb

I posted two blog posts that might be useful. The first covers S3 SVM setup on one cluster with a dedicated LIF; the second covers FabricPool from another cluster to that S3 bucket. The posts cover certificate creation during the setup, which can be confusing, so work through the S3 blog and then the FabricPool blog to see all the steps (a quick sketch of the certificate-trust piece follows the links below). I also show the object-store mirror feature with two ONTAP S3 buckets, but the second bucket could be AWS as long as you have the licenses on the AFF to connect (per TB). Let me know if you have any questions on the setup.

 

S3 Setup

https://storageexorcist.wordpress.com/2020/11/04/netapp-ontap-9-8-s3-is-ga/

 

FabricPool to the S3 bucket (from a different cluster)

https://storageexorcist.wordpress.com/2020/11/04/netapp-ontap-9-8-fabricpool-tiering-to-ontap-s3/
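
One certificate step that trips people up, and which the posts walk through: the cluster doing the tiering has to trust the certificate of the ONTAP S3 server. Roughly (the admin SVM name here is a placeholder):

# On the tiering (AFF) cluster, install the CA or self-signed certificate of the S3 server
security certificate install -vserver aff_cluster -type server-ca
# (paste the certificate when prompted, before creating the object-store config)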

keitha

One thing you mentioned gave me pause: tiering snapshots to AWS so they are far away from you. Tiering snapshots saves you space, but it does not give you any extra protection. The snapshots in AWS would be useless to you if you lost the primary system.

 

Instead, what you will want to do is set up a Cloud Volumes ONTAP (CVO) instance in AWS, SnapMirror the volume to the CVO instance, and then tier from the CVO instance into S3. That gets you a second copy of the data away from your site, so you are protected if you lose the whole primary site (a rough sketch is below).
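
A minimal sketch of that flow, assuming cluster and SVM peering to the CVO instance are already set up (SVM, volume, and aggregate names are placeholders; the tiering policy on the CVO side can also be driven from Cloud Manager):

# On the CVO instance: create the destination volume as a data-protection (DP) volume
volume create -vserver cvo_svm -volume vol1_dr -aggregate aggr1 -size 10TB -type DP

# Mirror the on-prem volume to CVO, then tier the cold mirrored blocks into S3
snapmirror create -source-path onprem_svm:vol1 -destination-path cvo_svm:vol1_dr -policy MirrorAllSnapshots
snapmirror initialize -destination-path cvo_svm:vol1_dr
volume modify -vserver cvo_svm -volume vol1_dr -tiering-policy all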
