Hello,
So I pulled the thread with our account engineer and researched some of this further. In the end, it sounds like some combination of backups is in order. The summary breakdown of protection methodologies (per our engineer) is:
SVM DR (protects config and user data)
This is for DR. It replicates everything contained in the SVM, including user data. The exception is SAN configuration data.
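For reference, SVM DR is just SnapMirror at the SVM level. A rough sketch of the setup (SVM, cluster, and path names below are placeholders, and this assumes the clusters are already peered):

```
# On the destination cluster: create a DR SVM and peer it to the source (placeholder names)
vserver create -vserver svm1_dr -subtype dp-destination
vserver peer create -vserver svm1 -peer-vserver svm1_dr -applications snapmirror -peer-cluster cluster2

# Create and initialize the SVM-level relationship; identity-preserve carries the config over
snapmirror create -source-path svm1: -destination-path svm1_dr: -identity-preserve true
snapmirror initialize -destination-path svm1_dr:
```

Note the trailing colon with no volume name - that's what makes it an SVM-level relationship rather than a volume-level one.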
LS mirrors (protects the GNS; not configuration, not user data (assuming clients only write to junctioned volumes))
This protects against the global namespace (GNS) becoming unavailable.
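For reference, LS mirrors of the SVM root look roughly like this (volume and aggregate names are placeholders; the usual recommendation is one copy per node):

```
# Create an LS mirror destination volume for the SVM root (placeholder names)
volume create -vserver vs1 -volume vs1_root_m1 -aggregate aggr1 -type DP

# Create the LS relationship and initialize the whole LS set at once
snapmirror create -source-path vs1:vs1_root -destination-path vs1:vs1_root_m1 -type LS
snapmirror initialize-ls-set -source-path vs1:vs1_root
```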
System configuration commands (protects config, not user data)
This protects against a situation where you lose the SVM itself. An example would be a deleted SVM.
Another benefit is if you want to recreate or copy a configuration on a new node or cluster.
It could also be used in conjunction with SnapMirror/SnapVault (SM/SV) to bring a backup/mirror online.
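The backup itself is created with the system configuration backup commands. Something like the following (node, backup, and destination names are placeholders) - the important part being that you copy the file off-box so it survives a total cluster loss:

```
# Create a cluster-wide configuration backup (placeholder backup name)
system configuration backup create -node node1 -backup-name cluster1.daily.backup.7z -backup-type cluster

# Copy it off the cluster to an external host
system configuration backup upload -node node1 -backup cluster1.daily.backup.7z -destination ftp://backup-host/configs/
```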
Since this is a stand-alone cluster, SVM DR is out for you unless you want to try to protect against a single-aggregate failure by mirroring within the same cluster.
In the end, if you're looking to get this set up so that it can be "restored from bare metal", you're going to need to perform the system configuration backup and "somehow" get your SVM root volumes backed up. Step one (after a disaster and rebuild of the cluster) would be performing the configuration backup recovery procedure:
https://library.netapp.com/ecmdocs/ECMP1196798/html/GUID-2C1339FE-3848-4CDC-A57D-28FB761F9C50.html
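As I read that doc, the short version of the recovery sequence is roughly the following (run from a recovered/replacement node; backup name is a placeholder, and the exact steps should be taken from the procedure itself):

```
set -privilege advanced

# Restore the node's own configuration from the backup file first
system configuration recovery node restore -backup cluster1.daily.backup.7z

# Then recreate the cluster configuration from that restored node
system configuration recovery cluster recreate -from node
```

Remaining nodes would then be rejoined per the doc.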
Quick blurb on the configuration backup content:
Cluster configuration backup file
These files include an archive of all of the node configuration backup files in the cluster, plus the replicated cluster configuration information (the replicated database, or RDB file). Cluster configuration backup files enable you to restore the configuration of the entire cluster, or of any node in the cluster. The cluster configuration backup schedules create these files automatically and store them on several nodes in the cluster.
What I'm still noodling on is, once you have a "restored" cluster, how do you rebuild the data SVMs from a backed-up SVM root volume? The closest I could find was "Promoting a data-protection mirror copy" (as opposed to an LS mirror):
https://library.netapp.com/ecmdocs/ECMP1636068/html/GUID-920236C3-E1AE-46D3-A93D-5E522A3418C0.html
Ignore the mirror quiesce/break steps since you'd be recovering from another source - so you'd just be issuing the following:
volume make-vsroot -volume vol_dstdp -vserver vs1.example.com
Then you could restore user data from ??? and hopefully be underway.
We've got a similar use case in one environment where we don't have another NetApp cluster to mirror to. If we figure out a solution I'll try to double back with whatever we sleuth out...
Hope that helps,
Chris