I do not see any showstoppers in the configuration you mentioned above. Keep in mind that SQL Server 2012 AlwaysOn AGs are supported only by SMSQL 6.0.
SMSQL 6.0 introduces the Availability Group Database Backup option. This feature lets you back up AG databases across servers in a single operation, with no need to take
backups on each individual node.
The backup job is assigned to the SQL Server Agent on the driving server, i.e. the host from which the SMSQL wizard launches the backup. Also remember to set the Readable Secondary property of each secondary replica to Yes.
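If you prefer to set Readable Secondary from the command line rather than SSMS, a minimal sketch follows; the AG name, replica name, and instance name are made-up examples, and the actual call is left commented out since it requires sqlcmd and connectivity to your primary:

```shell
#!/bin/sh
# Hypothetical example: enable Readable Secondary (ALLOW_CONNECTIONS = ALL)
# on a replica so SMSQL can back it up. AG/replica/instance names below are
# assumptions, not taken from the thread.
AG="ProdAG"
REPLICA="SQLNODE2"
PRIMARY_INSTANCE="sqlnode1"

readable_secondary_sql() {  # builds the T-SQL for AG "$1", replica "$2"
    printf "ALTER AVAILABILITY GROUP [%s] MODIFY REPLICA ON N'%s' WITH (SECONDARY_ROLE (ALLOW_CONNECTIONS = ALL));" "$1" "$2"
}

# Run against the primary (commented out; requires sqlcmd and access):
# sqlcmd -S "$PRIMARY_INSTANCE" -Q "$(readable_secondary_sql "$AG" "$REPLICA")"
```

ALLOW_CONNECTIONS = ALL is the T-SQL equivalent of setting Readable Secondary to Yes in the wizard.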
A few points I had discussed in TR-4003:
AlwaysOn introduces the following enhancements:
Provides enhanced failover functionality using Windows Server Failover Clustering (WSFC)
Eliminates the requirement for shared storage and removes the single point of failure
Supports multiple availability databases, compared to database mirroring
Features the AlwaysOn listener service, which accepts all requests to availability group databases
Can span multiple subnets
Gives the option to offload backup and maintenance operations to a read-only replica database
Offers support for both synchronous and asynchronous nodes
Using this version of SnapManager 6.0, users can:
Create Snapshot-based full backups and transaction log backups of SQL Server 2012 user databases
Create stream-based full backups and transaction log backups of SQL Server 2012 system databases (except tempdb)
Perform restores of SQL Server 2012 databases, including restoring a Snapshot full backup and applying transaction log backups
Perform up-to-the-minute, point-in-time, and restore-to-mark restores
Create federated full backups and federated log backups with SQL Server 2012 databases included in one or more federated groups, along with databases from instances of other SQL Server versions
Create clone databases on SQL Server 2012 instances
Automatically refresh the clone database on a SQL Server 2012 instance
Automatically delete the cloned database on a SQL Server 2012 instance when it is no longer needed
I would also recommend checking the IMT for the latest supported configurations of vSphere, VSC, SDW, and other software for SQL 2012 AGs on VMware.
There may be some changes with rolling Snapshots. If you are using SQL 2012 AGs, t-log dumps are no longer placed in the SnapInfo directory. For any of the replicas to be recovered up to the minute, every SQL instance hosting a replica needs access to the logs. To accommodate this, SMSQL 6.0 requires a CIFS share to be made available as a t-log repository. This CIFS share does not necessarily need to be on a NetApp volume, and it is therefore no longer automatically covered by Snapshot protection in SMSQL. It is recommended, however, that you specify a location that can be backed up with Snapshots and SnapMirrored to a specified destination using scripts and the run-after command associated with a backup job. If you are not using AGs, then I believe you can continue using the SnapInfo directory as normal, and the rolling snapshot process should not change.
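A run-after script along those lines could look like the following sketch. It is a dry run that only echoes the 7-Mode Data ONTAP commands it would issue over SSH; the filer names and volume names are assumptions, and a SnapMirror update is issued from the destination controller:

```shell
#!/bin/sh
# Hypothetical SMSQL "run after" script (dry-run sketch; all names below
# are examples, not from the thread). Prints the commands it would run so
# they can be reviewed before wiring this to a real backup job.

FILER="filer01"              # controller hosting the t-log CIFS volume
DR_FILER="drfiler01"         # SnapMirror destination controller
TLOG_VOL="tlog_repo"         # volume backing the t-log repository share
MIRROR_VOL="tlog_repo_mirror"

snap_name() {  # timestamped name so Snapshot copies sort chronologically
    echo "smsql_tlog_$1"
}

SNAP=$(snap_name "$(date +%Y%m%d_%H%M%S)")

# Snapshot the t-log volume once the log dump has finished...
echo ssh "$FILER" snap create "$TLOG_VOL" "$SNAP"
# ...then update the mirror (snapmirror update runs on the destination).
echo ssh "$DR_FILER" snapmirror update -S "$FILER:$TLOG_VOL" "$MIRROR_VOL"
```

Because SMSQL invokes this only after the dump completes, the mirror never transfers a half-written dump file.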
So basically, in the scenario described in my original post (with a local AlwaysOn AG), I can't use SnapDrive rolling snapshots, as the t-logs are placed in a CIFS share and not in a LUN. The volume with the CIFS share can still be snapshotted and mirrored every 15 minutes by setting up a schedule on the filer rather than via SnapDrive, but deleting old snapshots would require a script.
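For what it's worth, the deletion side of that script can be quite small. A minimal sketch, assuming filer-scheduled rolling snapshots with a common name prefix (filer and volume names are made up; the actual filer calls are left commented out):

```shell
#!/bin/sh
# Hypothetical retention helper for rolling snapshots of the t-log CIFS
# volume (names are examples). Given snapshot names on stdin, newest
# first, print the ones beyond the retention count so a wrapper can pass
# them to "snap delete".

KEEP=16   # e.g. 4 hours' worth of 15-minute rolling snapshots

prune_list() {  # $1 = number to keep; snapshot names on stdin, newest first
    tail -n +"$(( $1 + 1 ))"
}

# Example wiring (commented out; requires SSH access to the filer):
# ssh filer01 snap list -n tlog_repo \
#   | prune_list "$KEEP" \
#   | while read -r s; do ssh filer01 snap delete tlog_repo "$s"; done
```

Scheduled via Windows Task Scheduler or cron, this keeps the filer-side schedule from accumulating snapshots indefinitely.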
Is it described properly anywhere? Any additional comments / thoughts?
Assuming you use a volume on NetApp and have CIFS licensed, we recommend using the "run after" option associated with SMSQL jobs to update the SnapMirror relationship of the volume containing the transaction logs. Setting a schedule on the filer will maintain the mirror, but the transfer may kick off while a dump is in progress and potentially leave an incomplete dump file on the destination. Using the run-after command to call a script that updates the SnapMirror ensures that the log dump is complete when the mirror kicks off, and that you have the most current log at your DR site. As for retention, you should not need to take snapshots of the volume as you would with a LUN; if the t-logs are there when the SnapMirror update kicks off, they will be mirrored. T-log retention can be managed through SMSQL log retention policies rather than via snapshot retention.
Alternatively, there is nothing stopping you from creating a LUN on your SQL Server and sharing that drive letter out as the log repository CIFS share. The downside is that if that server goes down, you lose the share and access to the logs until you mount the LUN somewhere else.
I have not seen any real documentation around t-log replication best practices with the new layout in SMSQL 6.0. I will pass the word up asking for the documents to be updated.
we recommend using the "run after" option associated with SMSQL jobs to update the snapmirror relationship of the volume containing the transaction logs.
Hmm, yes, understood. But I was asking about reducing RPO by replicating recovery points *more* frequently than SMSQL jobs run - as with SnapDrive rolling snapshots, which run every 15 minutes while the SMSQL backup runs only every 4 hours (the example described in TR-4003).
I am assuming you schedule your transaction log dumps using SMSQL. That is the job that would have the script associated with it (as opposed to your full database backup), and it would run as frequently as your t-log dumps happen. Just make sure that your t-log SnapMirror finishes prior to the next log dump beginning. I would suggest dedicating a volume to just the logs so that the SnapMirror does not need to send unnecessary data.
Thanks, Justin. The possibility of SnapMirroring the t-log CIFS repository across sites was my main question, but now I'm interested in what that looks like from a DR perspective. Outside of breaking the SnapMirror and making sure the CIFS share permissions are accurate, does one simply re-run the SMSQL Configuration wizard at the DR site and only change the t-log repository location before a restore, or are there more steps to consider? Let's assume a secondary replica has been at the DR site the whole time. So data and SnapInfo LUNs are already there, plus there's the SnapMirrored t-log CIFS share. Then what?