Tech ONTAP Blogs

Data Protection for SQL Server Always On FCIs with FSx for NetApp ONTAP and SnapMirror

carine
NetApp

Step-by-step deployment of a SQL Server 2022 Multi-Subnet Cluster on Windows Server 2019 using AWS FSx for NetApp ONTAP

 

Introduction

Configuring a SQL Server multi-subnet failover cluster with four nodes involves two nodes in different Availability Zones of one region acting as the active nodes, and two nodes in a second region acting as the passive (DR) nodes. This configuration provides both high availability and disaster recovery with SQL Server 2022. It helps customers save on licensing costs, and in the event of a failover it provides instance-level protection for your databases: SQL Server Agent jobs, certificates, and server logins, which are stored in the system databases and physically reside on the shared storage, move with the instance.

 

Assumptions

  • Windows Server 2019 EC2 instances are configured and added to the Active Directory domain.
  • FSx for NetApp ONTAP has been deployed in both regions.
  • VPC peering has been established between both regions.
  • Windows failover cluster features have been added to all EC2 instances.
  • Windows MPIO features have been added and configured on all EC2 instances.
  • Quorum will not be used for this demonstration (the best practice is to always have a quorum configured).

 

Network Architecture Design

carine_0-1748614902813.png

 

Work with your storage administrator and network architect to ensure everything is properly configured for high availability and disaster recovery (DR).

 

carine_0-1749049697277.png

 

 

 

FSx for NetApp ONTAP File system

Creating Read/Write (RW) Volumes

 

carine_2-1748615347861.png

 

To create the volumes, these are the properties needed to proceed:

  1. Select the file system and choose "Create Volume".
  2. Select the storage virtual machine from the drop-down menu if multiple SVMs are available (trisqlva).
  3. Provide the following details for the volume:
    • Volume name: data
    • Volume style: FlexVol
    • Volume size: 20 GB
    • Volume type: Read-Write (RW)
    • Junction path: /data
    • Storage efficiency: enabled
    • Volume security style: NTFS
    • Snapshot policy: default

 

carine_3-1748615503645.png

 

In the Storage tiering section, set the following:

  • Capacity pool tiering policy: none
  • Tiering policy cooling period: 31 days
    Click "Create".

 

carine_4-1748615579939.png

 

Repeat the process to create all the other volumes in the Oregon region. Once completed, all volumes should be successfully created.
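For reference, the same RW volume can also be created from the ONTAP CLI. This is only a sketch: it assumes the SVM name used above and the default FSx for ONTAP aggregate name (aggr1); adjust both to your environment.

volume create -vserver trisqlva -volume data -aggregate aggr1 -size 20GB -type RW -junction-path /data -security-style ntfs -snapshot-policy default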

 

carine_5-1748615663087.png

 

 
Creating Data Protection (DP) volumes in the Virginia region

Select the file system to create the volumes.

 

carine_6-1748615812135.png

 

1. Select "Create Volume".
2. Select the storage virtual machine from the drop-down menu if multiple SVMs are available (virgsva).
3. Provide the following details:
o Volume name: data_cp
o Volume style: FlexVol
o Volume size: 20 GB
o Volume type: Data Protection (DP)

 

carine_7-1748615903504.png


In the Storage tiering section, set the following:
o Capacity pool tiering policy: none
Click "Create".

 

carine_8-1748615969469.png

 

Repeat the process to create all the other volumes in the Virginia region. Once completed, all volumes should be successfully created.
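As with the RW volumes, the DP volumes can be created from the ONTAP CLI. A sketch assuming the SVM name above and the default aggregate name (aggr1); note that a DP volume takes no junction path or security style, since it inherits those from its replication source:

volume create -vserver virgsva -volume data_cp -aggregate aggr1 -size 20GB -type DP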

 

carine_9-1748616060907.png

 

Configuring Multipathing and Connecting the Target Ports
To create the LUNs, let's first map the iSCSI endpoints to the various nodes. To proceed, follow these steps:
1. Open the Server Manager.
2. Under the "Tools" menu, select "iSCSI Initiator".

 

carine_10-1748616179387.png

 

3. When opening the iSCSI Initiator for the first time, it will display a warning that iSCSI is not running. Click "Yes" to start the iSCSI service and then reopen the iSCSI Initiator.

 

carine_11-1748616246442.png

 

 

4. The iSCSI Initiator Properties window will open.
5. Click on the "Discovery" tab.
6. Select "Discover Portal".

 

carine_12-1748616317403.png

 

 

7. Enter the iSCSI IP address of the file system.

 

carine_13-1748616350194.png

 

8. Copy the iSCSI endpoint from the file system.

 

carine_14-1748616390510.png

 

 

9. Enter the iSCSI IP address of the file system.

 

carine_15-1748616460652.png

 

 

10. Click on "Advanced".
11. From the "Local adapter" drop-down menu, select "Microsoft iSCSI Initiator".
12. From the "Initiator IP" drop-down menu, select the appropriate IP address.
13. Click "OK".

 

carine_16-1748616539610.png

 

14. Repeat the same process with the second IP address.

 

Follow the same steps on the DR nodes.
1. The iSCSI Initiator Properties window will open.
2. Click on the "Discovery" tab.
3. Select "Discover Portal".
4. Copy the iSCSI endpoint from the file system.
5. Enter the iSCSI IP address of the file system.
6. Click on "Advanced".
7. From the "Local adapter" drop-down menu, select "Microsoft iSCSI Initiator".
8. From the "Initiator IP" drop-down menu, select the appropriate IP address.
9. Click "OK".
10. Repeat the same process with the second IP address.

 

carine_17-1748616775157.png

 

All iSCSI IP addresses are now added.

 

carine_18-1748616850457.png

 

 

1. Click on the "Targets" tab.
2. The initiators should be present. Now, let's establish the connection.

 

carine_19-1748616907406.png

 

3. Select one of the inactive initiators and click on "Connect".

 

carine_20-1748616959903.png

 

4. A window will pop open. Check "Enable multi-path" and then click "Advanced".
5. From the "Local adapter" drop-down menu, select "Microsoft iSCSI Initiator".
6. From the "Initiator IP" drop-down menu, select the appropriate IP address.
7. From the "Target portal IP" drop-down menu, select one of the iSCSI IP addresses
8. Click "OK".

 

carine_21-1748620314346.png

 

9. Repeat the same process with the second iSCSI IP address and click "OK".

 

carine_22-1748620381353.png

 

10. Both iSCSI IP addresses are now connected.

 

carine_23-1748620444733.png

 

11. Repeat the same process on all 4 nodes, connecting the iSCSI initiator.
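Once all nodes are connected, the sessions can also be verified from the ONTAP side. A quick check, assuming the SVM name used in this walkthrough:

vserver iscsi initiator show -vserver trisqlva
vserver iscsi session show -vserver trisqlva

Each Windows node should show one session per iSCSI IP address.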


Creating the LUNs

 

1. LUNs are created using the CLI as the GUI version is not available.
2. Copy the administrator IP from the file system. (TriSubSQL)
3. Log in to any CLI of your choice.
4. Use the following script to create the RW LUN on the active site (Oregon):

lun create -vserver trisqlva -path /vol/data/VoltriData -size 15gb -ostype windows_gpt -space-allocation enabled
lun create -vserver trisqlva -path /vol/log/Voltrilog -size 15gb -ostype windows_gpt -space-allocation enabled
lun create -vserver trisqlva -path /vol/logdirectory/VoltriSNAPLOG -size 15gb -ostype windows_gpt -space-allocation enabled

 

5. Use the lun show command to verify that the LUNs have been created. The LUNs are created but are currently unmapped.
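For example, assuming the SVM name above:

lun show -vserver trisqlva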

 

carine_0-1748622512385.png

 

6. Use the following script to create the LUN igroup and add the initiators to it:


lun igroup create -vserver trisqlva -igroup sqlfcisub -initiator iqn.1991-05.com.microsoft:winfcisqloren01.nimorg.com -protocol iscsi -ostype windows
lun igroup add -vserver trisqlva -igroup sqlfcisub -initiator iqn.1991-05.com.microsoft:winfcisqloren02.nimorg.com

lun igroup add -vserver trisqlva -igroup sqlfcisub -initiator iqn.1991-05.com.microsoft:trisqlfcoren01.nimorg.com
lun igroup add -vserver trisqlva -igroup sqlfcisub -initiator iqn.1991-05.com.microsoft:trisqlfcoren02.nimorg.com

 

7. The igroup is fully configured.

 

carine_1-1748622754249.png

 

8. Map the LUNs using the following script:

lun mapping create -vserver trisqlva -path /vol/data/VoltriData -igroup sqlfcisub -lun-id 1
lun mapping create -vserver trisqlva -path /vol/log/Voltrilog -igroup sqlfcisub -lun-id 2
lun mapping create -vserver trisqlva -path /vol/logdirectory/VoltriSNAPLOG -igroup sqlfcisub -lun-id 3
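To confirm the mappings from the CLI, you can also run (assuming the SVM name above):

lun mapping show -vserver trisqlva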

CLI view of the mapped LUNs.

 

carine_2-1748622849627.png

 

 

Bringing the disks Online
1. Select any active node. (WinFCISQLoreN01)
2. Click on the Windows icon.
3. Select "Disk Management".

 

carine_3-1748622934479.png

 

4. In Disk Management, click on "Action" in the menu.
5. Select "Rescan Disks" to pull in the new disks.

 

carine_4-1748623012040.png

 

6. Right-click on Disk 1.
7. Select "Online" to bring the disk online.

 

carine_0-1748625730447.png

 

 

8. Right-click again on Disk 1.
9. Select "Initialize Disk".

 

carine_1-1748625789603.png

 


10. A pop-up window will open. Select "GPT (GUID Partition Table)" and click "OK".

 

carine_2-1748625868947.png

 

 

11. Click on the empty space of the disk.
12. Select "New Simple Volume".

 

carine_3-1748625930700.png

 

13. The New Simple Volume Wizard will open. Click "Next".

 

carine_4-1748625997852.png

 

14. The volume size is picked up by default. Click "Next".

 

carine_5-1748626060456.png

 

15. The drive letter is picked up by default but can be changed from the drop-down menu. Click "Next".

 

carine_6-1748626141593.png

 

16. Provide the volume name and click “Next”

 

carine_7-1748626200248.png

 

 

17. Click "Finish" to complete the disk setup.

 

carine_8-1748626242374.png

 

18. Repeat the same process for the other two disks.
All disks are completely set up.

 

carine_9-1748626299927.png

 

19. Open File Explorer.
20. Click on "This PC" to validate the presence of the disks.

 

carine_10-1748626369823.png

 

21. Move to node two. (WinFCISQLoreN02)
22. Click on the Windows icon.
23. Select "Disk Management".
24. In Disk Management, click on "Action" in the menu.
25. Choose "Rescan Disks" to detect the disks.
26. The disks should show as offline, since they are shared disks currently owned by the first node.

 

carine_11-1748626467712.png


Creating the DP LUNs and SnapMirror relationship.
The SnapMirror relationship is created using the CLI. Follow these steps:
1. Copy the administrator IP of the DR file system. (subsqlfsx)
2. Log in to any CLI of your choice.
3. Use the following script to create a SnapMirror relationship between the source and the destination volume:

snapmirror policy create -vserver virgsva -policy sqldr -type async-mirror
snapmirror create -source-path trisqlva:data -destination-path virgsva:data_cp -type xdp -policy sqldr -schedule 10min
snapmirror create -source-path trisqlva:log -destination-path virgsva:log_cp -type xdp -policy sqldr -schedule 10min
snapmirror create -source-path trisqlva:logdirectory -destination-path virgsva:logdirectory_cp -type xdp -policy sqldr -schedule 10min

4. The SnapMirror relationship has been created.

 

carine_0-1748873547583.png

 

 

5. Next, initialize the SnapMirror relationship using the following command:

snapmirror initialize -destination-path virgsva:log_cp -source-path trisqlva:log
snapmirror initialize -destination-path virgsva:data_cp -source-path trisqlva:data
snapmirror initialize -destination-path virgsva:logdirectory_cp -source-path trisqlva:logdirectory


6. The SnapMirror relationship is fully initialized.
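The relationship state can be checked from the destination file system at any time. A sketch using the volume names above; a healthy, initialized relationship reports Snapmirrored and Idle:

snapmirror show -destination-path virgsva:data_cp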

 

carine_1-1748873631358.png

 

 

7. Use the following script to create an igroup and add initiators to it:
lun igroup create -vserver virgsva -igroup subfcivir -initiator iqn.1991-05.com.microsoft:winfcisqlvirn01.nimorg.com -protocol iscsi -ostype windows
lun igroup add -vserver virgsva -igroup subfcivir -initiator iqn.1991-05.com.microsoft:winfcisqlvirn02.nimorg.com

 

8. The igroup has been fully configured.

 

carine_2-1748873725851.png

 

Mapping the LUNs to the DR nodes.
9. Use the following script to map the LUNs to the DR nodes:




lun mapping create -vserver virgsva -path /vol/data_cp/VoltriData -igroup subfcivir -lun-id 1
lun mapping create -vserver virgsva -path /vol/log_cp/Voltrilog -igroup subfcivir -lun-id 2
lun mapping create -vserver virgsva -path /vol/logdirectory_cp/VoltriSNAPLOG -igroup subfcivir -lun-id 3

 

10. The LUNs are fully mapped to the DR nodes.

 

carine_3-1748874626749.png

 

11. Move over to any of the DR nodes (WinFCISQLvirN01)
12. Click on the Windows icon.
13. Right-click on the Windows icon.
14. Select "Disk Management".

 

carine_4-1748874689034.png

 

15. In Disk Management, click on "Action" in the menu.
16. Select "Rescan Disks" to detect the disks in the system.
17. The disks should appear, in the offline state.

 

carine_5-1748874753305.png

 

 

18. Right-click on Disk 1.
19. Select "Online" to bring the disk online.

 

carine_6-1748874802041.png

 

20. Repeat the same process to bring all the other disks online.

 

carine_7-1748874857302.png

 

21. Repeat the same process on the second DR node. (WinFCISQLvirN02)
22. The volumes should all be online.
23. Verify that the volumes are present in File Explorer.

 

carine_8-1748874901883.png

 

All volumes are completely set up on all four nodes, both the Active nodes (Oregon) and the DR nodes (Virginia).

 

Validating the Cluster

Cluster validation is a crucial step in ensuring the reliability, performance, and availability of a cluster setup. Cluster validation checks that all nodes in the cluster are configured consistently. This includes network settings, storage configurations, and software versions. Validation ensures that storage is correctly configured and accessible by all nodes in the cluster. This includes verifying that shared storage is properly set up and that there are no issues with disk access or performance.


To validate the cluster, follow these steps:
1. Open the Server Manager.
2. Click on "Tools" in the menu.
3. Select "Failover Cluster Manager".

 

carine_9-1748875060690.png

 

4. On the upper right side or in the middle of the Failover Cluster Manager, select "Validate Configuration".

 

carine_10-1748875114372.png

 

5. The "Validate a Configuration Wizard" will open. Click "Next".

 

carine_11-1748876383586.png

 

6. Browse for all the nodes that will be part of the cluster and add them.

 

carine_12-1748876426472.png

 

7. Once all four nodes are added to the cluster, click "Next".

 

carine_13-1748876465936.png

 

8. Select "Run all tests (recommended)" and click "Next".

 

carine_14-1748876514102.png

 

9. On the confirmation page, it will list all the tests to be run. Click "Next".

 

carine_15-1748876562356.png

 

10. Allow the tests to run to completion.

 

carine_16-1748876619056.png

 

11. The cluster validation tests have been completed.

carine_17-1748876675214.png

 


Creating the Cluster

1. After a successful cluster validation, check the box "Create the cluster now using the validated nodes".
2. Click the "Finish" button.

 

carine_18-1748876743005.png

 

4. The "Create Cluster Wizard" will open. Click "Next".

 

carine_19-1748876802240.png

 

5. Enter the cluster name following your company's naming conventions.

 

carine_20-1748876832480.png

 

6. Uncheck the box "Add all eligible storage to the cluster". Click "Next".

 

carine_21-1748876878191.png

 


Click "Next".

carine_22-1748876921541.png

 

7. The cluster is now being created. Allow it to run.
8. Once the process is complete, click "Finish".

 

carine_23-1748876954287.png

 

 

Configuring the Cluster

Cluster configuration involves adding a static IP, adding storage, and configuring a quorum. To add the cluster IP, follow these steps:
1. Log into your AWS console.
2. Select the node (WinFCISQLoreN01)
3. Click on "Actions"
4. Select "Networking"

 

carine_24-1748877164071.png

 

5. Select "Manage IP addresses"

 

carine_25-1748877223251.png

 

6. Expand eth0 and click "Assign new IP address".
7. Enter the IP addresses following the subnet addressing. We are going to assign two IP addresses:
o One IP for the cluster.
o The second IP for the SQL Server installation.
8. Check the box "Allow secondary private IPv4 addresses to be assigned".
9. Click "Save".

 

carine_26-1748877328032.png

 

10. Click "Confirm".

 

carine_27-1748877368948.png

 

11. On the "Networking" tab, you can see the secondary IP addresses.

 

carine_28-1748877405721.png

 

12. Repeat the same process on all nodes in the cluster.
13. Go back to your Windows Cluster Manager.
14. The cluster will be in a failed state. Right-click on the IP address and select "Properties".

 

carine_29-1748877608921.png

 

carine_30-1748877647928.png

 

15. In the properties window, check the box "Static IP address" and enter the secondary IP address generated on the EC2.

 

carine_31-1748878651800.png

 

16. Click "Apply".
17. Select the "Advanced Policies" tab.
18. Uncheck all the other nodes.
19. Click "Apply" and then "OK".

carine_32-1748878694888.png

 

20. Repeat the same procedure for the other 3 nodes.
21. All IP addresses are added.

 

carine_0-1748878933480.png

 

22. Click on the cluster (ClusSQLWinFCI) to bring the cluster up.

 

carine_1-1748878977871.png

 

The cluster is online.

carine_2-1748879025207.png

 

Adding the Storage Disks

To add the disk to the cluster, follow these steps:
1. Ensure the cluster is online.
2. In the Failover Cluster Manager, click on "Disks".
3. At the upper right, select "Add Disk".

 

carine_3-1748879093019.png

 

4. All the available disks will be listed. Click "OK".

 

carine_4-1748879368373.png

 

5. The disks are available online.

 

carine_5-1748879402350.png

 

6. To rename the disks from "Cluster Disk", follow these steps:
o Ensure the disks are available and online.
o In the Failover Cluster Manager, right-click on the disk you want to rename.
o Select "Properties".
o Enter the new name for the disk.
o Click "Apply" and then "OK".
o Repeat the same procedure for the other disks.

 

carine_6-1748879470169.png

 

7. All disks are properly configured.

 

carine_7-1748879505192.png

 

The cluster is properly configured. The next step is to install SQL Server.


Creating the SQL Server computer object and giving it full permissions

To create a computer object for SQL Server, follow these steps:
1. Open Server Manager.
2. Click on "Tools" in the menu.
3. Select "Active Directory Users and Computers".
4. Right-click on the "Computers" folder.
5. Select "New" and then choose "Computer" from the menu.

 

carine_8-1748879719364.png

 

6. Enter the computer name following your company's naming conventions and best practices.
Click "OK" to finish.

 

carine_9-1748879778024.png

 


7. Right-click on the domain name.
8. Choose "Delegate Control".

 

carine_10-1748879822478.png

 

9. The "Delegation of Control Wizard" will open. Click "Next".

 

carine_11-1748879945704.png

 

Click "Add".

 

carine_12-1748879986105.png


10. Select "Object Types".

 

carine_13-1748880178926.png

 

11. Check the box for "Computers" and uncheck the other objects. Click "OK".

carine_14-1748880224585.png

 

12. Enter the SQL Server computer object. Click "OK".

carine_15-1748880319134.png

 

 

13. With the computer object selected, click "Next".

carine_16-1748880364006.png

 

 

14. Check the box "Create a custom task to delegate". Click "Next".

carine_17-1748880412383.png

 

15. Check the box "Only the following objects in the folder".
16. Check the boxes for "Computer objects", "Create selected objects in this folder", and "Delete selected objects in this folder". Click "Next".

 

carine_18-1748880558944.png

 

17. Check the box "Full Control". The other boxes will automatically get checked. Click "Next".

 

carine_19-1748880618818.png

 

Click "Finish".

 

carine_20-1748880658528.png

 

18. To give the cluster full permission to all four nodes, right-click on the cluster computer object in Active Directory.
19. Select "Properties".

 

carine_21-1748880734452.png


20. Select the "Security" tab. Click "Add" to add all four nodes in the cluster.

 

carine_22-1748880824414.png

 

21. Select all the nodes in the cluster and the service account.
22. Give full permission by checking the "Full Control" box.
23. Click "Apply" and then "OK".

 

carine_23-1748880875228.png

 

Now that our cluster and computer object have full permissions, we can proceed with installing SQL Server.


Installing SQL Server on all nodes

To install SQL Server, follow these steps:
1. Open the media file on your computer.
2. Select "Setup".

 

carine_0-1748882014222.png

 

3. In the SQL Server Installation Center, select "Installation".
4. Click on "New SQL Server failover cluster installation".

 

carine_1-1748882074700.png

 

5. Allow the setup to run.
6. In the "Edition" tab, enter your edition and product key. Click "Next".

 

carine_2-1748882124545.png

 

7. On the "License Terms" page, accept the terms. Click "Next".

carine_3-1748882170698.png

 

 

8. On the "Microsoft Update" page, click "Next".

 

carine_4-1748882225064.png

 

9. Click "Next" on the "Install Setup Files" page and allow the installation of Failover Cluster Rules. If there are no failures, proceed with the setup. If there are failures, fix them and then proceed.
Note: If you did not run cluster validation, it will fail at this point.

 

carine_5-1748882290207.png

 

10. Select the key features to install.

carine_6-1748882329916.png

 

 

11. On the "Instance Configuration" page, enter the SQL Server computer object created in Active Directory with full permission.
12. Select "Named instance" and enter the instance name following your company's naming conventions.

 

carine_7-1748882385895.png

 

13. On the "Cluster Resource Group" page, click "Next".

 

carine_8-1748882435435.png

 

14. Select all the disks to be added and click "Next".

 

carine_9-1748882483907.png

 

15. Enter the Cluster Network Configuration IP address. This IP was generated on the EC2 node. Click "Next".

 

carine_10-1748882824518.png

 

16. On the "Server Configuration" page, enter the SQL Server account name and password with full permission.
17. Click on the check box "Grant Perform Volume Maintenance Task privilege to SQL Server Engine Service". Click "Next".

 

carine_11-1748882934991.png

 

18. On the "Database Engine Configuration" page, click "Add Current User".
19. Check the box "Mixed Mode" and provide the password. Click "Next".

carine_12-1748883016739.png

 

 

20. Enter the required information. Click "Next"

 

carine_13-1748883077093.png

 

21. Click on the "Data Directories" tab.
22. Select the disk created for the data directories. Click "Next".

 

carine_14-1748883149136.png

 

23. On the "Ready to Install" page, review the summary of all the features that will be installed.

 

carine_15-1748883190355.png

 

24. Allow the installation to run to completion.

 

carine_16-1748883251632.png

 

The installation is completed.

 

carine_17-1748883483000.png

To install SQL Server on the second node, follow these steps:
1. Open the media file on the second node.
2. Select "Setup".
3. In the SQL Server Installation Center, select "Installation".
4. Choose "Add a node to a SQL Server failover cluster".

 

carine_18-1748883565192.png

 

5. Allow the setup process to run.
6. Select "Evaluation" and then click "Next".

 

carine_19-1748883622116.png

 

7. Check the box to accept the License Terms and then click "Next".

 

carine_20-1748883700207.png


8. Click "Next" on the "Microsoft Update" page.
9. Click "Next" on the "Product Updates" page.
10. Allow the setup files to run, checking for potential problems.

 

carine_21-1748883756410.png

 

11. The cluster node is selected. Click "Next".

 

carine_22-1748883818313.png

 

12. Uncheck the box for DHCP.
13. Check the box for IPv4.
14. Enter the secondary IP address belonging to the node. Click "Next".

carine_23-1748883881277.png

 

15. Click "Yes" on the pop-up window.

 

carine_24-1748883944099.png

 

16. Enter the password for the service account.
17. Check the box "Grant Perform Volume Maintenance Task privilege to SQL Server Engine Service". Click "Next".

 

carine_25-1748883994496.png


18. On the "Ready to Add Node" page, click "Install".

 

carine_26-1748884042717.png

 

19. Allow the installation to run to completion.

 

carine_27-1748884081409.png

 

Repeat steps 1-13 on the third node.

Enter the SQL Server IP address (WinFCISQLoreN02)

 

carine_28-1748884192067.png

 

Repeat the same process on the fourth node (WinFCISQLvirN02)

carine_29-1748884267517.png

 

We have completed the installation of SQL Server.


Installing SQL Server Management Studio

To install SQL Server Management Studio (SSMS), follow these steps:
1. Locate the media file for SQL Server Management Studio on your computer.
2. Open the media file.
3. Click on "Install"

carine_30-1748884367125.png

 

4. Allow the installation to run to completion.

 

carine_31-1748884416143.png

 

5. The setup is completed.

carine_32-1748884453023.png

 


Performing Failover of SQL Server to DR Region (Virginia)

To perform a failover, we need to pause or stop the two active nodes (WinFCISQLoreN01 & WinFCISQLoreN02) in Oregon.
The state of the FCI cluster before the failover:

Cluster management IPs are all active.

carine_33-1748884551229.png

 

Status of the Cluster role (Running)

 

carine_34-1748885443961.png

 

All SQL Server Nodes are running.

 

carine_35-1748885485691.png

 

All the disks/volumes are online.

carine_0-1748886248571.png

 

Checking the status of the SnapMirror relationship:

carine_1-1748886289927.png

 

Deleting 3 tables on the database (DemoDB) before failover (dbo.OrderItems, dbo.Orders & dbo.Products).
The DemoDB database before failover:

 

carine_2-1748886429130.png

 



State of the DemoDB database after the tables are deleted:

carine_3-1748886482166.png

 

Pausing the active nodes (WinFCISQLoreN01 & WinFCISQLoreN02):

 

carine_4-1748886523512.png

 

 

State of the SQL Server role after failover: Failed

 

carine_5-1748886606533.png

 

Breaking the SnapMirror relationship between the active region and the DR region.
1. Log in to the CLI of the DR file system (subsqlfsx):
o Copy the administrator IP of the DR file system.
o Log in to any CLI of your choice using SSH.
2. Script to break the SnapMirror relationship:

 

snapmirror break -destination-path virgsva:data_cp
snapmirror break -destination-path virgsva:log_cp
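After the break, the destination volumes change from DP to RW. This can be confirmed from the CLI; a sketch using the volume names above:

volume show -vserver virgsva -volume data_cp,log_cp -fields type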

 

Status of the volumes

carine_6-1748886802147.png

 

 

In the Failover Cluster Manager, refresh or perform a failover to any node in the DR region to bring the SQL Server resources online.

carine_7-1748886856905.png

 

 

Failing over to node 1 (WinFCISQLvirN01):

carine_8-1748886893175.png

 

 

Log in to SQL Server Management Studio (SSMS) to verify that all deleted tables remain deleted.

 

carine_9-1748886928239.png

 

The state of the volumes becomes read/write (RW) at the DR region (Virginia) after breaking the SnapMirror relationship.

carine_10-1748887001975.png

 

 

Let’s create a database (Dr_DemoDB) at the DR region (Virginia).

 

carine_11-1748887059204.png

 

Let's create a folder at the volume level.

 

carine_12-1748887097613.png

 

Bringing the nodes online and resyncing the data.
With all nodes back online and holding read/write access, the role can fail over between the nodes.
The DR_DemoDB database created on the DR nodes is missing on the active nodes. We need to perform a data resync between the active region (Oregon) and the DR region (Virginia).

 

carine_0-1748888016625.png

 

DR_DemoDB present at the DR node.

 

carine_1-1748888103813.png


All nodes have failed over.

carine_2-1748888157819.png

 

 

Resyncing the data.
Script to resync the volumes from the current source region (Virginia) to the destination region (Oregon):

snapmirror resync -source-path virgsva:data_cp -destination-path trisqlva:data
snapmirror resync -source-path virgsva:log_cp -destination-path trisqlva:log

 

carine_3-1748888253221.png

 

The resync commands running against the destination volumes in Oregon:
snapmirror resync -source-path virgsva:data_cp -destination-path trisqlva:data
snapmirror resync -source-path virgsva:log_cp -destination-path trisqlva:log

 

carine_4-1748888377281.png

 

Verifying the data has resynced at the active region (Oregon):

carine_5-1748888412308.png

 

To make the source volume active again, you need to break the relationship; this moves the volume from DP back to RW.
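A sketch of that failback break, assuming the reversed relationships created by the resync above (the Oregon volumes are now the destinations):

snapmirror break -destination-path trisqlva:data
snapmirror break -destination-path trisqlva:log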

 

carine_6-1748888449364.png

 

Installing SnapCenter

To install SnapCenter, some prerequisites must be met:
You must have installed the dotnet-hosting 8.5 version and PowerShell 7.4.2.

To install SnapCenter, locate the media file and open it.

 

carine_7-1748888902715.png

 


Click 'Next' on the welcome wizard.
Click 'Next' again and allow the installation to run to completion.

carine_8-1748888936708.png

 

 

Click 'Finish'. The installation took about 5 minutes to complete.

carine_9-1748888996159.png

 

Open the SnapCenter application in any browser of your choice.
Enter the username and password.

 

carine_10-1748889060170.png

 

Click on "Get Started" to follow the configuration steps.

 

carine_11-1748889110605.png

 

Add storage connections. Click on the hyperlink; it will take you directly to the storage system. Enter both FSx file systems.
File system Virginia

carine_12-1748890137437.png

 

File system Oregon

 

carine_13-1748890345658.png

 

Both File systems were added.

Add your domain into SnapCenter.

 

carine_14-1748890395398.png

 

Domain added.

 

carine_15-1748890420626.png

 

Add your credentials: click on "Credentials" at the top, select "New", and enter your credential information.

 

carine_16-1748890451674.png

 

Add the host. Click on "Hosts", select "New", enter the host information, check the boxes for Microsoft SQL Server and Microsoft Windows, and click "Submit".

 

carine_17-1748890533027.png

 

It is going to validate the host to ensure it meets the requirements, register the host, and finally install the SQL Server plug-in.

carine_18-1748890589523.png

 

SQL Server plug-ins are being added to all the hosts.

carine_19-1748890621191.png

 

SQL Server Plugin fully installed.

 

carine_20-1748890779273.png

 

Click on "Disks" to make sure all the disks are present.

carine_21-1748890814120.png

 

Move to the Resources tab to make sure all the SQL Server resources are present.
Click on "Configure log directory".
The FCI instance is pulled in; click "Browse" and select the file path.

carine_22-1748890845443.png

 

Click "Save". The overall status changes to "Running". SnapCenter is fully configured to back up SQL Server databases.

 

carine_23-1748890876958.png

 

Creating a Full and Log backup Policy

To create a Full Backup policy, go to Settings where the Policy is selected by default. Click on 'New' to start creating a policy.

Enter the policy name and a brief description of the policy.

 

carine_24-1748890971167.png

 

Under Policy Type, select Full Backup.

 

carine_25-1748891402451.png

 

Select the schedule frequency per your RTO/RPO requirements.

carine_26-1748891832407.png

 

 

Under Replication and Backup, select 'Update SnapMirror after creating a local Snapshot copy' and from the drop-down menu, select 'Hourly'.

 

carine_27-1748891984468.png

 

Skip the Script and Verification tabs.
Click "Finish" on the Summary tab.

 

carine_28-1748892020900.png

 

Creating a Log backup

Still in the Policy section, click 'New' and enter the name and a detailed description.

carine_29-1748892067189.png

 

Under Policy Type, select Log Backup.

carine_30-1748892107491.png

 

Under Schedule Frequency, select Hourly.

 

carine_31-1748892138160.png

 

Under Replication and Backup, check the box for 'Update SnapMirror after creating Snapshot copy'.
Move to the Secondary Policy Label and select 'Hourly' from the drop-down menu.

 

carine_32-1748892192840.png

 

Skip the Script and Verification tabs, and click 'Finish' on the Summary tab

 

carine_33-1748892283391.png

 

Full and Log Backup policies have been fully created.

carine_34-1748892327793.png

 

Move to the Resource tab and select the database (DemoDB) to back up.
Select the policy from the drop-down menu and click the plus sign to set the schedule.

carine_35-1748892411969.png

 

If there are multiple verification servers, select the appropriate one and click on the Load Locator to pull up the destination volumes.

 

carine_36-1748892592368.png

 

Click 'Finish' on the Summary page.

carine_37-1748892625850.png

 

Click on 'Backup Now' and then click on 'Backup'.

 

carine_38-1748892662667.png

 

Backup completed.

 

carine_39-1748892756337.png

 

SnapMirror Backup Topology

 

carine_40-1748892813851.png

 

Applying the Log Backup policy to the database.
Select 'Modify' from the top right and click 'Next' on the Protect Database page.
Select the policy from the drop-down menu, click the plus sign, and set the time.

 

carine_41-1748892908336.png

 

Click on the Load Locator to pull up the volumes.

 

carine_42-1748892957711.png

 

Skip the Notification tab and click 'Finish' on the Summary tab.

 

carine_43-1748893002058.png

 

Run an on-demand log backup to verify everything is working correctly, with no failures or warnings.
Select 'Backup Now', choose the log backup policy, and click 'Backup'.

 

carine_44-1748893056292.png

 

Log backup completed.

 

carine_45-1748893096719.png


Perform Database Restore (Data Protection)

Restoring SQL Server Database to the DR Region Using SnapCenter
The following steps must be followed.
• To bring the DR node up and running, the SnapMirror relationship must be broken.
• Activate DR in SnapCenter
• Proceed to restore the database.
Let's delete some tables and then restore the database.
Open Node 1 (WinFCISQLore01) to view the state of the database before deleting the tables (dbo.Table4 and dbo.Table5).

 

carine_46-1748893192328.png

 

The database state after the tables have been deleted.

 

carine_47-1748893225978.png
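The deletions above amount to two DROP TABLE statements. A T-SQL sketch, assuming DemoDB is the database shown in the screenshots:

```sql
USE DemoDB;
GO
-- Delete the two demo tables
DROP TABLE dbo.Table4;
DROP TABLE dbo.Table5;
GO
-- Confirm they no longer appear in the table list
SELECT name FROM sys.tables ORDER BY name;
GO
```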

 

Pause the two active nodes to trigger a failover, bringing the DR nodes online.

 

carine_48-1748893294236.png
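Pausing the nodes can also be done from PowerShell on one of the cluster nodes. This is a sketch: only WinFCISQLore01 appears in the screenshots, so the second active node's name is an assumption; substitute your own node names.

```powershell
# Requires the FailoverClusters module (installed with the failover clustering feature).
Import-Module FailoverClusters

# Drain roles off the two active nodes and pause them,
# so the FCI role can only come online on a DR node.
Suspend-ClusterNode -Name "WinFCISQLore01" -Drain
Suspend-ClusterNode -Name "WinFCISQLore02" -Drain   # assumed node name

# Verify: the paused nodes should report a State of "Paused".
Get-ClusterNode | Select-Object Name, State
```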

 

Run the following ONTAP CLI commands on the destination cluster to break the SnapMirror relationships.

snapmirror break -destination-path virgsva:data_cp
snapmirror break -destination-path virgsva:log_cp
snapmirror break -destination-path virgsva:logdirectory_cp

 

carine_49-1748893341930.png
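After running the break commands, the relationship state can be confirmed from the destination cluster CLI; each relationship should report a state of Broken-off:

```
snapmirror show -destination-path virgsva:data_cp -fields state,status
snapmirror show -destination-path virgsva:log_cp -fields state,status
snapmirror show -destination-path virgsva:logdirectory_cp -fields state,status
```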

 

Open the Cluster Manager and manually fail over to the DR node (WinFCISQLvir01)

 

carine_50-1748893378018.png
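The same failover can be driven from PowerShell instead of Failover Cluster Manager. A sketch, assuming the FCI role uses the default name "SQL Server (MSSQLSERVER)"; adjust to the role name shown in your cluster:

```powershell
Import-Module FailoverClusters

# Move the SQL Server FCI role to the DR node.
Move-ClusterGroup -Name "SQL Server (MSSQLSERVER)" -Node "WinFCISQLvir01"
```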

 

On the SnapCenter server, go to Settings, select Global Settings, and click on Disaster Recovery

 

carine_51-1748893433718.png

 

Check the box for "Enable Disaster Recovery" and click Apply.

carine_52-1748893467668.png

 

When a pop-up window appears to enable Disaster Recovery, click OK

 

carine_53-1748893510210.png

 

Move to the Resources tab and select the database to restore.
Since the database is in a failed state, click on Mirror Copies and select the most recent backup copy.

carine_54-1748893559304.png

 

After selecting the most recent backup, click on Restore.

 

carine_55-1748893606088.png

 

The restore wizard opens with the destination volumes populated. Click Next.

carine_56-1748893657160.png

 

Leave the default setting, "Restore the database to the same host where the backup was created."

 

carine_57-1748893699195.png

 

Choose one of the four log restore options; in this case, select the option "Restore by a log backup until."

carine_58-1748893739656.png

 

 

Check the box "Overwrite the database with the same name" during restore.

 

carine_59-1748893777775.png

 

Click Next on the Post-Restore Options.

 

carine_60-1748893804970.png

 

Click Next on the Email Settings.

carine_61-1748893828069.png

 

Click Finish on the Summary.

carine_62-1748893859524.png

 

Move to the Monitor tab to track the progress of the job. The restore job has been completed.

 

carine_63-1748893914388.png

 

Move over to the node (WinFCISQLvirN01) to verify that the tables have been successfully restored.

 

carine_64-1748893956932.png

 

 
