Tech ONTAP Blogs
Previously, the BlueXP Backup and Recovery service provided DataLock and Ransomware protection only for FlexVol volumes. With the June ’23 release, DataLock and Ransomware protection has been extended to FlexGroup volumes as well. The feature locks the cloud Snapshot copies replicated via SnapMirror Cloud (SM-C), detects ransomware attacks, and recovers a consistent copy of the cloud Snapshot. The solution uses both SM-C and the Active Data Connector (ADC) to achieve this functionality.
• Supported sources: On-prem and CVO, minimum ONTAP version 9.13.1, ADC 1.9 and above.
• Supported destination: AWS, StorageGRID, and Azure
• Supported deployment: Standard (SaaS), Restricted (Gov Cloud), and Private(Dark-Site) deployments
• Applicable only for new backup activation (not applicable on existing backup copies)
• No interoperability with archival policy
• Currently not supported on GCP and ONTAP S3
• Directory restores from FlexGroup volume backup are not supported.
To enable DataLock and Ransomware Protection, choose the appropriate mode, Governance or Compliance, under the “DataLock and Ransomware Protection” section of the “Define Policy” page of the “Activate Backup for Working Environment” wizard, as shown below.
• If you choose “Governance” mode, users with specific permissions can overwrite or delete protected backup files during the retention period. In “Compliance” mode, no user can overwrite or delete protected backup files during the retention period.
Important:-
Make sure that “DataLock and Ransomware Protection” is set while enabling the BlueXP Backup and Recovery service on the Working Environment.
1. If you do not set “DataLock and Ransomware Protection” while enabling the BlueXP Backup and Recovery service on the Working Environment, you will not be able to enable the feature afterward.
2. DataLock protection mode (Governance and Compliance) cannot be changed after the policy is created.
3. The DataLock and Ransomware Protection feature cannot be disabled or modified once enabled.
When “DataLock and Ransomware Protection” is enabled, the cloud bucket provisioned as part of the backup activation process has object locking enabled. Auto-purging of non-current versions is also enabled on the bucket and set to 1 day.
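The equivalent bucket settings can be illustrated with the generic AWS CLI as shown below; this is for illustration only, BlueXP provisions and configures the bucket automatically, and the bucket name here is a placeholder.

# Illustration only: a bucket created with Object Lock (which implies versioning) enabled,
# plus a lifecycle rule that expires non-current object versions after 1 day.
aws s3api create-bucket --bucket netapp-backup-example --object-lock-enabled-for-bucket
aws s3api put-bucket-lifecycle-configuration --bucket netapp-backup-example \
  --lifecycle-configuration '{"Rules":[{"ID":"purge-noncurrent-versions","Status":"Enabled","Filter":{},"NoncurrentVersionExpiration":{"NoncurrentDays":1}}]}'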
In this section, we will discuss BlueXP Backup and Recovery policy behavior when Backup is enabled with a Governance or Compliance policy in the Working Environment.
Working Environment Level Behaviour
• If the policy used while enabling BlueXP Backup and Recovery service on the Working Environment is a Governance or Compliance policy, you cannot create normal policies thereafter.
• Policies with DataLock and Ransomware Protection feature cannot be created on an already BlueXP Backup and Recovery-enabled Working Environment.
• If the initial policy is a Governance policy, you can create different Governance policies and move the FlexGroup volume to a different Governance policy. Note that you will not be able to create a Compliance-enabled policy.
• If the initial policy is a Compliance policy, you can create different Compliance policies and move the volume to a different Compliance policy. Note that you will not be able to create a Governance-enabled policy.
FlexGroup Volume Level Behaviour
• If the FlexGroup volume is assigned a Governance policy, it cannot be changed to a normal policy.
• If the FlexGroup volume is assigned a Compliance policy, it cannot be changed to a normal policy.
• If the FlexGroup volume is assigned a normal policy, it cannot be changed to a Governance or Compliance policy, because the DataLock and Ransomware Protection feature cannot be enabled on a Working Environment where CBS is already enabled.
To lock an object, cloud providers provide a way to set a ‘Retention Until Date’ (RUD, which is calculated based on the Snapshot Retention Period) in the object metadata; until that date, the object version cannot be deleted or overwritten.
What is Snapshot Retention Period (SRP) and how is it calculated?
When “DataLock and Ransomware Protection” is enabled through the BlueXP Backup and Recovery policy, the Snapshot Retention Period (SRP) is calculated from the label and retention count defined by the user in the BlueXP Backup and Recovery policy.
The minimum SRP assigned is 30 days.
Let's try to understand how the Snapshot Retention Period (SRP) is calculated:
• If the user chooses the Daily label with a retention count of 20, the SRP is 20 days, which is raised to the 30-day minimum.
• If the user chooses the Weekly label with a retention count of 4, the SRP is 28 days, which is raised to the 30-day minimum.
• If the user chooses the Monthly label with a retention count of 3, the SRP is 90 days.
• If the user chooses the Yearly label with a retention count of 1, the SRP is 365 days.
DataLock on an object is set by applying a retention period to an object version explicitly by specifying a “Retain Until Date or RUD” for the object version. Amazon S3 stores the Retain Until Date setting in the object version's metadata and protects the object version until the retention period expires.
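As a point of reference, this is roughly how a Retain Until Date is applied to an object version with the generic AWS CLI; BlueXP sets it internally during the transfer, and the bucket, key, version ID, and date below are placeholders.

# Illustration of the underlying S3 Object Lock mechanism (not BlueXP's internal API).
# The Mode matches the DataLock mode chosen in the policy (GOVERNANCE or COMPLIANCE).
aws s3api put-object-retention \
  --bucket netapp-backup-example \
  --key backups/snapshot-0001/object-0001 \
  --version-id EXAMPLEVERSIONID \
  --retention '{"Mode":"GOVERNANCE","RetainUntilDate":"2024-01-15T00:00:00Z"}'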
What is Retention Until Date (RUD) and how is it calculated?
• The 'Retention Until Date' (RUD) is computed based on the SRP and is recorded in the metadata of the object while it is transferred using SM-C.
• The 'Retention Until Date’ (RUD) is calculated by summing the SRP and the Buffer.
• Buffer = Buffer for Transfer Time (6 days) + Buffer for Cost Optimization (8 days). Buffer is set as 14 days.
• The minimum RUD is therefore 30 days + 14 days = 44 days.
Example:-
• If you create a Monthly backup schedule with 12 retentions, your backups are locked for 12 months (plus 14 days) before they are deleted (replaced by the next backup file).
• If you create a backup policy that creates 30 daily, 7 weekly, and 12 monthly backups there will be three locked retention periods. The "30 daily" backups would be retained for 44 days (30 days plus 14 days buffer), the "7 weekly" backups would be retained for 9 weeks (7 weeks plus 14 days), and the "12 monthly" backups would be retained for 12 months (plus 14 days).
• If you create an Hourly backup schedule with 24 retentions, you might expect the backups to be locked for 24 hours. However, since that is less than the 30-day minimum, each backup will be locked and retained for 44 days (30 days plus the 14-day buffer). A short sketch of this arithmetic follows.
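The arithmetic above can be sketched in a few lines of shell; this is an illustration only and assumes the retention counts have already been converted to days.

# Sketch only: lock period in days = max(SRP, 30) + 14-day buffer
lock_days() {
  local srp_days=$1
  (( srp_days < 30 )) && srp_days=30
  echo $(( srp_days + 14 ))
}
echo "Hourly  x24 -> $(lock_days 1) days"    # 44 days (30-day minimum applies)
echo "Daily   x30 -> $(lock_days 30) days"   # 44 days
echo "Weekly  x7  -> $(lock_days 49) days"   # 63 days (9 weeks)
echo "Monthly x12 -> $(lock_days 360) days"  # 374 days (~12 months plus 14 days)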
Please Note:-
Be aware that old backups are deleted after the DataLock Retention Period expires, not after the backup policy retention period.
• The DataLock retention setting overrides the policy retention setting from your backup policy. This could affect your storage costs as your backup files will be saved in the object store for a longer period of time.
How do we set Retention Until Date (RUD) on the cloud backups?
• BlueXP Backup and Recovery uses the snapshot list REST API in Active Data Connector (ADC) to determine all the snapshots that are yet to be locked based on the SnapMirror Policy.
• For each of these snapshots, it uses ADC to stamp the RUD in all the objects belonging to the snapshot. This guarantees that the snapshot is locked until the RUD expires.
Ransomware Scan
In this section, we will examine how Ransomware detection scans are run by BlueXP Backup and Recovery. As soon as you enable BlueXP Backup and Recovery in the Working environment and configure "DataLock and Ransomware Protection," the ransomware scans are initiated. The ransomware scans are run in the below-mentioned scenarios.
• The scans on the cloud backup objects are initiated soon after they are transferred to the cloud object store. The scan is not performed on a backup file when it is first written to cloud storage, but when the next backup file is written.
• The Ransomware Scans can be initiated when the backup is selected for the restore process.
• The scan can also be done on-demand as required by the user.
How does the scan work?
Now let's try to understand how the Ransomware scans work.
• Before the Ransomware scans are initiated, BlueXP Backup and Recovery checks to make sure that the snapshot is stamped.
• BlueXP Backup and Recovery service employs the Active Data Connector Integrity Checker REST API to start Ransomware scanning as necessary.
• The detection of ransomware attacks is performed using checksum comparison.
• The Active Data Connector Integrity Checker REST API triggers a ransomware scan of the cloud backup objects in the cloud object store by verifying the checksums of the different backup object versions (a conceptual sketch follows this list).
• Based on the result of the scan, BlueXP Backup and Recovery initiates the recovery process. Snapshots can also be scanned on demand for ransomware attacks if necessary.
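The comparison itself happens inside the ADC Integrity Checker; the sketch below only illustrates the idea of comparing checksums across object versions, using generic AWS CLI calls and hypothetical bucket and key names.

# Conceptual sketch only; BlueXP performs this through the ADC Integrity Checker API.
BUCKET="netapp-backup-example"                 # hypothetical bucket
KEY="backups/snapshot-0001/object-0001"        # hypothetical backup object

# List the version IDs for the object (newest first) and take the two most recent.
versions=$(aws s3api list-object-versions --bucket "$BUCKET" --prefix "$KEY" \
  --query 'Versions[].VersionId' --output text)
set -- $versions
current=$1; previous=$2
[ -n "$previous" ] || { echo "Only one version exists; nothing to compare"; exit 0; }

# Download both versions and compare their checksums; a mismatch is suspicious because
# locked backup objects are never expected to change.
aws s3api get-object --bucket "$BUCKET" --key "$KEY" --version-id "$current" /tmp/current.obj > /dev/null
aws s3api get-object --bucket "$BUCKET" --key "$KEY" --version-id "$previous" /tmp/previous.obj > /dev/null
if [ "$(sha256sum < /tmp/current.obj)" != "$(sha256sum < /tmp/previous.obj)" ]; then
  echo "Checksum mismatch on $KEY: potential ransomware activity"
fi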
How does the Recovery process work?
When a Ransomware attack is detected, BlueXP Backup and Recovery uses the Active Data Connector Integrity Checker REST API to start the recovery process. The oldest version of the data objects is the source of truth and is made into the current version as part of the recovery process.
Let's see how this works:-
• In the event of a ransomware attack, the ransomware tries to overwrite or delete the objects in the bucket.
• Since the buckets are versioning-enabled, a new version of the backup object is created automatically. If an object is deleted while versioning is on, it is only marked as deleted and is still retrievable. If an object is overwritten, the previous versions are retained and marked.
• When a ransomware scan is initiated, the checksums are computed for both object versions and compared. If the checksums are inconsistent, potential ransomware has been detected.
• The recovery process involves reverting to the last known good copy: a new version (V3) is created as a duplicate of the known-good version (V1), as sketched below.
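The version-promotion idea can be illustrated with generic AWS CLI calls as follows; in practice the recovery is driven by the ADC Integrity Checker API, and the names below are hypothetical.

# Conceptual sketch only: promote the oldest (known-good) version back to the current version.
BUCKET="netapp-backup-example"
KEY="backups/snapshot-0001/object-0001"

# The oldest version is the source of truth.
oldest=$(aws s3api list-object-versions --bucket "$BUCKET" --prefix "$KEY" \
  --query 'Versions[-1].VersionId' --output text)

# Copying that version onto the same key creates a new current version (V3 = copy of V1).
aws s3api copy-object --bucket "$BUCKET" --key "$KEY" \
  --copy-source "${BUCKET}/${KEY}?versionId=${oldest}"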
Please Note:-
• DataLock and Ransomware Protection feature scans only cloud backups. It does not support scanning local snapshots, and Ransomware attacks on local snapshots cannot be detected.
Understanding the UI for DataLock and Ransomware Protection for FlexGroup
In this section, we will look at the various BlueXP Backup and Recovery UI changes introduced to show the status and results of the DataLock and ransomware scans run on the cloud backups stored in the cloud object store.
Backup Volume Page
A new “Ransomware Scan” column has been introduced on the Backup Volume page. It displays the different statuses of the ransomware scans at the FlexGroup volume level, such as potential ransomware identified, a tool-tip showing the last scan time, and successful ransomware scan with scan time.
Backup Details Page
A new “Ransomware Scan” column has been introduced on the Backup Details Page. It displays the different statuses of the Ransomware scans on the backup level like potential ransomware identified, tool-tip showing the last scan time, ransomware scan failure with scan time, and successful ransomware scan with scan time.
Canvas Page
Notifications have been included on the Canvas Page which notifies that a potential ransomware attack has been identified on a backup copy of a specific FlexGroup volume related to a specific Working Environment.
Browse and Restore Pages
A new “Ransomware Scan” column has been introduced on the Selected Backup Details page. It displays the different statuses of the ransomware scans at the backup level, such as potential ransomware identified, a tool-tip showing the last scan time, ransomware scan failure with scan time, and successful ransomware scan with scan time.
Browse and Restore Pages – Restore Message
A “Ransomware Scan” UI is shown upon selecting a snapshot to restore the backup. This restore confirmation message shows the details of the DataLock mode and the last run scan time, and also includes a recommendation to run a ransomware scan before proceeding with the restore. This scan is optional; the user can uncheck it to skip the ransomware scan.
Search and Restore Page
More details about the ransomware scan have been provided on the “Search to Restore Volume Backup Details” right navigation pane. It displays the different statuses of the ransomware scans at the backup level, such as potential ransomware identified, a tool-tip showing the last scan time, ransomware scan failure with scan time, and successful ransomware scan with scan time.
Search and Restore Page- Restore UI
The “Restore Location for Selected File” UI under the Search and Restore feature now also displays the backup DataLock mode and the status of the ransomware scan run.
Clicking the “Next” button brings up a “Ransomware Scan” UI, which displays the DataLock mode, the previous scan time, and the result of the ransomware scan. It also shows a recommendation to run a ransomware scan before proceeding with the restore process. This scan is optional; the user can uncheck it to skip the ransomware scan.
In the June ’23 release, BlueXP Backup and Recovery service introduced a backup inventory report as part of its reporting feature. This report will encompass the entirety of the protected volumes within a specified scope, which can be an Account, Working Environment, or SVM. The report will provide comprehensive information about the available backup copies for each volume within the selected scope, including their key attributes.
This addition greatly enhances the ability to perform efficient analysis of the backup data. The report helps users to:
• verify that all critical data within a specific working environment adheres to the organization's designated protection and security policies,
• confirm that all desired volumes have been successfully backed up at the intended point in time,
• get a comprehensive overview of backup status and availability for the organization's production data,
• ensure, by regularly reviewing and analyzing these reports, that data backup and recovery processes align with recovery time objectives (RTOs) and recovery point objectives (RPOs), minimizing potential downtime and the impact of ransomware attacks.
How do I create a Backup Inventory Report?
A new “Report” tab has been introduced on the “BlueXP Backup and Recovery” ribbon, where the users will be able to download the Backup Inventory Report. There are two scopes available from which the Backup Inventory Report can be generated. You can generate the report on the “Account” level or you can select the scope from which you would like to download the report.
If you choose to generate the report at the “Account” level, the Backup Inventory report for all Working Environments, SVMs, and Volumes will be generated.
If you choose “Custom”, you can choose the scope from which you would like to generate a Backup Inventory report. You can choose a specific set of Working Environments or SVMs.
Click on “Create Report”
Clicking on “Create Report” creates a report, as shown above, which contains a mini dashboard summarizing the Account, the Volume Backup Status (protected and unprotected volumes), and the distribution of backup copies, along with a grid that gives the complete details of the backed-up volumes. The grid shows the Backup Copies, Last Backup Time and Date, Backup Policy Name, Backup Location, Backup Account ID, Backup Encryption Style, Ransomware Protection, and Archive Policy.
You can download the report by clicking on the “Download CSV” button available in the top-right-hand corner of the Volume details grid.
If you have deployed the Private (Dark-Site) BlueXP Connector for backing up your on-premises ONTAP to a StorageGRID object store bucket, it is crucial to regularly create backups of the MySQL database and Index Catalog files to mitigate the risks associated with infrastructure outages of the BlueXP Connector. These backups enable you to deploy a new Connector and restore the critical BlueXP Backup and Recovery data.
In the latest release of the Private (Dark-Site) BlueXP Connector, backups of the MySQL database and Index Catalog files are taken every 24 hours and uploaded to the bucket where the volume snapshots are backed up. With this feature in place, administrators no longer need to take backups of the MySQL database and Index Catalog files manually, as was recommended before.
Please Note:-
In a SaaS environment or when utilizing BlueXP Backup and Recovery with the BlueXP Connector deployed either at a cloud provider or on a host system with internet connectivity, all critical BlueXP backup and recovery configuration data is securely backed up and stored in the cloud. However, in a site without internet access, commonly referred to as "restricted mode" or a "dark-site," the BlueXP backup and recovery information is exclusively stored on the local Connector system.
There are 2 types of data that you need to back up:
• BlueXP Backup and Recovery database
• Indexed Catalog files (used for Search & Restore functionality)
Automated MySQL database and Index Catalog Backup
Let’s try to understand how the automated backup of the MySQL database and Index Catalog backup works in Private (Dark-site) connector deployment.
a) The Private (Dark-Site) BlueXP Connector automates the backup of the MySQL database and the Index Catalog.
b) The backup is taken every 24 hours.
c) The automated backups will be stored in the StorageGRID bucket where the regular volume backups will be stored.
d) BlueXP Backup and Recovery runs a mysqldump operation to create a backup of the database, and the resulting backup file is transferred to the bucket created for backing up the first working environment. The backup SQL file is uploaded to a folder named “mysql_backup” in the bucket.
e) To perform a restore, navigate to the bucket created for backing up the first working environment, copy the SQL file from the “mysql_backup” folder (an example of retrieving it follows this list), and then follow the restore and recovery procedures described in the restore section.
f) However, if administrators prefer to have regular backups at custom intervals, they still have the option to manually initiate the backup process.
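For example, the dump can be pulled back with any S3-compatible client; the StorageGRID endpoint, bucket, and object path below are placeholders for illustration only.

# Hypothetical example: retrieve the automated dump from the StorageGRID bucket.
aws s3 cp --endpoint-url https://storagegrid.example.com:10443 \
  s3://netapp-backup-we1/mysql_backup/mysql.dump.cloud_backup.sql .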
Manual MySQL database and Index Catalog Backup
In this section, we will discuss the steps to back up the MySQL database and Index Catalog manually. You can execute these steps manually to take a backup at the required interval, or combine the commands in a script (a sketch follows the steps below) to automate the backup process.
1. Back up the Dark-Site CBS database at regular intervals
a) Log in to the Dark-Site Connector using the appropriate credentials
b) Enter the MySQL container shell, using the command "docker exec -it ds_mysql_1 sh"
c) On the container shell, run the "env" command
d) Note the password of the MySQL DB by copying the value of the key "MYSQL_ROOT_PASSWORD"
e) Back up the BlueXP Backup and Recovery MySQL DB using the command "mysqldump --user root --password -p cloud_backup --result-file=mysql.dump.cloud_backup.sql"
f) Copy the backups from the MySQL docker container using the command "docker cp ds_mysql_1:/mysql.dump.cloud_backup.sql ."
g) Make sure to take the backup at regular intervals
h) Copy the backup to a secure location
2. Backup the Indexed Catalog Files
On the Dark Site connector VM, change the directory to "/opt/application/netapp/cbs"
a) Identify the Index Catalog folder, which starts with the string "catalog"
b) Now zip the catalog******** folder, using the command "zip -r catalog******.zip catalog******"
c) Make sure to take the backup at regular intervals
d) Copy the backup to a secure location
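The steps above can be combined into a simple script and scheduled (for example, via cron). The following is only a sketch under the same assumptions as the steps (container name ds_mysql_1, database cloud_backup, catalog folders under /opt/application/netapp/cbs); the destination path is a placeholder.

#!/bin/bash
# Sketch: back up the CBS MySQL database and the Indexed Catalog folders on a Dark-Site Connector.
set -e
DEST=/backup/cbs                              # placeholder: secure location for the copies
STAMP=$(date +%Y%m%d-%H%M)

# 1. Dump the cloud_backup database inside the ds_mysql_1 container, then copy the dump out.
docker exec ds_mysql_1 sh -c 'mysqldump --user root --password="$MYSQL_ROOT_PASSWORD" cloud_backup --result-file=/mysql.dump.cloud_backup.sql'
docker cp ds_mysql_1:/mysql.dump.cloud_backup.sql "$DEST/mysql.dump.cloud_backup.$STAMP.sql"

# 2. Zip each Indexed Catalog folder and keep the archive alongside the database dump.
cd /opt/application/netapp/cbs
for dir in catalog*; do
  zip -r "$DEST/${dir}.$STAMP.zip" "$dir"
done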
Restoring MySQL database and Index Catalog Backup to a New Connector
If your on-premises Connector has a catastrophic failure, you’ll need to install a new Connector, and then restore the BlueXP backup and recovery data to the new Connector.
There are six tasks you’ll need to perform to return your BlueXP backup and recovery system to a working state:
In this section, we will discuss how to restore the backed-up MySQL database and Index Catalog on a newly installed Private (Dark-Site) connector Deployment.
1. Install the Dark-Site connector on a new VM
a) Download the Dark-Site installer from the NetApp Support Site.
b) Log in to the NetApp Support Site, and navigate to the Download page for installing Cloud Manager on the Red Hat Enterprise Linux platform. https://mysupport.netapp.com/products/index.html
c) Download the Cloud Manager dark site installer Cloud-Manager-Connector-offline-v3.9.xx.zip file to a directory on the target system.
d) Verify the checksum to ensure that the software downloaded correctly.
e) Install the Dark-Site connector on a new VM.
1. Verify that docker is enabled and running.
sudo systemctl enable docker && sudo systemctl start docker
2. Copy the installer to the Linux host.
3. Assign permissions to run the script.
chmod +x /path/Cloud-Manager-Connector-offline-v3.9.xx
4. Run the installation script:
sudo /path/Cloud-Manager-Connector-offline-v3.9.xx
f) After the installation is complete, enter the Connector name, the organization name, the user name, email, and password.
g) Review all the details, select the review check box, and click "Create".
h) The new user will be created and will be redirected to the login screen.
i) For more detailed instructions on how to install Dark-Site connector, check out the following blog.
2. Restore the MySQL Backups
a. Copy the MySQL backups from the secure location onto the new VM where the connector is installed.
b. Copy the backups into the MySQL docker container using the command "docker cp mysql.dump.cloud_backup.sql ds_mysql_1:/."
c. Enter the MySQL container shell, using the command "docker exec -it ds_mysql_1 sh"
d. On the container shell, run the "env" command
e. Note the password of the MySQL DB by copying the value of the key "MYSQL_ROOT_PASSWORD"
f. Restore the BlueXP Backup and Recovery service MySQL DB using the command "mysql -u root -p cloud_backup < mysql.dump.cloud_backup.sql"
g. Verify that the BlueXP Backup and Recovery service MySQL DB has been restored using the following SQL commands:
# mysql -u root -p cloud_backup (enter the password when prompted)
mysql> show tables;
mysql> select * from volume;
Check that the volumes shown match those in the previous Dark-Site environment.
3. Restoring the Indexed Catalog files
a) Copy the backed-up catalog zip files into the /opt/application/netapp/cbs folder.
b) Unzip the catalog******.zip file using the command "unzip catalog******.zip"
c) Do an "ls" to make sure that the folder "catalog******" has been created with the "changes" and "snapshots" folders underneath it.
4. Discover the Working Environments
a) Make sure to discover all the working environments that were available on the previous Dark-Site environment
b) Click "Add Working Environment"
c) Click on "On-Premise" and choose type as "On-Premise ONTAP"
d) Now enter the Cluster Management IP address, username, and password.
e) Repeat the procedure until all the Working Environments are discovered.
5. Setting up the StorageGRID Environment Details
a) In this step, we will add the details of the StorageGRID associated with the Working Environments, as per the source Dark-Site setup.
b) In the first step, extract the authorization token using the following oauth/token API:
curl 'http://10.193.192.202/oauth/token' -X POST -H 'User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:108.0) Gecko/20100101 Firefox/108.0' -H 'Accept: application/json' -H 'Accept-Language: en-US,en;q=0.5' -H 'Accept-Encoding: gzip, deflate' -H 'Content-Type: application/json' -d '{"username":"xxxxxx@netapp.com","password":"xxxxxxx","grant_type":"password"}'
{"expires_in":21600,"access_token":"eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCIsImtpZCI6IjJlMGFiZjRiIn0.eyJzdWIiOiJvY2NtYXV0aHwxIiwiYXVkIjpbImh0dHBzOi8vYXBpLmNsb3VkLm5ldGFwcC5jb20iXSwiaHR0cDovL2Nsb3VkLm5ldGFwcC5jb20vZnVsbF9uYW1lIjoiYWRtaW4iLCJodHRwOi8vY2xvdWQubmV0YXBwLmNvbS9lbWFpbCI6ImFkbWluQG5ldGFwcC5jb20iLCJzY29wZSI6Im9wZW5pZCBwcm9maWxlIiwiaWF0IjoxNjcyNzM2MDIzLCJleHAiOjE2NzI3NTc2MjMsImlzcyI6Imh0dHA6Ly9vY2NtYXV0aDo4NDIwLyJ9.CJtRpRDY23PokyLg1if67bmgnMcYxdCvBOY-ZUYWzhrWbbY_hqUH4T-114v_pNDsPyNDyWqHaKizThdjjHYHxm56vTz_Vdn4NqjaBDPwN9KAnC6Z88WA1cJ4WRQqj5ykODNDmrv5At_f9HHp0-xVMyHqywZ4nNFalMvAh4xESc5jfoKOZc-IOQdWm4F4LHpMzs4qFzCYthTuSKLYtqSTUrZB81-o-ipvrOqSo1iwIeHXZJJV-UsWun9daNgiYd_wX-4WWJViGEnDzzwOKfUoUoe1Fg3ch--7JFkFl-rrXDOjk1sUMumN3WHV9usp1PgBE5HAcJPrEBm0ValSZcUbiA"}
c) Extract the Working Environment ID and the X-Agent-Id using the tenancy/external/resource API. In the response, the value under “resourceIdentifier” is the Working Environment ID and the value under “agentIds” is the x-agent-id.
curl -X GET "http://10.193.192.202/tenancy/external/resource?account=account-DARKSITE1" -H 'accept: application/json' -H 'authorization: Bearer eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCIsImtpZCI6IjJlMGFiZjRiIn0.eyJzdWIiOiJvY2NtYXV0aHwxIiwiYXVkIjpbImh0dHBzOi8vYXBpLmNsb3VkLm5ldGFwcC5jb20iXSwiaHR0cDovL2Nsb3VkLm5ldGFwcC5jb20vZnVsbF9uYW1lIjoiYWRtaW4iLCJodHRwOi8vY2xvdWQubmV0YXBwLmNvbS9lbWFpbCI6ImFkbWluQG5ldGFwcC5jb20iLCJzY29wZSI6Im9wZW5pZCBwcm9maWxlIiwiaWF0IjoxNjcyNzIyNzEzLCJleHAiOjE2NzI3NDQzMTMsImlzcyI6Imh0dHA6Ly9vY2NtYXV0aDo4NDIwLyJ9.X_cQF8xttD0-S7sU2uph2cdu_kN-fLWpdJJX98HODwPpVUitLcxV28_sQhuopjWobozPelNISf7KvMqcoXc5kLDyX-yE0fH9gr4XgkdswjWcNvw2rRkFzjHpWrETgfqAMkZcAukV4DHuxogHWh6-DggB1NgPZT8A_szHinud5W0HJ9c4AaT0zC-sp81GaqMahPf0KcFVyjbBL4krOewgKHGFo_7ma_4mF39B1LCj7Vc2XvUd0wCaJvDMjwp19-KbZqmmBX9vDnYp7SSxC1hHJRDStcFgJLdJHtowweNH2829KsjEGBTTcBdO8SvIDtctNH_GAxwSgMT3zUfwaOimPw'
[{"resourceIdentifier":"OnPremWorkingEnvironment-pMtZND0M","resourceType":"ON_PREM","agentId":"vB_1xShPpBtUosjD7wfBlLIhqDgIPA0wclients","resourceClass":"ON_PREM","name":"CBSFAS8300-01-02","metadata":"{\"clusterUuid\": \"2cb6cb4b-dc07-11ec-9114-d039ea931e09\"}","workspaceIds":["workspace2wKYjTy9"],"agentIds":["vB_1xShPpBtUosjD7wfBlLIhqDgIPA0wclients"]}]
d) Now, use the following API to update the database with the details of the StorageGRID associated with the Working Environments, as per the source Dark-Site setup. Make sure to provide the fully qualified domain name of the StorageGRID, the access key, and the secret key in the body, as shown below:
curl -X POST 'http://10.193.192.202/account/account-DARKSITE1/providers/cloudmanager_cbs/api/v1/sg/credentials/working-environment/OnPremWorkingEnvironment-pMtZND0M' \
--header 'authorization: Bearer eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCIsImtpZCI6IjJlMGFiZjRiIn0.eyJzdWIiOiJvY2NtYXV0aHwxIiwiYXVkIjpbImh0dHBzOi8vYXBpLmNsb3VkLm5ldGFwcC5jb20iXSwiaHR0cDovL2Nsb3VkLm5ldGFwcC5jb20vZnVsbF9uYW1lIjoiYWRtaW4iLCJodHRwOi8vY2xvdWQubmV0YXBwLmNvbS9lbWFpbCI6ImFkbWluQG5ldGFwcC5jb20iLCJzY29wZSI6Im9wZW5pZCBwcm9maWxlIiwiaWF0IjoxNjcyNzIyNzEzLCJleHAiOjE2NzI3NDQzMTMsImlzcyI6Imh0dHA6Ly9vY2NtYXV0aDo4NDIwLyJ9.X_cQF8xttD0-S7sU2uph2cdu_kN-fLWpdJJX98HODwPpVUitLcxV28_sQhuopjWobozPelNISf7KvMqcoXc5kLDyX-yE0fH9gr4XgkdswjWcNvw2rRkFzjHpWrETgfqAMkZcAukV4DHuxogHWh6-DggB1NgPZT8A_szHinud5W0HJ9c4AaT0zC-sp81GaqMahPf0KcFVyjbBL4krOewgKHGFo_7ma_4mF39B1LCj7Vc2XvUd0wCaJvDMjwp19-KbZqmmBX9vDnYp7SSxC1hHJRDStcFgJLdJHtowweNH2829KsjEGBTTcBdO8SvIDtctNH_GAxwSgMT3zUfwaOimPw' \
--header 'x-agent-id: vB_1xShPpBtUosjD7wfBlLIhqDgIPA0wclients' \
-d '{ "storage-server" : "cbssr630ip15.rtp.openenglab.netapp.com:10443", "access-key": "2ZMYOAVAS5E71V0MCNH9", "secret-password": "uk/6ikd4L+UjlXQOFnzSzP/T0zR4ZQlG0w1xgWsB" }'
6. Recovery Verification
a) Click on the required Working Environment
b) Click on "Backup and Recovery", and Click on "View Backups"
c) Now you should be able to see all the backups without any issues
d) Go to the Indexed Catalog settings and make sure that the Working Environments which had Indexed Cataloging enabled in the previous DarkSite connector remain enabled after the recovery process on the new VM.
e) Run a few catalog searches to confirm that the Indexed Catalog restores and recovery have been completed successfully.
With the introduction of filters and the ability to expand attribute columns on the Job Monitoring grid on the Dashboard, the BlueXP Backup and Recovery service allows users to track the status of backup operations affiliated with a specific Working Environment or Storage Virtual Machine, or with a specific operation such as backup or restore.
With the June ’23 release, we have enriched the Job Monitoring grid by adding two columns: the “Backup Policy” column and the “Label” column. This enables users to view the details of jobs along with the backup policy and Snapshot label (weekly, monthly, etc.).
Please note the following behaviour:-
• Policy name is reflected in Job monitor for all backup jobs while activating backup on Working Environment
• Policy name is not reflected for Adhoc backup jobs.
• Snapshot label is only reflected for scheduled backup jobs.
Based on customer feedback received on the new restore details page of the Job Monitoring UI, we have introduced additional job details on the restore job details page. Two new restore details, namely "Backup Name" and "Backup Time," have been included under the "Restore Content" ribbon, as shown in the picture below.
Under the “Restore to” ribbon, the “Restore Content-Type” is shown, which can be “File(s)” or “Folder”. If the restore content type is “Folder”, the new “Target Folder” field displays the folder path to which the selected source folder will be restored. If the restore content type is “File(s)” or “File”, the new “Target Folder” field displays the folder path to which the selected source files will be restored. In addition, another grid, “Restored Files ()”, opens up and displays the list of files that have been restored. The grid shows the restored file size, the restored file’s last modification date, and the target file path to which each file has been restored.
The ability to monitor backup lifecycle flows is a crucial requirement for customers, as it ensures comprehensive audit trails and accountability. This demand has become even more significant in light of the increased risk of cyber attacks targeting backup infrastructure.
With the June ’23 release, BlueXP Backup and Recovery service Job Monitoring feature now supports monitoring of ONTAP backup lifecycle jobs. With this feature, users will be able to trace the source of backup deletion from the BlueXP Backup and Recovery user interface and APIs.
The Backup Lifecycle job details can be accessed to view information through four widgets: Backup Source Details, Backup Target Details, Recycled Backup Copy Details, and Job Execution details as shown above.
• The mini dashboard in the Backup Lifecycle job details page displays the “Job Type”, “Volume Name”, “Recycled Backup Date” and “Job Status”.
• The “Backup Source Details” widget shows the “Working Environment Name”, “SVM Name”, “Volume Name”, “Volume Style” and “Backup Policy” details.
• The “Backup Target Details” widget shows the “Provider”, “Region/ Hostname”, “Account ID” and “Bucket” details.
• The “Recycled Backup Copy Details” widget shows the “Backup Name”, “Backup Date”, “Backup Age”, “Size” and “Storage Class” details.
• The “Job Execution Details” widget shows “Job Start Time” , “Job End Time”, “Job Duration” and “Job Status” details.
With the June ’23 release, BlueXP Backup and Recovery now generates notification and email alerts when a mismatch is detected between the labels specified in the Snapshot policy and those in the SnapMirror policy.
It was found that, in certain cases, backups were not being executed properly because of a mismatch between the labels specified in the Snapshot policy and the definitions in the SnapMirror policy, making it challenging for the user to identify and rectify the issue.
With this feature, users are notified on the Canvas with an alert that the labels defined in the SnapMirror policy do not have matching labels in the Snapshot policy. The alert warns users that if the labels do not match, no backup will be created, and advises them to add the required labels to the Snapshot policy using System Manager or the ONTAP CLI so that backups can continue without disruption.
Users subscribed to the Alert and Notification feature will also receive an email notification.
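For illustration only, and assuming hypothetical SVM, policy, and schedule names, the SnapMirror policy rules could be inspected and a matching label added to the Snapshot policy schedule from the ONTAP CLI roughly as follows; verify the exact syntax for your ONTAP version.

# Hypothetical example: check which labels the SnapMirror policy expects, then add a schedule
# with a matching -snapmirror-label to the Snapshot policy (all names are placeholders).
snapmirror policy show -vserver svm1 -policy backup_to_cloud -instance
volume snapshot policy add-schedule -vserver svm1 -policy daily_backup_policy -schedule daily -count 30 -snapmirror-label daily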
Presently, Ubuntu OS is supported for marketplace deployments and BlueXP UI-based Connector deployments. With the latest release, BlueXP Backup and Recovery has also been certified on Rocky Linux 9, which is intended as a replacement for CentOS (a rebuild of RHEL). Users therefore have the option of manually installing the BlueXP Connector on their own instances running this RHEL-compatible Linux flavor to start using BlueXP Backup and Recovery.