I am testing AltaVault (AV) as an RMAN NFS backup target: a full backup weekly and incrementals daily. My understanding is that when the local cache fills up, the least recently accessed data gets evicted. By that logic, the backup dumps would never be evicted, since the volume is read and written every day. So I am wondering whether I can purge volumes/data selectively; that way I could test downloading data back from the cloud. Otherwise, the data will stay on the AV appliance forever.
You are correct that there is no easy way to selectively control data removal from the AltaVault appliance in order to test cloud-based recovery. There is an eviction threshold setting, but it is not available for adjustment via the GUI or CLI. I would suggest you use the `datastore format local` command and then perform disaster recovery testing, as outlined in the AltaVault deployment guide (see the NetApp support site for our documentation).
My question is mainly about backups, not DR or recovery. I did not make myself clear.
Let's say I have 40TB of backups in total, spread across a number of mount points. If we cannot selectively delete backup dumps, then that 40TB of data will stay in the local cache "forever" on the AV appliance without being evicted. If the cache is only 64TB in total, that leaves only 24TB of space for other data. Correct?
Hi, yes, you can delete data from the Linux system that has the AVA NFS export mounted. When that happens, AltaVault deletes the unreferenced data and associated metadata from both cache and cloud. Deduplicated data that is still referenced by other backups, however, will not be removed, so you will likely not recover all of the space (e.g., deleting 1GB may only yield 300-500MB of space recovery over time from cache/cloud).
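To illustrate, here is a minimal sketch of aging out old dumps from the NFS export with a standard `find` sweep. The mount point, `.bkp` file pattern, and 35-day retention window are hypothetical placeholders for whatever your RMAN `FORMAT` clause and retention policy actually use; if RMAN's catalog manages retention, letting RMAN issue `DELETE OBSOLETE` against the mount is the cleaner route, and the effect on AltaVault is the same.

```shell
# purge_old_dumps DIR DAYS
# Deletes *.bkp backup pieces in DIR that are older than DAYS days.
# AltaVault observes the deletes arriving over NFS and garbage-collects
# any blocks that are no longer referenced, from cache and from cloud.
purge_old_dumps() {
  find "$1" -type f -name '*.bkp' -mtime "+$2" -delete
}

# Example (hypothetical mount point and retention window):
# purge_old_dumps /mnt/altavault/rman 35
```

Because the appliance deduplicates, the space actually reclaimed will be smaller than the bytes deleted, for the reason given above: blocks shared with newer backups stay referenced and are kept.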