Hello, we have recently upgraded our Protection Manager from DFM 4.0.2 to OnCommand 5.2. We installed a fresh OS, made a backup of the old DFM, and restored it to the new OnCommand server. From what I can tell, everything seems to be working fine. However, there is one dataset that takes snapshot backups to another NetApp that is failing. A similar dataset with the same relationship, but with different volumes, is successful. I do not see many details in the logs about the failure except that there was an error.
Though it says SnapVault, I guess it may still apply in your case, because Qtree SnapMirror and SnapVault use the same replication engine as far as I know.
What version of ONTAP are you running? I also suggest you open a support case with NetApp for this. This is a pure ONTAP error message and has nothing to do with OnCommand / Protection Manager itself.
I just got confirmation from our folks internally that bug 624459 affects Qtree SnapMirror as well. Please open a case with NetApp and reference this bug to them.
Also, to find the problematic file, follow the public report for bug 624459 at the link I gave in my previous reply.
Thanks for the feedback and for pointing me to the bug. We are running ONTAP 7.3.6, so this is probably it.
I ran this command to locate all the hard links in the volume: 'find . -type f -links +1 | xargs ls -i'. I redirected the output to a file and sorted it, and found over 400,000 hard links referencing a few files in someone's home directory.
I'll report back tomorrow after the job runs to see if it's successful.
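In case it helps anyone else hitting this, here is a rough variant of that pipeline (a sketch only, assuming GNU findutils/coreutils on a host with the source volume NFS-mounted, run from the top of the mount) that groups the output by inode so the most heavily linked files float to the top:

# List regular files with a link count above 1, prefix each path with its
# inode number, then count how many paths share each inode and show the
# 20 inodes with the most links.
find . -type f -links +1 -print0 | xargs -0 ls -i \
    | awk '{print $1}' | sort | uniq -c | sort -rn | head -20

Each output line is a path count followed by an inode number; feeding an inode number back into 'find . -inum <inode>' lists every path that points at it.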
After removing the hard links I get the same error message: "replication destination hard link create failed". Do I need to re-initialize the SnapMirror, or do something else, to get it going again?
This is more of an ONTAP issue. I suggest you open a case against this bug, and support should be able to help you. Sorry that I couldn't help you with this.
Sorry for the late update. After removing the hard links, I ended up having to remove the volume from the dataset, create a new dataset, and place the volume in the newly created dataset. It's been working again since.
After correcting the hard-link limitation error numerous times by deleting the hard links in the volumes, I was hoping there was a better way to recover from it. Currently I remove the hard links and then have to delete the dataset and create a new dataset job to start the backups over from scratch. Is there another method I can try to get the job working again without deleting the dataset, while still using the same backup volume? Some kind of refresh? The hard links are removed, but unless I remove the dataset it still comes up with the hard-link error.
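A side note for anyone else who ends up here: before retrying the job it may be worth confirming that the cleanup really took on the source volume. A minimal check, assuming the volume is NFS-mounted at /mnt/source_volume (a placeholder path, adjust to your mount point):

# Count regular files that still have more than one link; 0 means no
# multi-linked files remain before the backup job is retried.
find /mnt/source_volume -type f -links +1 | wc -l

If this still returns a non-zero count, some linked files are still present in the active file system and are worth tracking down with the grouping pipeline posted earlier in the thread.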