2011-07-05 01:10 PM
We have been using SMSQL with a Windows SQL cluster (Active/Passive, single SQL instance) for a few weeks with no issues, but now I'd like to integrate it with PM so that we can perform SnapVault backups to our DR arrays. When going through the backup configuration wizard, the option "Archive backup to Secondary Storage" is greyed out (see the screenshot below) and I am unable to figure out how to resolve this. All of my LUNs are laid out one LUN per qtree, SnapDrive has been configured to point to our Protection Manager server using the "sdcli dfm_config set" command, I have restarted the SnapDrive service, searched the web over, etc., and I still cannot make it work. We have non-SQL clusters working fine with PM, so I know this does work on other servers ... just not this cluster. Any help with this would be greatly appreciated. The servers are Windows 2008 x64 SP2, with SQL 2005 SP3, SMSQL 184.108.40.2066, and SnapDrive 220.127.116.1101.
Solved!
2011-07-05 01:37 PM
Have you read over this KB?
How to configure SnapManager and SnapDrive Integration with Data Fabric Manager
2011-07-05 01:46 PM
Thanks for the reply. I had not seen this particular document, but I just finished reading through it. Based on the doc, I don't see anything wrong. Any further thoughts?
2011-07-06 05:32 AM
1) On the SMSQL server, run "sdcli dfm_config list" and ensure the DFM server information is correct.
2) Compare the UserName output from #1 with the same user in "dfm user list" run on the DFM server. Ensure the case matches in the entire string including the Windows domain.
3) Once you have completed #1 and #2, run the SMSQL Config Wizard and configure the dataset. If you have already done this, it will ask whether to reuse the dataset it detected - choose Yes.
4) Using the NMC, edit the dataset and assign a secondary location. Then allow conformance to start the SnapVault relationships.
5) Once the dataset is conformant (not protected), the Archive button should be selectable in the Backup Wizard. After the first SMSQL backup, the dataset will be both conformant and protected.
If you have any problems with the above, open a support case so that we can collect data and troubleshoot via remote session if necessary.
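The comparison in step 2 must be exact, including the case of the Windows domain prefix. A minimal Python sketch of that check (the sample account names are hypothetical, and in practice you would paste in the UserName values reported by "sdcli dfm_config list" and "dfm user list"):

```python
def dfm_users_match(sdcli_user: str, dfm_user: str) -> bool:
    """Case-sensitively compare the UserName from 'sdcli dfm_config list'
    with the corresponding entry from 'dfm user list'.
    The entire DOMAIN\\user string must match, including domain case."""
    return sdcli_user == dfm_user

# Hypothetical example values:
print(dfm_users_match(r"CORP\svc_snapdrive", r"CORP\svc_snapdrive"))  # True
print(dfm_users_match(r"corp\svc_snapdrive", r"CORP\svc_snapdrive"))  # False: domain case differs
```

A mismatch anywhere in the string, even just the domain's capitalization, is enough to make the Archive option stay greyed out.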
2011-07-06 06:05 AM
This process is still new to me, as we've only had NetApp since January and a consultant set up a lot of it. In any case, step 3 is the one I had missed. Once I went through the configuration wizard and assigned a resource pool to the newly created dataset, PM started creating all the proper vault relationships for me and all looks great. I love how easy this is once you know what you're doing.
One question I still have: I tried to assign a provisioning policy after the fact. The policy was configured as a secondary with deduplication set to manual so that I could enable compression. When I attempted to apply it, I received errors like the one below on the conformance-checking screen. Any idea why? And is there a way to configure the provisioning policy to enable compression automatically as well?
The requested size 954 GB is more than the default maximum size (512 GB) allowed for deduplication
Thanks very much.
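The conformance failure above is a straightforward size check: the requested volume is larger than the maximum volume size on which deduplication can be enabled. A hedged Python sketch of that check (the 512 GB default mirrors the error message; actual limits vary by controller model and Data ONTAP version):

```python
def dedupe_size_ok(requested_gb: float, max_dedupe_gb: float = 512) -> bool:
    """Return True if a volume of requested_gb can have deduplication
    enabled, given the platform's maximum dedupe volume size.
    512 GB is the limit cited in the conformance error above; real
    limits depend on controller model and Data ONTAP version."""
    return requested_gb <= max_dedupe_gb

print(dedupe_size_ok(954))  # False: 954 GB exceeds the 512 GB limit, as in the error
```

Because 954 GB exceeds the limit, conformance refuses to provision the secondary volume with dedupe enabled.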
2011-07-06 06:44 AM
You are hitting the following bug. The public report has a workaround describing how to fix this.
2011-07-07 01:42 PM
Kyran, thanks so much. I have been fighting with this problem for two days, trying to figure out why it skips the data protection screen. I saw your post and checked the DFM event log, and yup, the account I was using was being denied access. THANKS A MILLION.