We are talking about a test environment here. Since I stumbled upon another issue on our Domain Controller, I have been focusing on that over the past few days. Furthermore, I have stopped testing the specific SnapCenter case in which I ran into the issue I initially posted.
Maybe I'll try to reproduce the issue in the (near) future, although I'm not sure about that at the moment.
We ended up running into the exact same issue in our environment (also a test environment), running SnapCenter 4.3 (on Windows Server 2016) and trying to create LUNs on a Windows Server 2012 R2 cluster. We were receiving the "No storage system connection set" error message and had been chasing it for weeks, both with and without the assistance of NetApp. We found that the SnapCenter server was on the domain while the SVM was in a workgroup, and we were logging into and managing the SnapCenter server (including trying to create LUNs) as domain users. When we logged into SnapCenter as a domain user and tried to connect a LUN on an SVM that was in a workgroup, it wasn't authenticating. So even though the errors said the storage system couldn't be found, the real problem was that SnapCenter couldn't retrieve the storage system details with that user.
Add the SVM to the domain, or log in and create the disks as the local "Administrator", and there are no issues.
Your situation seems slightly different from the one we initially encountered. In our case, the host and the Storage Virtual Machine were both part of the same Active Directory domain. Nevertheless, many thanks for sharing your update and findings!
Just an update on my issue: the plugin host did not have access to port 443 on the cluster/SVM. Once I granted that access, the connect-disk process went further but got stuck while putting the LUN online:
In the logs the first error is: 2020-06-25T18:12:05.7548987-03:00 Error SDW PID= TID= Error: Failed to get partition information from disk on node V-MAILBOX01
This occurs right after the disk is onlined by SnapCenter:
2020-06-25T18:12:05.6767784-03:00 Information SDW PID= TID= Microsoft DiskPart version 6.3.9600
Copyright (C) 1999-2013 Microsoft Corporation.
On computer: V-MAILBOX01
DISKPART> Disk 13 is now the selected disk.
DISKPART> Disk attributes cleared successfully.
DISKPART> DiskPart successfully onlined the selected disk.
DISKPART> Leaving DiskPart
The last message is repeated several times until the job ends, unable to connect the LUN. I have filed a case with NetApp.
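Since the first symptom in this post was a blocked management port, it can save time to verify TCP reachability of port 443 from the plugin host before digging into SnapCenter logs. The sketch below is a minimal Python check; the two hostnames are hypothetical placeholders, substitute your own cluster and SVM management addresses:

```python
import socket

def check_https_port(host: str, port: int = 443, timeout: float = 5.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Hypothetical management addresses -- replace with your own.
for target in ("cluster-mgmt.example.com", "svm-mgmt.example.com"):
    state = "reachable" if check_https_port(target) else "BLOCKED or unreachable"
    print(f"{target}:443 -> {state}")
```

A "BLOCKED or unreachable" result for the SVM management LIF would point at the same firewall/export situation described above.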
Many thanks to everyone who posted here! (You saved me a day or two.)
I had a similar problem and solved it after reading this thread!
My config was:
1. SnapCenter 4.4 installed on domain-joined server.
2. AD suffix of the domain is "domain.local" (for example).
3. Test VM named "TEST-10G" is a workgroup computer without a specific domain suffix (this is important).
4. I manually added a record of type A to the DNS zone of the domain for the TEST-10G host.
5. When I added this VM to SnapCenter, it automatically appended the domain suffix to the VM's name, so it appeared everywhere as "TEST-10G.domain.local" (this is important).
6. The job always failed at the "create a snapshot" step.
Here are the log entries from the SCW plugin:
2021-03-31T23:52:33.6321583+03:00 Error SAL PID= TID= Cannot retrieve storage connection setting from SMS server. 2021-03-31T23:52:33.6321583+03:00 Error SAL PID= TID= Response error: Access denied for server: TEST-10G or timeout expired.
2021-03-31T23:52:33.6321583+03:00 Error SAL PID= TID= Invalid StorageSystemId type supplied.
2021-03-31T23:52:33.6321583+03:00 Error SAL PID= TID= Could not find valid Storage System for the resource at SnapDrive.Nsf.ServiceProviders.SALPluginFactory.GetSALPluginProvider(SDStorageSystemId storageSystemId, SmRequestBase request) at SnapDrive.Nsf.ServiceProviders.SALPluginFactory.GetSALPluginProvider(String storageSystemId, SmRequestBase request) at SnapDrive.Nsf.ServiceProviders.SALPluginFactory.CreateSnapshot(CreateSnapshotRequest request)
2021-03-31T23:52:33.6321583+03:00 Error SAL PID= TID= Could not find valid Storage System for the resource
Finally, I added the default DNS suffix in the system settings of this VM and rebooted. Right after that, everything worked as expected.
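The failure mode above comes down to SnapCenter using the FQDN ("TEST-10G.domain.local") while the workgroup VM itself only knew its short name. A quick way to confirm a name-resolution mismatch from the SnapCenter server is a small sketch like this (Python; the two names are the example ones from this thread, substitute your own):

```python
import socket

def resolves(name: str) -> bool:
    """Return True if `name` resolves to an IP address from this machine."""
    try:
        socket.gethostbyname(name)
        return True
    except socket.gaierror:
        return False

# Example names from this thread -- replace with your own host and suffix.
short_name = "TEST-10G"
fqdn = "TEST-10G.domain.local"  # the name SnapCenter actually uses

for name in (short_name, fqdn):
    print(f"{name}: {'resolves' if resolves(name) else 'DOES NOT resolve'}")
```

If the FQDN resolves but the host doesn't answer to that name (or vice versa), adding the DNS suffix on the VM, as described above, brings the two into agreement.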