ONTAP Discussions

Windows PCs are randomly attempting to access LIFs that they should have no reason to access

Stormont
4,183 Views

We have a number of PCs in various subnets that are attempting to access LIFs on our CDOT cluster in various parts of our enterprise that they should have no reason to access, and we can't figure out why.

 

For example, the cluster has a LIF with an IP of 172.19.240.7 /20 and that LIF is in an isolated VLAN that PCs do not have any direct connectivity to.  However, a Windows PC with an IP of 172.21.133.25 /24 is for some reason attempting to access the IP of that LIF.  When looking at "netstat -ano" on the PC, we see:

 

Protocol   Local Address           Foreign Address      State      PID

TCP        172.21.133.25:58905     172.19.240.7:135     SYN_SENT   2976

 

PID 2976 is "Service Host: Network Service - Workstation".  While DFS does point to some shares on the NetApp cluster, none of them are on the node with which 172.19.240.7 is associated.
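
For reference, this is roughly how we tied the PID to that service (2976 is just the PID from the netstat output above):

# Map the PID reported by netstat to the service(s) hosted in that process
tasklist /svc /fi "PID eq 2976"

# Or list current connection attempts to the LIF together with the owning process
Get-NetTCPConnection -RemoteAddress 172.19.240.7 | Select-Object LocalAddress, RemotePort, State, OwningProcess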

 

There are three other LIFs on the cluster in the same situation (PCs have no connectivity to them, yet the PCs keep trying to contact their IP addresses).  What can we check to figure out what is attempting all of these connections?

 

 

9 REPLIES

TMACMD
4,162 Views

Is it possible those interfaces you don't want clients accessing are using:

1. The On-Box Load Balancer

2. Off Box DNS round-robin (multiple IPs associated with the same CIFS name)

3. DDNS -> all your LIFs are participating in DDNS, and when the client gets the DNS referral it goes to an IP it is not supposed to? (A quick check is sketched below.)
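
A quick client-side test for the round-robin/DDNS theories (the name below is just a placeholder for whatever CIFS server name your drive mappings actually use):

# If the CIFS name resolves to more than one A record, round-robin or DDNS
# registration could be handing out the isolated LIF IPs
nslookup -type=A <cifs-server-name>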

Stormont
4,121 Views

The on-box load balancer must be manually configured, correct?  If so, we aren't using it, as we don't have any DNS zones configured on the cluster.

 

We don't have any DNS entries for the interfaces for the LIFs that PCs are trying to connect to.

 

DDNS is disabled.

paul_stejskal
4,107 Views

Are those LIFs in the same SVM, just on a different node? This document talks about how to check the configuration: https://docs.netapp.com/ontap-9/topic/com.netapp.doc.dot-cm-nmg/GUID-2A6B1345-0C1D-4E3D-B01B-ED724A69D376.html?cp=11_0_10.

 

Honestly, I'd recommend a packet trace to see what is being accessed. This KB is about bully workloads, but there is a nice section on packet tracing about a quarter of the way down, with some good commands and references: https://kb.netapp.com/app/answers/answer_view/a_id/1071353.
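
If it is easier, a capture on the PC itself (rather than on the cluster) should show the same thing; the built-in netsh trace works without installing anything (the file path below is just an example):

# Start a capture on the client, reproduce the logon / drive mapping, then stop it
netsh trace start capture=yes tracefile=C:\temp\client-trace.etl
netsh trace stop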

 

Hope this helps.

Stormont
4,094 Views

Yes, the LIFs are all in the same SVM and on the same node.  We do not have any DNS zones configured on the cluster, which, if I understand things correctly, means that load balancing is not configured at all?  The only way users access these filers is via DFS shares, and all of those shares are on the 03/04 nodes, not on the 02 node where the LIFs connected to the isolated and DMZ networks are located.

 

Unfortunately a cluster-side packet trace won't work: this traffic has to pass through our firewall to reach the filer (the LIFs in question are in our DMZ or in totally isolated VLANs), so it never makes it to the cluster. We are trying to figure out how to even stop it from happening.

paul_stejskal
4,089 Views

That's a Microsoft question unfortunately, not a NetApp one. If one of the connections were allowed through, maybe you could see what it reaches out to and accesses? This might be a good time to use the Sysinternals suite.
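
For example, Process Monitor from that suite can be left running through a logon to catch which process or service issues the connect attempts (a rough sketch; the backing-file path is just an example):

# Capture activity, including TCP Connect events, to a backing file across the logon
procmon.exe /AcceptEula /Quiet /Minimized /BackingFile C:\temp\logon-trace.pml

# ...reproduce the logon / drive mapping, then stop the capture
procmon.exe /Terminate

# In the saved trace, filter on Operation = "TCP Connect" and the LIF IPs
# to see the process and call stack behind each attempt.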

 

Yes, that is correct: DNS round robin is effectively disabled since no DNS zone is specified.

TMACMD
4,062 Views

How about for one of the IP addresses from the NetApp, run this and report back:

set diag ; network interface show -lif <LIF_NAME> -instance ; set admin

 

That should give some info that may help.

TMACMD
4,063 Views

Ok, exactly HOW are you accessing the CIFS data on your NetApp?

 

Are you using a NAME, an FQDN, or an IP?

 

If not an IP, first try doing an nslookup on that name and see what happens.

Also try doing an NSLOOKUP of the IP it is going to (the one you do not want it to go to)

 

Maybe there is something borked in your DNS.
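
Something like this, substituting the name your drive mappings actually use for the placeholder:

# Forward lookup of the name the clients use for the CIFS shares
nslookup <cifs-server-name>

# Reverse lookup of the LIF IP the PCs keep trying to reach
nslookup 172.19.240.7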

Stormont
4,043 Views

More information regarding this: as soon as I log into a PC (before loading any applications), I see the blocked connections between the PC in question and those LIFs logged in our Check Point firewall.  Three drives are mapped at logon via DFS.  On the DFS server, those directories are located on volumes of the 01 and 04 nodes in the cluster and are referenced via oriole-01-int and oriole-04-int.  An nslookup or ping of those two DNS names does return the correct IP address.  An nslookup of two of the "isolated" IPs (172.19.240.7 and 172.19.220.5) returns a non-existent domain error, as expected, because we do not have DNS entries for either of those interfaces.
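
For what it's worth, this is roughly how we checked which targets the DFS client was actually referred to (dfsutil comes with the DFS Management tools rather than being installed by default, so treat it as a sketch):

# Show the DFS referral cache on the client, including the target server
# each DFS path was actually referred to
dfsutil cache referral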

 

Regarding the "network interface show -lif" output, the output from the two LIFs that keep showing up (172.19.240.7 and 172.19.220.5) is below.

 

Oriole::*> network interface show -lif Oriole-02_Hosting_Storage -instance

Vserver Name: oriole-svm
Logical Interface Name: Oriole-02_Hosting_Storage
Service Policy: default-data-files
Service List: data-core, data-nfs, data-cifs
(DEPRECATED)-Role: data
Data Protocol: nfs, cifs
Network Address: 172.19.240.7
Netmask: 255.255.240.0
Bits in the Netmask: 20
Is VIP LIF: false
Subnet Name: -
Home Node: Oriole-02
Home Port: e3a
Current Node: Oriole-02
Current Port: e3a
Operational Status: up
Extended Status: -
Numeric ID: 1032
Is Home: true
Administrative Status: up
Failover Policy: system-defined
Firewall Policy: data
Auto Revert: true
Sticky Flag: false
Fully Qualified DNS Zone Name: none
DNS Query Listen Enable: false
(DEPRECATED)-Load Balancing Migrate Allowed: false
Load Balanced Weight: load
Failover Group Name: Hosting_Storage
FCP WWPN: -
Address family: ipv4
Comment: -
IPspace of LIF: Default
Is Dynamic DNS Update Enabled?: false
Probe-port for Azure ILB: -


Oriole::*> network interface show -lif Oriole-02_Database_Storage -instance

Vserver Name: oriole-svm
Logical Interface Name: Oriole-02_Database_Storage
Service Policy: default-data-files
Service List: data-core, data-nfs, data-cifs
(DEPRECATED)-Role: data
Data Protocol: nfs, cifs
Network Address: 172.19.220.5
Netmask: 255.255.255.0
Bits in the Netmask: 24
Is VIP LIF: false
Subnet Name: -
Home Node: Oriole-02
Home Port: e0g
Current Node: Oriole-02
Current Port: e0g
Operational Status: up
Extended Status: -
Numeric ID: 1030
Is Home: true
Administrative Status: up
Failover Policy: system-defined
Firewall Policy: data
Auto Revert: true
Sticky Flag: false
Fully Qualified DNS Zone Name: none
DNS Query Listen Enable: false
(DEPRECATED)-Load Balancing Migrate Allowed: false
Load Balanced Weight: load
Failover Group Name: Database_Storage
FCP WWPN: -
Address family: ipv4
Comment: -
IPspace of LIF: Default
Is Dynamic DNS Update Enabled?: false
Probe-port for Azure ILB: -

Stormont
3,924 Views

We opened a support case with NetApp, who suggested that we contact Microsoft about the behavior.  We found that:

 

1. If a PC has no drives mapped, there are no connections between that PC and Oriole.

2. When a drive is mapped, a connection is made on the correct oriole-0x-int interface. Soon after, rolling attempts begin in which the PC tries to connect to e0g (Database Storage), e3a (Hosting Storage), and other interfaces that it has no reason to connect to and for which there is no configuration in DNS, DHCP, or Active Directory Sites and Services.

3. PCs are able to establish connections to the FPolicy-related LIFs on each node, as those LIFs are in the 172.22.16.x subnet.

 

The attempted connections are all to epmap (the RPC endpoint mapper, TCP port 135).

 

At this point I think our only option is a firewall rule on each PC that blocks outbound connections to port 135 toward the three subnets where the LIFs (that the PCs should not be connecting to) are located.
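
Something along these lines is what we have in mind (a sketch only: the rule name is arbitrary, and only two of the three LIF subnets appear in this thread, so the address list below is illustrative):

# Block outbound RPC endpoint mapper (TCP 135) traffic toward the isolated/DMZ LIF subnets
New-NetFirewallRule -DisplayName "Block epmap to isolated NetApp LIFs" `
    -Direction Outbound -Action Block -Protocol TCP -RemotePort 135 `
    -RemoteAddress "172.19.240.0/20","172.19.220.0/24"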
