ONTAP Discussions

Name Resolution not working in data svm

SorinAndruseac

Hi all,

 

I have a strange situation and I'm stuck. Name resolution works for the admin svm but not for the data svm.

 

Here is the situation:

 

netapp11::*> net interface show
(network interface show)
Logical Status Network Current Current Is
Vserver Interface Admin/Oper Address/Mask Node Port Home
----------- ---------- ---------- ------------------ ------------- ------- ----
Cluster
netapp11-01_clus1
up/up 169.254.207.30/16 netapp11-01 e0a true
netapp11-01_clus2
up/up 169.254.5.173/16 netapp11-01 e0b true
netapp11-02_clus1
up/up 169.254.208.161/16 netapp11-02 e0a true
netapp11-02_clus2
up/up 169.254.250.121/16 netapp11-02 e0b true
SVM
SVM_admin_lif1
up/up 10.207.251.170/24 netapp11-01 e0e true
SVM_cifs_lif1
up/up 10.207.255.88/24 netapp11-01 a0a-125 true
SVM_nfs_lif1 up/up 10.2.2.131/24 netapp11-01 a0a-808 true
netapp11
cluster_mgmt up/up 10.207.251.175/24 netapp11-01 e0e true
netapp11-01_mgmt1
up/up 10.207.251.173/24 netapp11-01 e0M true
netapp11-02_mgmt1
up/up 10.207.251.174/24 netapp11-02 e0M true


netapp11::cluster*> vserver services dns show
Name
Vserver State Domains Servers
--------------- --------- ----------------------------------- ----------------
SVM enabled domain.local 10.207.255.11,
10.207.255.13,
10.207.255.15
netapp11 enabled domain.local 10.207.255.11,
10.207.255.13


netapp11::cluster*> getxxbyyy getaddrinfo -vserver SVM -hostname dc.domain.local -node netapp11-01 -address-family all
(vserver services name-service getxxbyyy getaddrinfo)

Error: command failed: Failed to resolve domain.local. Reason: hostname nor servname
provided, or not known.

 

netapp11::cluster*> getxxbyyy getaddrinfo -vserver netapp11 -node netapp11-01 -hostname domain.local
(vserver services name-service getxxbyyy getaddrinfo)
Host name: domain.local
Canonical Name: domain.local
IPv4: 10.207.255.13
IPv4: 10.207.255.15
IPv4: 10.207.255.11

 

Or, put another way:

 

netapp11::cluster*> ping -lif cluster_mgmt -vserver netapp11 -destination dc -show-detail
PING dc.domain.local (10.207.255.11) from 10.207.251.175: 56 data bytes
64 bytes from 10.207.255.11: icmp_seq=0 ttl=127 time=0.693 ms
64 bytes from 10.207.255.11: icmp_seq=1 ttl=127 time=0.424 ms

 

netapp11::cluster*> ping -lif SVM_admin_lif1 -vserver SVM -destination dc -show-detail
ping: cannot resolve dc: Host name lookup failure

 

netapp11::cluster*> ping -lif SVM_admin_lif1 -vserver SVM -destination dc.domain.local -show-detail
ping: cannot resolve dc.domain.local: Host name lookup failure

 

netapp11::cluster*> ping -lif SVM_admin_lif1 -vserver SVM -destination 10.207.251.175 -show-detail
PING 10.207.251.175 (10.207.251.175) from 10.207.251.170: 56 data bytes
64 bytes from 10.207.251.175: icmp_seq=0 ttl=255 time=0.090 ms
64 bytes from 10.207.251.175: icmp_seq=1 ttl=255 time=0.094 ms

 

Any suggestion would be highly appreciated.

Thank you

11 REPLIES

Naveenpusuluru

Hi @SorinAndruseac

 

Can you please paste the output of the command below?

 

::> net int show -fields dns-zone

 

It looks like you forgot to create a dns-zone for the data SVM LIFs. Please find the procedure below.

 

 ::> network int modify -vserver svm-test -lif svm_test_cifs_01 -dns-zone svm-test.telerx.corp

 

Please create a dns-zone for all the LIFs on that SVM and try to ping again.
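
Adapted to the names in this thread, that would look something like the following (the zone name svm.domain.local is only an example):

netapp11::> network interface modify -vserver SVM -lif SVM_cifs_lif1 -dns-zone svm.domain.local
netapp11::> network interface show -fields dns-zone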

 

Hope this will help you.

 

 

 

Naveenkumar Pusuluru

Storage Lead | C3i Healthcare connections


parisi

What version of ONTAP?

 

We changed the way name resolution worked between 8.2.x and 8.3.x.

 

Specifically, 8.3.x removes the use of admin SVM for name services and uses only the data SVM.

 

Some prior versions of cDOT used the admin SVM for name services.
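
The running release can be checked from the clustershell with:

netapp11::> version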

SorinAndruseac

ONTAP 8.3.2

 

I just tried to create a CIFS server on a new FAS8020, and it failed when I was supposed to join it to the domain.

So this seems strange to me, to say the least. I would dig deeper but I don't know in which direction.
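
A hypothetical sketch of such a create attempt (the CIFS server name here is purely illustrative, not the one actually used):

netapp11::> vserver cifs create -vserver SVM -cifs-server SVMCIFS -domain domain.local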

 

netapp11::> net interface show -fields firewall-policy
(network interface show)
vserver lif firewall-policy
------- ----------------- ---------------
Cluster netapp11-01_clus1
Cluster netapp11-01_clus2
Cluster netapp11-02_clus1
Cluster netapp11-02_clus2
SVM SVM_admin_lif1 mgmt
SVM SVM_cifs_lif1 mgmt
SVM SVM_nfs_lif1 mgmt
netapp11
cluster_mgmt mgmt
netapp11
netapp11-01_mgmt1 mgmt
netapp11
netapp11-02_mgmt1 mgmt
10 entries were displayed.

 

 

I was considering firewall policy but:

 

netapp11::> system service firewall policy show
Vserver Policy Service Allowed
------- ------------ ---------- -------------------
netapp11
data
dns 0.0.0.0/0
ndmp 0.0.0.0/0
ndmps 0.0.0.0/0
netapp11
intercluster
https 0.0.0.0/0
ndmp 0.0.0.0/0
ndmps 0.0.0.0/0
netapp11
mgmt
dns 0.0.0.0/0
http 0.0.0.0/0
https 0.0.0.0/0
ndmp 0.0.0.0/0
ndmps 0.0.0.0/0
ntp 0.0.0.0/0
snmp 0.0.0.0/0
ssh 0.0.0.0/0
14 entries were displayed.

 

 

Thanks in advance for any hints or ideas

 

 

 

parisi

Cluster firewall is likely not your issue.

 

For one, mgmt firewall policy allows all traffic, including DNS.

 

What data protocols did you set on the admin lif?

 

Can you try to ping the hostname from the CIFS lif?
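
Using the ping syntax from earlier in the thread, that test would look something like this (the LIF name is taken from the interface listing above):

netapp11::> ping -lif SVM_cifs_lif1 -vserver SVM -destination dc.domain.local -show-detail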

parisi

Also, did you try to ping DNS by IP address? In this example, you seem to be pinging the LIF's IP and not the DNS server's.

 

From the cluster SVM:

 

netapp11::cluster*> ping -lif cluster_mgmt -vserver netapp11 -destination dc -show-detail
PING dc.domain.local (10.207.255.11) from 10.207.251.175: 56 data bytes
64 bytes from 10.207.255.11: icmp_seq=0 ttl=127 time=0.693 ms
64 bytes from 10.207.255.11: icmp_seq=1 ttl=127 time=0.424 ms

 

From the data SVM:

 

netapp11::cluster*> ping -lif SVM_admin_lif1 -vserver SVM -destination 10.207.251.175 -show-detail
PING 10.207.251.175 (10.207.251.175) from 10.207.251.170: 56 data bytes
64 bytes from 10.207.251.175: icmp_seq=0 ttl=255 time=0.090 ms
64 bytes from 10.207.251.175: icmp_seq=1 ttl=255 time=0.094 ms
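
A more direct test would be to ping one of the configured DNS servers by IP from the data SVM, for example (10.207.255.11 is the first server from the dns show output above):

netapp11::cluster*> ping -lif SVM_admin_lif1 -vserver SVM -destination 10.207.255.11 -show-detail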

 

Did you add a default route?
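
If one is missing, it can be checked and added with something like this (the gateway address is environment-specific):

netapp11::> network route show -vserver SVM
netapp11::> network route create -vserver SVM -destination 0.0.0.0/0 -gateway <gateway-ip>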

 

 

 

Naveenpusuluru

Please post the output of the command below:

 

::> route show -vserver svm-test-cifs

SorinAndruseac

netapp11::> route show -vserver netapp11
Vserver Destination Gateway Metric
------------------- --------------- --------------- ------
netapp11
0.0.0.0/0 10.207.251.250 20

netapp11::> route show -vserver SVM
Vserver Destination Gateway Metric
------------------- --------------- --------------- ------
SVM
0.0.0.0/0 10.207.251.250 20
0.0.0.0/0 10.207.255.250 20
2 entries were displayed.

SorinAndruseac (Accepted Solution)

 

Hello,

 

AND BIG THANKS!!!

 

This was the question that led to the solution:

"Can you try to ping the hostname from the CIFS lif?"

 

 

 

On the switch, the VLAN tagging was wrong and the CIFS interface was not communicating with anyone.

I was assuming that the configuration of the CIFS server is done on the management interface of the SVM, and that only data traffic goes over the CIFS interface. Apparently the CIFS interface is somehow involved in the configuration process and, if it can't communicate, name resolution on that SVM does not work at all.

It doesn't make complete sense to me ... it is a new filer and I think this can be reproduced.

 

However, when we corrected the VLAN tagging on the switch, everything returned to normal.

 

 

 

Below is the setup for the interfaces:

 

The admin LIF (SVM_admin_lif1) is only for management of the SVM, on physical port e0e.

For data I have two separate LIFs (SVM_cifs_lif1 and SVM_nfs_lif1) on VLAN ports (a0a-125 and a0a-808) on top of an interface group, a0a.

Also, the cluster management LIF (cluster_mgmt) is on the same physical port e0e as the SVM admin LIF.
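
For reference, the VLAN ports and the underlying interface group can be inspected on the cluster side with commands like these (output omitted here):

netapp11::> network port vlan show
netapp11::> network port ifgrp show -node netapp11-01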

 

BIG THANKS again

 

 

 

PS: I like the articles on your blog.

parisi

Glad it worked out for you.

 

My understanding is that you need the data LIF to be able to route to the name services for things to work properly. The admin LIF likely had "data-protocol = none", which means it likely wasn't doing *any* name service traffic.
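
That can be verified per LIF with something like:

netapp11::> network interface show -vserver SVM -fields role,data-protocol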

parisi

DNS-zone on the data lif is *only* for use with on-box DNS. Don't set it unless you're using on-box DNS.
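
If a zone was set by mistake, it should be possible to clear it again with something like:

netapp11::> network interface modify -vserver SVM -lif SVM_cifs_lif1 -dns-zone none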

 

That feature is covered in TR-4523:

 

http://www.netapp.com/us/media/tr-4523.pdf

SorinAndruseac

Hi,

 

And thanks for answering; here is the output:

 

netapp11::> net int show -fields dns-zone
(network interface show)
vserver lif dns-zone
------- ----------------- --------
Cluster netapp11-01_clus1 none
Cluster netapp11-01_clus2 none
Cluster netapp11-02_clus1 none
Cluster netapp11-02_clus2 none
SVM SVM_admin_lif1 none
SVM SVM_cifs_lif1 none
SVM SVM_nfs_lif1 none
netapp11
cluster_mgmt none
netapp11
netapp11-01_mgmt1 none
netapp11
netapp11-02_mgmt1 none

 

However, I don't think it is dns-zone related 🙂

Public