Talk with fellow users about the multiple protocols supported by NetApp unified storage including SAN, NAS, CIFS/SMB, NFS, iSCSI, S3 Object, Fibre-Channel, NVMe, and FPolicy.
Hello all,

I've been struggling to set up multipathing on an iSCSI boot LUN, so after many days banging my head against a brick wall I'm hoping somebody can point me in the right direction. Our development setup is as follows:

FAS8200 running ONTAP 9.13.1P13
Cisco UCS X210c M7 blade server
Oracle Linux 8.10 running the 5.15.0-209.161.7.1.el8uek.x86_64 kernel

Installation was fine once I added ip=ibft to the boot command. I have set up two interfaces on the host and two on the FAS8200 vserver to cater for iSCSI, and have established four paths:

[root@vmhost-dev-b-02 ~]# iscsiadm --mode session
tcp: [1] 10.31.5.101:3260,1026 iqn.1992-08.com.netapp:sn.c052244ddc6a11eeb08a00a098d45a03:vs.7 (non-flash)
tcp: [2] 10.31.6.102:3260,1031 iqn.1992-08.com.netapp:sn.c052244ddc6a11eeb08a00a098d45a03:vs.7 (non-flash)
tcp: [3] 10.31.6.101:3260,1030 iqn.1992-08.com.netapp:sn.c052244ddc6a11eeb08a00a098d45a03:vs.7 (non-flash)
tcp: [4] 10.31.5.102:3260,1027 iqn.1992-08.com.netapp:sn.c052244ddc6a11eeb08a00a098d45a03:vs.7 (non-flash)

I can see those sessions are logged on from the filer:

fas8200a::> iscsi session show -vserver vsbidev1 -initiator-name iqn.2022-10.uk.ac.lboro:site-dev-b-iscsi-a:2
          Tpgroup               Initiator                                      Initiator
Vserver   Name           TSIH   Name                                           ISID                Alias
--------- -------------- ------ ---------------------------------------------- ------------------- ---------------------
vsbidev1  vsbidev1-01-a  2      iqn.2022-10.uk.ac.lboro:site-dev-b-iscsi-a:2   00:02:3d:00:00:01   vmhost-dev-b-02.lboro.ac.uk
vsbidev1  vsbidev1-01-b  5      iqn.2022-10.uk.ac.lboro:site-dev-b-iscsi-a:2   00:02:3d:00:00:03   vmhost-dev-b-02.lboro.ac.uk
vsbidev1  vsbidev1-02-a  2      iqn.2022-10.uk.ac.lboro:site-dev-b-iscsi-a:2   00:02:3d:00:00:04   vmhost-dev-b-02.lboro.ac.uk
vsbidev1  vsbidev1-02-b  1      iqn.2022-10.uk.ac.lboro:site-dev-b-iscsi-a:2   00:02:3d:00:00:02   vmhost-dev-b-02.lboro.ac.uk
4 entries were displayed.
Using NetApp's Linux Host Utilities I can see the LUN and four paths:

[root@vmhost-dev-b-02 ~]# sanlun lun show
controller(7mode/E-Series)/                                       device     host      lun
vserver(cDOT/FlashRay)   lun-pathname                             filename   adapter   protocol   size   product
----------------------------------------------------------------------------------------------------------
vsbidev1                 /vol/vmhost_dev_b_02/rocky-dev-b-02      /dev/sdd   host6     iSCSI      100g   cDOT
vsbidev1                 /vol/vmhost_dev_b_02/rocky-dev-b-02      /dev/sdc   host5     iSCSI      100g   cDOT
vsbidev1                 /vol/vmhost_dev_b_02/rocky-dev-b-02      /dev/sdb   host4     iSCSI      100g   cDOT
vsbidev1                 /vol/vmhost_dev_b_02/rocky-dev-b-02      /dev/sda   host3     iSCSI      100g   cDOT

But multipath refuses to see these devices as multipath devices:

[root@vmhost-dev-b-02 ~]# multipath -l -v 3
Feb 13 14:26:14 | set open fds limit to 4096/262144
Feb 13 14:26:14 | loading /lib64/multipath/libchecktur.so checker
Feb 13 14:26:14 | checker tur: message table size = 3
Feb 13 14:26:14 | loading /lib64/multipath/libprioconst.so prioritizer
Feb 13 14:26:14 | foreign library "nvme" loaded successfully
Feb 13 14:26:14 | sda: size = 209715200
Feb 13 14:26:14 | sda: vendor = NETAPP
Feb 13 14:26:14 | sda: product = LUN C-Mode
Feb 13 14:26:14 | sda: rev = 9131
Feb 13 14:26:14 | sda: h:b:t:l = 3:0:0:0
Feb 13 14:26:14 | sda: tgt_node_name = iqn.1992-08.com.netapp:sn.c052244ddc6a11eeb08a00a098d45a03:vs.7
Feb 13 14:26:14 | sdb: size = 209715200
Feb 13 14:26:14 | sdb: vendor = NETAPP
Feb 13 14:26:14 | sdb: product = LUN C-Mode
Feb 13 14:26:14 | sdb: rev = 9131
Feb 13 14:26:14 | sdb: h:b:t:l = 4:0:0:0
Feb 13 14:26:14 | sdb: tgt_node_name = iqn.1992-08.com.netapp:sn.c052244ddc6a11eeb08a00a098d45a03:vs.7
Feb 13 14:26:14 | sdc: size = 209715200
Feb 13 14:26:14 | sdc: vendor = NETAPP
Feb 13 14:26:14 | sdc: product = LUN C-Mode
Feb 13 14:26:14 | sdc: rev = 9131
Feb 13 14:26:14 | sdc: h:b:t:l = 5:0:0:0
Feb 13 14:26:14 | sdc: tgt_node_name = iqn.1992-08.com.netapp:sn.c052244ddc6a11eeb08a00a098d45a03:vs.7
Feb 13 14:26:14 | sdd: size = 209715200
Feb 13 14:26:14 | sdd: vendor = NETAPP
Feb 13 14:26:14 | sdd: product = LUN C-Mode
Feb 13 14:26:14 | sdd: rev = 9131
Feb 13 14:26:14 | sdd: h:b:t:l = 6:0:0:0
Feb 13 14:26:14 | sdd: tgt_node_name = iqn.1992-08.com.netapp:sn.c052244ddc6a11eeb08a00a098d45a03:vs.7
Feb 13 14:26:14 | dm-0: device node name blacklisted
Feb 13 14:26:14 | dm-1: device node name blacklisted
Feb 13 14:26:14 | dm-2: device node name blacklisted
===== paths list =====
uuid hcil    dev dev_t pri dm_st  chk_st vend/prod/rev       dev_st
     3:0:0:0 sda 8:0   -1  undef  undef  NETAPP,LUN C-Mode   unknown
     4:0:0:0 sdb 8:16  -1  undef  undef  NETAPP,LUN C-Mode   unknown
     5:0:0:0 sdc 8:32  -1  undef  undef  NETAPP,LUN C-Mode   unknown
     6:0:0:0 sdd 8:48  -1  undef  undef  NETAPP,LUN C-Mode   unknown
Feb 13 14:26:14 | libdevmapper version 1.02.181-RHEL8 (2021-10-20)
Feb 13 14:26:14 | DM multipath kernel driver v1.14.0
Feb 13 14:26:14 | unloading const prioritizer
Feb 13 14:26:14 | unloading tur checker

I suspect the issue has something to do with the fact that root is sitting on this disk, which is preventing it from being changed. I've read suggestions that I need to enable multipathing at boot time and change the paths to the various partitions, but I'm not sure whether that will work. Any help or guidance would be gratefully received.

Regards,
Mark
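For anyone following along, the "enable multipathing at boot time" change I've read about would look roughly like the sketch below on an EL8-based host. This is untested on my exact setup and assumes device-mapper-multipath is installed and that root really does sit on one of the sd* paths above:

# Rough sketch only (Oracle Linux 8 / EL8 assumed, not verified here)
# 1. Enable multipathing and generate a default /etc/multipath.conf
mpathconf --enable --with_multipathd y

# 2. Rebuild the initramfs so dm-multipath can claim the boot LUN's
#    paths before the root filesystem is mounted
dracut -f --add multipath

# 3. After a reboot, the four iSCSI paths should be grouped under a
#    single mpath device
multipath -ll

As far as I understand it, the root-on-the-LUN part is exactly why the initramfs step matters: once the root filesystem is mounted directly on /dev/sda, multipath can no longer take over that path in the running system.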
Hi,

I have a CIFS SVM with two data LIFs in different subnets, and there is no connection between them. Subnet x is the common client network and subnet y is a dedicated backup network for the backup server. When I watch the firewall logs, I can see that clients from subnet x also try to connect to the LIF IP in subnet y, which is denied because that IP isn't reachable from subnet x. How can I prevent the IP from subnet y being reported to clients in subnet x?

All communication in subnet x works normally and all clients can access the CIFS SVM without problems; I'm only wondering about the events in the firewall and looking for a way to prevent them. Is there something I can configure?

Kind regards
Stefan
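If it helps, this is roughly how I've been looking at where the clients might be learning the second address. The hostname and SVM name below are placeholders, and I'm only assuming DNS is involved, so treat it as a sketch rather than a diagnosis:

# Do clients resolve both LIF addresses for the CIFS server name?
nslookup cifs-svm.example.com

# Which LIFs carry CIFS data, and are any of them in a DNS load-balancing zone?
cluster::> network interface show -vserver svm1 -fields address,data-protocol,dns-zone

My assumption is that if the CIFS server's name returns A records for both LIFs, clients in subnet x will sometimes try the backup-network address first, which would explain the denied connections in the firewall.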
I followed the .\backupSharesAcls.ps1 and .\restoreSharesAcls.ps1 scripts to back up the share and ACL permissions from the source volume and restore them on the destination volume. Does the SnapMirror restore destination also need the same SVM name as in production? I created the same volume name and namespace on the destination, but the SVM name is different, and I'm getting an error that the record doesn't match.

.\restoreSharesAcls.ps1 -server <clus_mgmt> -user <uname> -password <> -vserver <destination-vname> -shareFile C:\share.xml -aclFile C:\acl.xml

Please advise on this. @scottharney
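As a rough sanity check on the destination side (a sketch only; whether the scripts themselves require a matching SVM name is a separate question), something like this could confirm the CIFS server and namespace are in place under the differently named SVM:

cluster::> vserver cifs show -vserver <destination-vname>
cluster::> volume show -vserver <destination-vname> -fields junction-path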
I'm trying to set up my SVM's CIFS server to talk Kerberos only (ONTAP 9.15). My environment is set up to use AES-128 and AES-256 encryption, and the SVM has been joined to the domain. However, when I run the command ...-lm-compatibility-level krb, ALL of my CIFS shares become inaccessible: I get re-prompted for my AD credentials and, despite entering them correctly, I never get in. I end up reverting back to ...-lm-compatibility-level ntlmv2-krb. Has anyone been able to set their CIFS shares to run Kerberos traffic only? Added context: the CIFS shares need to be accessible from a Windows Server 2022 server.
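For reference, the setting I believe is being referenced here, plus a way to see what the existing sessions actually negotiated before locking NTLM out, is sketched below. The SVM name is a placeholder and this is an untested illustration, not a confirmed fix:

cluster::> vserver cifs security modify -vserver svm1 -lm-compatibility-level krb

# Before/after the change: which mechanism are current sessions using?
# (Kerberos needs clients to connect by the CIFS server's DNS name/SPN,
#  not by IP address, otherwise they can only fall back to NTLM.)
cluster::> vserver cifs session show -vserver svm1 -fields auth-mechanism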
Hello Team,

We are using NVMe over TCP with a NetApp ONTAP backend. On the host side, the network interfaces are configured as an LACP bond (802.3ad) with 2 × 25 Gbps NICs. I would like to confirm my understanding of how NVMe-TCP traffic utilizes bonded interfaces:

- NVMe-TCP uses multiple TCP connections (submission and completion queues).
- With LACP bonding, traffic distribution is based on a hashing algorithm (e.g., src/dst IP and ports).
- As a result, each individual TCP flow is pinned to a single physical link within the bond.
- Therefore, a single NVMe queue / TCP connection is limited to the bandwidth of one physical NIC (25 Gbps), and the aggregate 50 Gbps bandwidth is only achievable when multiple flows are distributed across both links.

Is this understanding correct from an ONTAP and NVMe-TCP perspective? Additionally, are there any ONTAP-specific best practices or recommendations to ensure optimal link utilization and minimize latency for NVMe-TCP in bonded NIC environments?

Thank you for your guidance.
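For illustration, here is a rough way to observe the flow-hashing behaviour described above from the host side. The bond name, target address and NQN are placeholders, and the commands assume a standard Linux bonding setup with nvme-cli installed:

# Confirm the bond mode and which hash policy decides the per-flow link
# (layer3+4 includes TCP ports, so separate NVMe queues can hash to
#  different members; layer2 would pin all traffic to one link per peer)
grep -E 'Bonding Mode|Transmit Hash Policy' /proc/net/bonding/bond0

# List the NVMe-oF subsystems and the controllers/paths the host sees
nvme list-subsys

# The number of I/O queues (i.e. TCP connections) is negotiated at
# connect time and can be set explicitly, e.g.:
nvme connect -t tcp -a 192.168.10.10 -s 4420 -n nqn.example:subsys1 --nr-io-queues=8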