Talk with fellow users about the multiple protocols supported by NetApp unified storage including SAN, NAS, CIFS/SMB, NFS, iSCSI, S3 Object, Fibre-Channel, NVMe, and FPolicy.
Hi all, how does NetApp in cluster mode treat spaces in the -path parameter of a command? For example: cluster::> vserver locks show -vserver svm -path /vol/volume_name/qtree_name/01 abcd/xyz.xls Here the space in the directory name "01 abcd" (between "01" and "abcd") causes the path to be parsed incorrectly. Are there any options to mitigate this? Please help.
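In the ONTAP CLI, a parameter value that contains spaces generally needs to be enclosed in double quotes so the shell parses it as a single argument. A minimal sketch of the same command with the path quoted (the SVM and path names are the poster's own examples):

```
cluster::> vserver locks show -vserver svm -path "/vol/volume_name/qtree_name/01 abcd/xyz.xls"
```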
Hi, I have a question concerning SMB file-audit delete events. We see two different types of events: EVENT_ID 4659 ("Open Object with the intent to delete") and EVENT_ID 4660 ("Delete Object"). When we delete a file, event 4659 is always generated, but 4660 is not generated in every case; 4660 is created when deleting MS Office .tmp files, for example. We must make sure to catch the correct event for the case "user deletes a file" every time it happens. Can anyone tell me how to do this? thx and regards sandsturm
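One common approach with Windows-style audit events is to treat 4659 as intent only and confirm the deletion by pairing it with a 4660 that carries the same handle ID, since 4660 is the event that fires when the object is actually removed. A minimal sketch of that correlation, assuming the events have already been exported into dicts (the field names event_id, handle_id, and path are illustrative, not the exact audit schema):

```python
# Pair "intent to delete" (4659) with "object deleted" (4660) by handle ID.
# Only paths whose handle later appears in a 4660 event are reported as
# confirmed deletions.
def confirmed_deletions(events):
    intents = {}   # handle_id -> path remembered from 4659 events
    deleted = []
    for ev in events:
        if ev["event_id"] == 4659:
            intents[ev["handle_id"]] = ev["path"]
        elif ev["event_id"] == 4660 and ev["handle_id"] in intents:
            deleted.append(intents.pop(ev["handle_id"]))
    return deleted

sample = [
    {"event_id": 4659, "handle_id": "0x1a4", "path": "/share/report.xlsx"},
    {"event_id": 4660, "handle_id": "0x1a4", "path": ""},
    {"event_id": 4659, "handle_id": "0x2b0", "path": "/share/kept.docx"},
]
print(confirmed_deletions(sample))  # ['/share/report.xlsx']
```

Here kept.docx produced only an intent event (4659) and is correctly excluded, which matches the behavior described above where 4659 appears even when no deletion follows.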
Starting with the ONTAP 9.14.1 release, ONTAP supports NVMe host QoS. Host QoS is part of an end-to-end QoS solution that allows setting up QoS at the NVMe subsystem level. Two priority levels can be set per host: high and regular. The priority is set at the host level of the subsystem, giving preferential treatment to that host at the transport and protocol level. This priority setting is applicable only to NVMe/TCP and NOT to NVMe/FC. High-priority hosts can create controllers with more I/O queue slots (queue count and queue depth) than regular-priority hosts. The feature is also supported in MetroCluster, where the priority setting is replicated.

How to set the priority?

ONTAP_9.14.1::> vserver nvme subsystem create -subsystem SubsystemHostQoS -ostype linux -vserver NVMeVServer
ONTAP_9.14.1::> vserver nvme subsystem host add -subsystem SubsystemHostQoS -host-nqn nqn.2014-08.org.nvmexpress:uuid:f81d4fae-7dec-11d0-a765-00a0c91e6bf6 -priority regular -vserver NVMeVServer
ONTAP_9.14.1::> vserver nvme subsystem show -fields default-io-queue-count, default-io-queue-depth -subsystem SubsystemHostQoS
vserver  subsystem        default-io-queue-count default-io-queue-depth
-------- ---------------- ---------------------- ----------------------
smbcafdD SubsystemHostQoS 4                      32
ONTAP_9.14.1::>
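As a follow-up sketch, the priority assigned to each host NQN can be inspected after it has been added; this assumes the -fields priority option is available on vserver nvme subsystem host show in your 9.14.1 build, so verify against the release's man page:

```
ONTAP_9.14.1::> vserver nvme subsystem host show -vserver NVMeVServer -subsystem SubsystemHostQoS -fields priority
```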
I did an upgrade on our AFF-C250 from 9.12.p8 to 9.12.p10 and received a bunch of error messages along the lines of "Your LIFs are non-redundant". When I checked on the switch, I found that both port-channel groups (we have 2) showed one interface up and participating in LACP, while the other connection in each group showed "suspended - No LACP PDUs". Has anyone encountered this? Things I've tried:
1. Shut / no shut the interface.
2. Replaced the cable - twice.
3. Replaced the SFPs.
4. Defaulted the port and recreated it on the Cisco side.
5. Deleted the port-channel group and recreated it on the Cisco side.
6. Configured a new port-channel group and new interfaces and moved the cables there.
I also opened a case with NetApp, but all we've done so far is delete the problematic port and then add it back. They seem ready to punt this to Cisco and, honestly, I don't blame them. While I first noticed the error during an upgrade, I can't be certain that's what caused it.

JJC-NTAP::> ifgrp show
         Port       Distribution                          Active
Node     IfGrp      Function     MAC Address       Ports  Ports
-------- ---------- ------------ ----------------- ------- -------------------
JJC-NTAP-01
         a0a        ip           d2:39:ea:56:cf:67 partial e0a, e0b
JJC-NTAP-02
         a0a        ip           d2:39:ea:56:d3:f7 partial e0a, e0b

JJC-NTAP::> node run -node JJC-NTAP-01 ifconfig -v a0a
a0a: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
    uuid: f7eaeaab-8567-11ee-81dd-d039ea56cf67
    options=4ec07bb<RXCSUM,TXCSUM,VLAN_MTU,VLAN_HWTAGGING,JUMBO_MTU,VLAN_HWCSUM,TSO4,TSO6,LRO,VLAN_HWTSO,LINKSTATE,RXCSUM_IPV6,TXCSUM_IPV6,NOMAP>
    ether d2:39:ea:56:cf:67
    pcp 4
    media: Ethernet autoselect
    status: active
    groups: lagg
    laggproto lacp lagghash l3
    lagg options:
        flags=4<USE_NUMA>
        flowid_shift: 16
    lagg statistics:
        active ports: 1
        flapping: 2
    lag id: [(8000,D2-39-EA-56-CF-67,002B,0000,0000),
             (8000,84-78-AC-1D-C2-41,0012,0000,0000)]
    laggport: e0b flags=4<ACTIVE> state=d<ACTIVITY,AGGREGATION,SYNC>
        [(8000,D2-39-EA-56-CF-67,002B,8000,0008),
         (8000,84-78-AC-1D-C2-41,0012,8000,0111)]
        input/output LACPDUs: 181 / 325
    laggport: e0a flags=1c<ACTIVE,COLLECTING,DISTRIBUTING> state=3d<ACTIVITY,AGGREGATION,SYNC,COLLECTING,DISTRIBUTING>
        [(8000,D2-39-EA-56-CF-67,002B,8000,0009),
         (8000,84-78-AC-1D-C2-41,0012,8000,0A11)]
        input/output LACPDUs: 27709 / 828590

From the commands above we can see that the ifgrp is only partially participating in LACP and that one of the ports in each port-channel is effectively down: e0b reaches ACTIVITY/AGGREGATION/SYNC but never COLLECTING/DISTRIBUTING. Any ideas? Thanks!
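For the switch side of the same investigation, a minimal sketch of the standard Cisco NX-OS commands used to see why a member port stays suspended; the port-channel number 10 here is hypothetical and should be replaced with the actual group:

```
switch# show port-channel summary
switch# show lacp counters interface port-channel 10
switch# show lacp neighbor interface port-channel 10
```

Comparing the LACPDU receive counters on the switch with the input/output LACPDU counts shown by ifconfig -v above can indicate whether PDUs are being lost in one direction only.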