Talk with fellow users about the multiple protocols supported by NetApp unified storage including SAN, NAS, CIFS/SMB, NFS, iSCSI, S3 Object, Fibre-Channel, NVMe, and FPolicy.
Hello, I want to know the size allocation of a specific CIFS share. How can I find its size? Does it have any relation to the volume? Is it possible to consider the volume size as the CIFS share size? Regards, Atish Lohade
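For context, a rough sketch of how this is usually checked from the ONTAP CLI: a CIFS share has no size of its own, it just points at a path inside a volume (or qtree), so its usable space is whatever the volume size, or a quota on that path, allows. The SVM, share, and volume names below are placeholders, not from the post:

(placeholder names: svm1, share1, vol1)
::> vserver cifs share show -vserver svm1 -share-name share1 -fields path
::> volume show -vserver svm1 -volume vol1 -fields size,used,available,percent-used
::> volume quota report -vserver svm1 -volume vol1

If no quota is set on the share's path, the volume size is effectively the share size that clients see.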
Hello, In the past we relied on McAfee and the antivirus connector to get vscan to work for our filers, but McAfee was retired and replaced with Trellix, and I'm having difficulty getting that to work with vscan again. Please advise! Trellix is fully downloaded, and it's been given the IP for our machine, but that's about it. I'm not sure how to proceed from here.
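Not a definitive answer, but a rough checklist of the ONTAP-side pieces that normally have to be in place for any Vscan engine, Trellix included. The SVM name, pool name, scanner IP, and service account below are placeholders, and the registration on the Trellix/connector server itself is a separate step:

(placeholder names: svm1, pool1, 192.0.2.10, DOMAIN\vscan-svc)
::> vserver vscan scanner-pool create -vserver svm1 -scanner-pool pool1 -servers 192.0.2.10 -privileged-users DOMAIN\vscan-svc
::> vserver vscan scanner-pool apply-policy -vserver svm1 -scanner-pool pool1 -scanner-policy primary
::> vserver vscan on-access-policy show -vserver svm1
::> vserver vscan enable -vserver svm1
::> vserver vscan connection-status show-all -vserver svm1

If connection-status stays disconnected, the problem is usually on the scanner side (service account, antivirus connector registration, or firewall) rather than in ONTAP.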
Our customer has two different domains based on business unit, and a few NetApp CIFS shares have to be migrated from the old domain to the new one. To ease the migration, we have re-created the security groups with the same names in the new domain and added the users to them. The share folder permissions are still associated with the security groups of the old domain, and we're looking to re-ACL those permissions to the matching groups in the new domain. Please help if anyone has an idea, thanks for your help!
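A hedged sketch of one ONTAP-side approach (the SVM, share, path, policy, and group names below are all placeholders): share-level ACLs can be swapped with the cifs share access-control commands, and NTFS folder/file ACLs can be pushed down a tree with the security file-directory policy framework. Note this applies a new security descriptor rather than translating the old one, so test it on a copy first:

(placeholder names: svm1, share1, /vol1/share1, NEWDOM\FileShare-RW, OLDDOM\FileShare-RW)
::> vserver cifs share access-control create -vserver svm1 -share share1 -user-or-group NEWDOM\FileShare-RW -user-group-type windows -permission Full_Control
::> vserver cifs share access-control delete -vserver svm1 -share share1 -user-or-group OLDDOM\FileShare-RW
::> vserver security file-directory ntfs create -vserver svm1 -ntfs-sd sd_newdom -owner NEWDOM\Administrator
::> vserver security file-directory ntfs dacl add -vserver svm1 -ntfs-sd sd_newdom -account NEWDOM\FileShare-RW -access-type allow -rights full-control -apply-to this-folder,sub-folders,files
::> vserver security file-directory policy create -vserver svm1 -policy-name reacl_newdom
::> vserver security file-directory policy task add -vserver svm1 -policy-name reacl_newdom -path /vol1/share1 -ntfs-sd sd_newdom -ntfs-mode propagate
::> vserver security file-directory apply -vserver svm1 -policy-name reacl_newdom
::> vserver security file-directory show -vserver svm1 -path /vol1/share1

Doing the same from the Windows side with icacls against the share also works if you prefer to keep the change outside ONTAP.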
Hi, I have a couple of new A400 systems. After upgrading them to ONTAP 9.13.1P1 I noticed that the health was degraded on the link aggregation port and the member ports. There is a KB about this, "Native vlan ports show degraded after upgrade to ONTAP 9.13.1" - NetApp Knowledge Base, but I don't understand how to solve it. Should I create a "native vlan" in the Cisco switches' config for the LACP EtherChannel? It seems to just be a cosmetic problem, and I can ignore the health status (net port modify -node <node> -port <port> -ignore-health-status true). This is the status today for one node of a new cluster (not in production). Should the ifgrp ports (a0a, e0e, e0f) be in a broadcast domain?

cl05::> network port ifgrp show -node cl05-01
         Port       Distribution                        Active
Node     IfGrp      Function     MAC Address       Ports Ports
-------- ---------- ------------ ----------------- ----- -------------------
cl05-01  a0a        ip           xx:xx:xx:xx:xx:xx full  e0e, e0f

cl05::> network port show -node cl05-01

Node: cl05-01
                                                  Speed(Mbps)  Health
Port      IPspace      Broadcast Domain Link MTU  Admin/Oper   Status
--------- ------------ ---------------- ---- ---- ------------ --------
a0a       Default      -                up   1500      -/-     degraded
a0a-10    Default      vlan-10          up   1500      -/-     healthy
a0a-12    Default      vlan-12          up   1500      -/-     healthy
a0a-18    Default      vlan-18          up   1500      -/-     healthy
a0a-1905  Default      vlan-1905        up   1500      -/-     healthy
a0a-20    Default      vlan-20          up   1500      -/-     healthy
a0a-51    Default      Default          up   1500      -/-     healthy
e0M       Default      Default          up   1500  auto/1000   healthy
e0e       Default      -                up   1500  auto/10000  degraded
e0f       Default      -                up   1500  auto/10000  degraded
e0g       Default      -                down 1500  auto/-      -
e0h       Default      -                down 1500  auto/-      -
e3a       Cluster      Cluster          up   9000  auto/100000 healthy
e3b       Cluster      Cluster          up   9000  auto/100000 healthy
14 entries were displayed.

cl05::> network port reachability show -node cl05-01
Node      Port     Expected Reachability                Reachability Status
--------- -------- ------------------------------------ ---------------------
cl05-01   a0a      -                                    no-reachability
          a0a-10   Default:vlan-10                      ok
          a0a-12   Default:vlan-12                      ok
          a0a-18   Default:vlan-18                      ok
          a0a-1905 Default:vlan-1905                    ok
          a0a-20   Default:vlan-20                      ok
          a0a-51   Default:Default                      ok
          e0M      Default:Default                      ok
          e0e      -                                    no-reachability
          e0f      -                                    no-reachability
          e0g      -                                    no-reachability
          e0h      -                                    no-reachability
          e3a      Cluster:Cluster                      ok
          e3b      Cluster:Cluster                      ok
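Not a definitive answer, but a sketch of the two ways this is usually handled on the ONTAP side, assuming a0a carries only tagged VLANs (the broadcast-domain name below is made up). Either suppress the health check on the untagged base and member ports, or, if a0a really is supposed to carry untagged native-VLAN traffic, give it a broadcast domain of its own:

::> network port modify -node cl05-01 -port a0a -ignore-health-status true
::> network port modify -node cl05-01 -port e0e -ignore-health-status true
::> network port modify -node cl05-01 -port e0f -ignore-health-status true

(or, only if untagged traffic on a0a is actually expected)
::> network port broadcast-domain create -ipspace Default -broadcast-domain native-vlan -mtu 1500 -ports cl05-01:a0a

The VLAN ports (a0a-10, a0a-12, ...) stay in their existing broadcast domains either way, and ifgrp member ports like e0e/e0f are never placed in a broadcast domain themselves.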
Hi Folks, I'm making this post hot on the heels of yet another network blip bringing down NFS hard mounts across a bunch of Linux systems. Most of our systems are reasonably modern, Ubuntu 20.04 LTS & RHEL 7.

The mount arguments are:
rw,relatime,vers=4.1,rsize=65536,wsize=65536,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=*.*.*.*,local_lock=none,addr=*.*.*.*,_netdev

I believe those are pretty much the default - we set these in /etc/fstab:
nfsvers=4.1,defaults,_netdev,nofail

Sadly we don't have any sort of dedicated NFS network, and our NFS shares are exported on one vlan and have to be routed through generally 1 intermediate network firewall to get to the client. Hard to get around this given the network we are stuck with.

Any advice is welcome - one thing I was thinking about doing was really pushing the timeo value - maybe 1 hour total by setting timeo=12000,retrans=2 - or timeo=600,retrans=60? When our network has a problem it's usually only a problem for about 15-20 minutes. Thanks!
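For reference, a hedged /etc/fstab sketch (the server and mount-point names are made up) with the unit math spelled out: timeo is in tenths of a second, so timeo=600 waits 60 seconds before a retransmission and timeo=12000 would wait 20 minutes, and because the mounts are hard over TCP the client keeps retrying indefinitely rather than erroring out after timeo x retrans:

# /etc/fstab - timeo is in deciseconds (600 = 60 s); hard + proto=tcp retries forever
filer01:/export/data  /mnt/data  nfs  nfsvers=4.1,hard,proto=tcp,timeo=600,retrans=2,_netdev,nofail  0  0

So with the existing options, I/O should already resume on its own once a 15-20 minute blip clears; raising timeo mainly changes how often retransmissions and "server not responding" messages occur, not whether the mount survives.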