Hello, and apologies for my English. When I am in the ONTAP GUI, it asks me to update disk firmware: "An update is available for Disk FW revision NA03 for disk model X387_WPSCE16TA07. An update is available for Disk FW revision NA03 for disk model X388_WPSCE16TA07." I can dismiss, update, or schedule. If I update, can I run into trouble in production? A second small question: is it necessary to install the latest disk firmware bundle after that, or not? Can I run into trouble when I install the disk bundle? Thanks
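A minimal sketch of how to check which disks are still on the old firmware revision before deciding, assuming standard ONTAP 9 CLI access ("cluster1" is a placeholder prompt; the model names are the ones from the alert):

cluster1::> storage disk show -fields model,firmware-revision
cluster1::> storage disk show -model X387_WPSCE16TA07 -fields firmware-revision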
Hi Community members, I need your valuable suggestions as always. I have a 10-node cluster which is highly utilized at all times; two of those nodes regularly hit 80% utilization. As this is a critical cluster, I am unable to set a maintenance window for the ONTAP upgrade, and vol move activity is not possible at the moment because the cluster needs to be upgraded by next week. Any suggestions on how to proceed with a maintenance window would be appreciated. Is there a critical parameter, such as IOPS or latency, that I can look at to assess performance and decide when to set the maintenance window? It should be a non-disruptive upgrade, and the host team should not have any downtime during the activity. The ONTAP version upgrade is planned from 9.11.1P8 to 9.11.1P16 to 9.15.1P16; it is a multi-hop upgrade.
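A minimal sketch of the kinds of counters one could watch to pick a quiet window, assuming standard ONTAP 9 CLI access ("cluster1" and the node name are placeholders):

cluster1::> statistics show-periodic -node node-01 -interval 5 -iterations 60
cluster1::> qos statistics volume latency show -iterations 10
cluster1::> node run -node node-01 -command sysstat -x 1     (per-node CPU/disk utilization; Ctrl-C to stop)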
If we are planning to upgrade the ONTAP version from 9.11.1P6 to 9.15.1P16 on a 10-node cluster, please help with which prechecks need to be taken care of for upgrade readiness. A few points already validated:

1. Upgrade Advisor shows the switch is incompatible with the target ONTAP.
2. Validated the intercluster switch version: the current NX-OS version is 9.3.5, so we will upgrade to NX-OS 9.3.14. The current RCF is at 1.8 and is compatible with the target ONTAP 9.15.1P16.
3. Validate the current SP firmware and check whether the current SP version is compatible with the target ONTAP 9.15.1P16, but I am not able to find 9.15.1P16 in the SP compatibility matrix to validate SP compatibility. If an upgrade is needed, the SP upgrade should be performed before the ONTAP upgrade.
4. Upgrade path: 9.11.1P6 --> 9.11.1P16 (this hop is for "PANIC: page fault (supervisor read data, page not present) on VA 0x20 in process mlogd") --> post-9.11.1P16, perform the bootarg disable to remediate the precheck block "Initialization of network interface failed" on X91440A --> 9.15.1P16. Is this a suggested path?
5. Most of the time the nodes are highly utilized (CPU crossing 50%), and on weekends we notice node utilization is stable below 50% for only 2 hours. Can we prefer to go with this 2-hour window for the ONTAP upgrade, given it is a 10-node cluster?
6. Post the ONTAP upgrade to 9.15.1P16, perform the disk firmware, disk shelf firmware, and DQP upgrades.
7. Is the upgrade sequence correct: intercluster switch upgrade - SP firmware - ONTAP upgrade - disk firmware - disk shelf firmware - DQP upgrade?

In addition to the above points, are there any additional checks, ground rules, or requirements needed to perform the ONTAP upgrade? Please also advise on the revert process and whether it is disruptive. Any help is very much appreciated!
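A minimal sketch of the standard automated (ANDU) precheck flow, assuming ONTAP 9 CLI access; the web server URL and image file name below are placeholders:

cluster1::> system service-processor show                          (current SP/BMC firmware per node)
cluster1::> cluster image package get -url http://webserver/image.tgz
cluster1::> cluster image validate -version 9.15.1P16              (runs the pre-update validation checks)
cluster1::> cluster image show-update-progress                     (monitor once the update is started)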
Hi guys, I have a really weird problem here... I have a NetApp ONTAP 9 CIFS server with 4 LIFs on 3 VLANs:

NETAPP2::> network interface show -vserver CIFS_01
            Logical             Status     Network            Current    Current Is
Vserver     Interface           Admin/Oper Address/Mask       Node       Port    Home
----------- ------------------- ---------- ------------------ ---------- ------- ----
CIFS_01     CIFS_01_av_conn_no2 up/up      10.18.0.201/21     NETAPP2-02 a0a-2   true
            CIFS_01_bkp         up/up      10.18.8.77/27      NETAPP2-01 a0a-61  true
            CIFS_01_servers     up/up      10.18.2.200/21     NETAPP2-01 a0a-2   true
            CIFS_01_usuarios    up/up      10.18.10.18/29     NETAPP2-02 a0a-5   true
4 entries were displayed.

I have clients on all 3 VLANs... and I have a number of shares... After I did some volume moves, I had problems accessing some shares, which I solved by moving one of those LIFs to another node (I think part of my current problem lies in this... I still can't figure it out...). I have one specific share, "backup_conf", whose volume is hosted on node 01:

NETAPP2::> vol show -volume share_backup_conf -fields node
vserver volume            node
------- ----------------- ----------
CIFS_01 share_backup_conf NETAPP2-01

The share:

NETAPP2::> share show -share-name backup_conf -instance
  (vserver cifs share show)

                          Vserver: CIFS_01
                            Share: backup_conf
         CIFS Server NetBIOS Name: CIFS_01
                             Path: /backup_conf
                 Share Properties: oplocks
                                   browsable
                                   changenotify
                                   show-previous-versions
               Symlink Properties: symlinks
          File Mode Creation Mask: -
     Directory Mode Creation Mask: -
                    Share Comment: serviço operacional 22249 compartilhamento para backup de configuração a pedido do Chaves
                        Share ACL: Everyone / Full Control
                                   Guest / Full Control
    File Attribute Cache Lifetime: -
                      Volume Name: -
                    Offline Files: manual
    Vscan File-Operations Profile: standard
Maximum Tree Connections on Share: 4294967295
       UNIX Group for File Create: -

NETAPP2::>

So... I have a client on VLAN a0a-61 that can access that share on that VLAN... my other clients on VLAN a0a-2 can't access it... The ONTAP firewall is disabled (it also didn't work when it was enabled...), and I have no access restrictions related to the CIFS LIFs... So I can't figure out why clients on VLAN 2 can't access this share through VLAN 2 while at the same time they can still access every other share, and clients on VLAN 61 are able to access every share, even this one, through VLAN 61... Also, all clients on all VLANs can use the "smbclient" command to list every share on the CIFS server using the LIF IP on their VLAN, but when I try to mount using the same options under SMB 3.0 (the only one enabled), using the SAME AD credentials, clients on VLANs other than 61 can't mount that specific "backup_conf" share... Could there be something wrong with one of my nodes' ARP tables? (I'm really out of ideas now...)
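A minimal sketch of checks that could help narrow this down, assuming ONTAP 9 CLI access (the vserver and LIF names are taken from the output above):

NETAPP2::> network route show -vserver CIFS_01
NETAPP2::> network interface show -vserver CIFS_01 -fields address,curr-node,curr-port,failover-group
NETAPP2::> vserver cifs session show -vserver CIFS_01 -instance     (which LIF and node each client actually lands on)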
SMB client command that works on every host (VLANs 2 and 61) and shows every share, with no problem:

[root@bacula-sc ~]# history | grep smbcli
 1341  2026-01-19 15:41:48 smbclient -L 10.18.2.200 -U us-bacula -W trt18
 1342  2026-01-19 15:42:32 smbclient -L 10.18.8.87 -U us-bacula -W trt18
 1344  2026-01-19 15:42:47 smbclient -L 10.18.8.77 -U us-bacula -W trt18
 1345  2026-01-19 15:44:40 smbclient -L 10.18.2.200 -U us-bacula -W trt18
 1356  2026-01-19 16:40:19 history | grep smbclient
 1357  2026-01-19 16:40:33 smbclient -L 10.18.0.201 -U us-bacula -W trt18
 1360  2026-01-19 16:40:54 smbclient -L 10.18.0.201 -U us-bacula -W trt18
 1362  2026-01-19 16:53:18 history | grep smbcli

Mount command that only works on VLAN 61 clients:

mount -t nfs -o rw,bg,hard,nfsvers=3,tcp,rsize=1048576,wsize=1048576,noatime,nodiratime,nconnect=8 10.18.8.73:/nfs_sasa_0 /mnt/test_netapp

Mount command that does NOT work anymore on VLAN 2 clients, but should work; it worked until JAN 15, when I moved some volumes and got some errors:

mount -t nfs -o rw,bg,hard,nfsvers=3,tcp,rsize=1048576,wsize=1048576,noatime,nodiratime,nconnect=8 10.18.2.200:/nfs_sasa_0 /mnt/test_netapp

P.S.: OK, I get it, something went foobar on day 15, but I'm also a little lost in the NetApp logs (there are too many logs...)... Can someone give me some advice?
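Since the failing mounts here are NFS, a minimal sketch of testing the export rules for a VLAN 2 client, assuming ONTAP 9 CLI access; the client IP below is a placeholder for an actual VLAN 2 host:

NETAPP2::> vserver export-policy check-access -vserver CIFS_01 -volume nfs_sasa_0 -client-ip 10.18.2.50 -authentication-method sys -protocol nfs3 -access-type read-write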
NetApp C30 storage system with two nodes. Two aggregates have been created as per best practices, each with approximately 99.3 TB of usable capacity. Aggregate 1 hosts a CIFS/SMB FlexVol that is currently utilizing around 80 TB out of 99.3 TB. Aggregate 2 hosts an iSCSI volume that is utilizing approximately 10 TB out of 99.3 TB, leaving significant free capacity available. The CIFS share is accessed by users via the mapped network path \\192.168.30.101\dxb_data\Data. All user data resides within a single qtree (without quotas), the volume junction path is /DXB_DATA, and storage efficiency is enabled. Due to capacity growth on Aggregate 1, I would like to utilize Aggregate 2's free capacity for the existing CIFS share. The key requirement is to ensure that end users can continue accessing the HR folder using the existing mapped drive path, with no disruption to service and no changes required on the client side.
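A minimal sketch of the usual approach: a nondisruptive volume move of the CIFS volume to Aggregate 2. The junction path and the UNC path stay the same, so clients see no change; for SMB shares that are not continuously available there can be a brief pause at cutover, so an off-hours window is prudent. The vserver, volume, and aggregate names below are placeholders:

cluster1::> volume move start -vserver svm_dxb -volume dxb_data -destination-aggregate aggr2
cluster1::> volume move show -vserver svm_dxb -volume dxb_data          (monitor the move and cutover)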