I am having trouble with a replacement disk on a FAS3240 (NetApp Release 8.2.3P4 7-Mode). I have replaced the bad disk, but the replacement appears to be formatted with 512-byte sectors instead of 520-byte sectors. The replacement is a certified NetApp drive, so I know it originally came from NetApp and has NetApp firmware; we got it from a third-party vendor because we are not currently under NetApp support. The disk has been on the shelf too long to return, so we are stuck with it. I uploaded the latest qual_devices files to both controllers just in case those are needed. I know the code version is old, but it is what we have to work with. Is there a way to convert the replacement disk to 520-byte sectors?
12.5 : NETAPP   X412_S15K7560A15 NA08 560.0GB 520B/sect (6SL78JFM0000N4121LM9)
12.6 : NETAPP   X412_S15K7560A15 NA07 560.0GB 512B/sect (Failed) <---- 512B/sect instead of 520B
12.7 : NETAPP   X412_S15K7560A15 NA08 560.0GB 520B/sect (6SL78KWT0000N4130ZZ7)
[Filer1:disk.init.badSectorSize:error]: Disk 3c.12.6 has an unexpected sector size (512 bytes) and cannot be used.
[Filer1:disk.init.failure.error:warning]: Disk 3c.12.6 failed initialization due to error 0.
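A hedged sketch of the 7-Mode commands sometimes tried in this situation: unfail the drive back into the spare pool and run a zeroing pass. Whether zeroing actually rewrites the drive to 520 bytes/sector depends on the drive and firmware, so treat this as something to attempt and verify, not a guaranteed fix:

```
priv set advanced        # advanced privilege is needed for disk unfail
disk unfail -s 3c.12.6   # force the failed disk back into the spare pool
disk zero spares         # zero the spare disks
priv set                 # return to admin privilege
sysconfig -r             # check whether the disk now reports 520B/sect as a spare
```

If the system immediately re-fails the disk with disk.init.badSectorSize again, the drive likely needs a low-level reformat to 520B sectors, which generally requires maintenance-mode or vendor tooling rather than the normal CLI.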
Hello, I have to replace a controller that has its Ethernet ports down. Is there a replacement guide, or any suggestions, for controller replacement on a FAS2552 dual-controller system running clustered ONTAP 9.1? It is an old system. Is a firmware update mandatory or not? BR, Ilir
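Not a full procedure, but on a dual-controller system the swap is normally done around a planned takeover/giveback so the partner keeps serving data. A hedged sketch of the ONTAP 9 commands involved (node names are placeholders; follow the official FRU guide for the physical steps):

```
storage failover show                          # confirm the partner is able to take over
storage failover takeover -ofnode <bad_node>   # partner serves data during the swap
# ...physically replace the controller module; after the new module boots:
storage failover giveback -ofnode <bad_node>   # return resources to the replaced node
system node image show                         # verify ONTAP versions match on both nodes
```

If the replacement module ships with a different ONTAP or firmware level than the surviving node, bringing them to matching versions is generally required before giveback.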
Hi! We have a FAS2750 with 12x 960GB SAS disks and one aggregate shared between both controller nodes (half of the disks owned by one controller, the other half by the second, but only one aggregate in total). We just bought 6x new 1.6TB SSDs and I am trying to create a new aggregate, but without success. Actually, almost. I am using the ONTAP GUI to create this new aggregate. During the wizard, it says it cannot automatically determine the correct aggregate layout because we have fewer disks than needed. OK. Then I switch to manual creation and proceed through the wizard. In the end, I have one aggregate with 5x disks + 1x spare. The problem is that all of these disks are owned by only one controller node, not 'shared' like the other aggregate. So my question is: is it possible to create a new 'shared' aggregate with only 6x disks? (We don't want to mix SSDs and non-SSDs, since the non-SSDs are already in production.) You can see the difference here in this screenshot:
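For context on why the new aggregate comes out single-owner: the 'shared' look of the existing SAS aggregate comes from root-data partitioning, and newly added whole SSDs are typically not partitioned automatically, so a manual aggregate built from them lands on one node. A hedged CLI sketch for checking this and creating the aggregate from the cluster shell (aggregate and node names are placeholders):

```
storage disk show -fields container-type,owner   # partitioned disks show "shared";
                                                 # whole spares show "spare"
storage aggregate create -aggregate aggr_ssd \
    -node <node1> -disktype SSD -diskcount 5     # whole-disk aggregate on one node
```

Whether the six SSDs can instead be partitioned so both nodes share them depends on the platform's ADP rules, so that part is worth confirming against the FAS2750 documentation before committing the disks.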
Dear Community, I recently upgraded our FAS2240-4 system (EOSL) from ONTAP 8.3.2P12 to 9.1P20. All went well until our RHEL 7.7 machines started encountering NFSv4 mount issues after reboot. The NFSv4 exports work fine until the server is actually rebooted; after a reboot the mount points no longer show up in "df -h", and this only happens after a reboot of the Linux server. NFSv3 over UDP works, but NFSv4 does not. Please note that the export policy and an export-policy check both show that the client has RW access, and I do not see any errors in the NetApp logs. However, a pktt trace shows the NFS server returning an "NFS4ERR_DENIED" error. Please find below excerpts from the NetApp packet trace in Wireshark:

45 2.046195 10.XXX.XXX.156 -> 10.XXX.XXX.36 NFS 394 NFS4_OK (x6) V4 Reply (Call In 44) OPEN StateID: 0x4dcd
46 2.046705 10.XXX.XXX.36 -> 10.XXX.XXX.156 TCP 66 811 -> 2049 [ACK] Seq=3053 Ack=3421 Win=24574 Len=0 TSval=1987690989 TSecr=467065747
47 2.046819 10.XXX.XXX.36 -> 10.XXX.XXX.156 NFS 302 V4 Call (Reply In 48) LOCK FH: 0xf9bee644 Offset: 0 Length: <End of File>
48 2.047059 10.XXX.XXX.156 -> 10.XXX.XXX.36 NFS 174 NFS4ERR_DENIED V4 Reply (Call In 47) LOCK Status: NFS4ERR_DENIED

Frame 48: 174 bytes on wire (1392 bits), 174 bytes captured (1392 bits)
Ethernet II, Src: 02:xx:xx:36:xx:2e, Dst: Cisco_b8:00:fe (00:bf:77:b8:00:fe)
Internet Protocol Version 4, Src: 10.XXX.XXX.156, Dst: 10.XXX.XXX.36
Transmission Control Protocol, Src Port: 2049, Dst Port: 811, Seq: 3421, Ack: 3289, Len: 108
Remote Procedure Call, Type: Reply, XID: 0xf6be4f45
Network File System, Ops(2): PUTFH LOCK(NFS4ERR_DENIED)
    [Program Version: 4]
    [V4 Procedure: COMPOUND (1)]
    Status: NFS4ERR_DENIED (10010)
    Tag: <EMPTY>
    Operations (count: 2)
        Opcode: PUTFH (22)
        Opcode: LOCK (12)
            Status: NFS4ERR_DENIED (10010)
            offset: 0
            length: 18446744073709551615
            locktype: WRITE_LT (2)
            Owner
                clientid: 0xb0cb14000000003d
                owner: <DATA>
                length: 20
                contents: <DATA>
            [Main Opcode: LOCK (12)]

I am unable to understand why the NetApp NFS server is returning this error. For some perspective on the Red Hat client: we recently (about a week ago) installed McAfee AV as well, and the server runs IBM MQ application services in HA mode (primary and secondary servers as active-standby). Please find below the details of the NFSv4 mount error:

[root@server101 user_name]# mount -vvv SVM:/vol/vol10/UAT_MQHA_MQ /UAT_MQHA_MQ
mount.nfs: timeout set for Tue Nov 17 11:59:15 2020
mount.nfs: trying text-based options 'vers=4.1,addr=10.XXX.XXX.156,clientaddr=10.XXX.XXX.36'
mount.nfs: mount(2): Protocol not supported
mount.nfs: trying text-based options 'vers=4.0,addr=10.XXX.XXX.156,clientaddr=10.XXX.XXX.36'
^ It gets stuck at this step.

Red Hat says it is the NFS server that is causing this error. Unfortunately, we do not have any support on NetApp. Can anyone please help me understand where the issue could be? Please note that the Red Hat servers do not use any AD or LDAP authentication; users are maintained locally. I would appreciate it if the community could help me with this issue. Thanks much!
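Since the trace shows the LOCK operation failing with NFS4ERR_DENIED, one thing worth checking is whether a lock held before the client rebooted is still registered on the SVM (a conflicting lock from the old client instance would explain a denied LOCK after reboot). A hedged sketch using the standard ONTAP 9 lock display command, with the SVM and volume names taken from the mount command above:

```
vserver locks show -vserver SVM -volume vol10   # list current locks, owners,
                                                # and client addresses
```

If a stale lock from the pre-reboot client shows up there, ONTAP also provides `vserver locks break` (advanced privilege) to clear it, though breaking locks on a volume used by an active-standby IBM MQ pair should be done carefully.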