About FAS and V-Series Storage Systems Discussions
Talk and ask questions about NetApp FAS series unified storage systems and V-Series storage virtualization controllers. Discuss with other members how to optimize these powerful data storage systems.
Hi! We have a FAS2750 with 12x 960GB SAS disks and one aggregate shared between both controller nodes (half of the disks on one controller, the other half on the second controller, but only one aggregate in total). We just bought 6x new 1.6TB SSDs and I'm trying to create a new aggregate, but without success. Actually, almost. I'm using the ONTAP GUI to create this new aggregate. During the wizard, it says that it cannot automatically determine the correct aggregate layout because we have fewer disks than what is needed. OK. Then I switched to manual creation and proceeded through the wizard. At the end, I have one aggregate with 5x disks + 1x spare. The problem is that all of these disks are owned by only one controller node, not 'shared' like the other aggregate. So my question is: is it possible to create a new aggregate of type 'shared' with only 6x disks? (We don't want to mix SSDs and non-SSDs, since we already have the non-SSDs in production.) You can see the difference in this screenshot:
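For anyone comparing notes, the same state can be inspected and the aggregate created from the ONTAP CLI instead of the GUI. A hedged sketch only: the aggregate and node names (aggr_ssd_01, cluster01-01) are placeholders, and whether new SSDs can be root-data partitioned ("shared") like the original drives depends on the platform and ONTAP release, so verify the options against your version before running anything:

```
# Check whether the new SSDs show up as whole disks or as shared
# (partitioned) drives -- container-type tells you which
storage disk show -fields owner,container-type,type

# Compare with the partition ownership of the existing shared drives
storage disk show -partition-ownership

# Manually create the aggregate from the SSDs on one node
# (placeholder names; -disktype SSD keeps the SAS drives out of it)
storage aggregate create -aggregate aggr_ssd_01 -node cluster01-01 -diskcount 5 -disktype SSD
```

If the SSDs report container-type "spare" (whole disk) rather than "shared", that would explain why the GUI builds a single-node aggregate from them.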
Dear Community, I recently upgraded our FAS2240-4 system (EOSL) from ONTAP 8.3.2P12 to 9.1P20. All went well until we started seeing our RHEL 7.7 machines encounter NFSv4 mount issues after a reboot. The NFSv4 exports work fine until the server is actually rebooted; suddenly the mount points no longer show up with "df -h", and this only happens after a reboot of the Linux server. NFSv3 works well with the UDP protocol, but NFSv4 does not. Please note that the export policy and an export-policy check do show that the client has RW access, and I do not see any errors in the NetApp logs. However, a pktt trace shows the NFS server returning an "NFS4ERR_DENIED" error. Please find below excerpts of the NetApp packet trace from Wireshark:

45 2.046195 10.XXX.XXX.156 10.XXX.XXX.36 NFS 394 NFS4_OK,NFS4_OK,NFS4_OK,NFS4_OK,NFS4_OK,NFS4_OK V4 Reply (Call In 44) OPEN StateID: 0x4dcd
46 2.046705 10.XXX.XXX.36 10.XXX.XXX.156 TCP 66 811 → 2049 [ACK] Seq=3053 Ack=3421 Win=24574 Len=0 TSval=1987690989 TSecr=467065747
47 2.046819 10.XXX.XXX.36 10.XXX.XXX.156 NFS 302 V4 Call (Reply In 48) LOCK FH: 0xf9bee644 Offset: 0 Length: <End of File>
48 2.047059 10.XXX.XXX.156 10.XXX.XXX.36 NFS 174 NFS4ERR_DENIED,NFS4_OK,NFS4ERR_DENIED V4 Reply (Call In 47) LOCK Status: NFS4ERR_DENIED

Frame 48: 174 bytes on wire (1392 bits), 174 bytes captured (1392 bits)
Ethernet II, Src: 02:xx:xx:36:xx:2e (02:xx:xx:36:xx:2e), Dst: Cisco_b8:00:fe (00:bf:77:b8:00:fe)
Internet Protocol Version 4, Src: 10.XXX.XXX.156, Dst: 10.XXX.XXX.36
Transmission Control Protocol, Src Port: 2049, Dst Port: 811, Seq: 3421, Ack: 3289, Len: 108
Remote Procedure Call, Type: Reply XID: 0xf6be4f45
Network File System, Ops(2): PUTFH LOCK(NFS4ERR_DENIED)
    [Program Version: 4]
    [V4 Procedure: COMPOUND (1)]
    Status: NFS4ERR_DENIED (10010)
    Tag: <EMPTY>
    Operations (count: 2)
        Opcode: PUTFH (22)
        Opcode: LOCK (12)
            Status: NFS4ERR_DENIED (10010)
            offset: 0
            length: 18446744073709551615
            locktype: WRITE_LT (2)
            Owner
                clientid: 0xb0cb14000000003d
                owner: <DATA>
                    length: 20
                    contents: <DATA>
    [Main Opcode: LOCK (12)]

I'm unable to understand why the NetApp NFS server is returning this error. To give you some perspective on the Red Hat client side: recently (about a week ago) we also installed McAfee AV, and the server runs IBM MQ application services in HA mode (meaning primary and secondary servers as active-standby). Please find below the details of the NFSv4 mount error:

[root@server101 user_name]# mount -vvv SVM:/vol/vol10/UAT_MQHA_MQ /UAT_MQHA_MQ
mount.nfs: timeout set for Tue Nov 17 11:59:15 2020
mount.nfs: trying text-based options 'vers=4.1,addr=10.XXX.XXX.156,clientaddr=10.XXX.XXX.36'
mount.nfs: mount(2): Protocol not supported
mount.nfs: trying text-based options 'vers=4.0,addr=10.XXX.XXX.156,clientaddr=10.XXX.XXX.36'

^ It gets stuck at this step. Red Hat says it is the NFS server that is causing this error. Unfortunately, we do not have any support contract on NetApp. Can anyone please help me understand where the issue could be? Please note that the Red Hat servers do not have any AD or LDAP authentication; users are maintained locally. I would appreciate it if the community could help me with this issue. Thanks much!
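One thing worth checking: NFS4ERR_DENIED on a LOCK operation generally means the server believes a conflicting lock is already held by another owner, which fits a reboot scenario where the pre-reboot client instance's locks were never released. A hedged sketch of how this could be inspected from the ONTAP CLI (the SVM and volume names below are taken from the mount command in the post, the client address from the trace; exact parameter names of "vserver locks break" vary by release, so check the command help first):

```
# List locks currently held on the volume for this SVM
vserver locks show -vserver SVM -volume vol10

# If a stale lock from the pre-reboot client instance shows up, it can
# be broken from advanced privilege level -- use with care, as breaking
# a lock that an application still relies on can corrupt its data
set -privilege advanced
vserver locks break -vserver SVM -volume vol10 -client-address 10.XXX.XXX.36
```

If stale locks are the cause, they should also age out on their own after the NFSv4 lease/grace period expires, so comparing behavior immediately after reboot versus a few minutes later may help confirm the theory.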
I'm drawing up rack layouts for a new installation. The storage rack is a FAS8300 with two shelves to begin with, but this will undoubtedly grow as the load does. In IBM storage arrays, you put the controllers in the middle of the rack and daisy-chain the disk shelves in two chains, above and below. The NetApp docs I've seen don't mention chaining below; they all show a single chain with the shelves on top of the controller. What is the best way to lay out a NetApp FAS8300 in the rack? Thanks, Steven
Hello, We have recently bought 4 SSD drives and are going to install them in an external DS2246 disk shelf connected to a dual-controller FAS2552 storage system. My concern is: should I do a firmware update of the DS2246 disk shelf before installing the disks? If yes, how can I verify the existing shelf firmware and perform the update? Is it a non-disruptive process? BR, Ilir
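Not an authoritative answer, but the current firmware levels can at least be checked before deciding. A hedged sketch of the commands I'd try (names assume a clustered ONTAP 9-era CLI; on older releases some of these live under the nodeshell, so verify against your version):

```
# Show the shelves and their module details, including firmware
storage shelf show

# On older releases, shelf (IOM) firmware also appears in sysconfig
system node run -node * -command sysconfig -v

# Drive firmware can be listed per disk
storage disk show -fields firmware-revision
```

My understanding is that ONTAP can update shelf and disk firmware automatically in the background and that IOM firmware updates are generally non-disruptive, but please confirm that against the NetApp disk shelf firmware documentation for your specific shelf module and ONTAP release before proceeding.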