Hi,
I recently migrated my user's UAT NFS share from an SVM (ONTAP 9.9.1P12) with a regular FlexVol volume to a new SVM (ONTAP 9.12.1P2) with a FlexCache volume. A few days later, their jobs started failing with "too many open files" errors. The only change in the environment is this NetApp migration. The Linux clients are CentOS 7.9. Any ideas?
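For context, this is the kind of per-process descriptor check that can show whether a job is creeping toward its limit. A minimal Python 3 sketch (the 80% threshold and the output format are arbitrary choices; run it as root so all processes are visible):

```python
#!/usr/bin/env python3
"""Survey per-process open file descriptors on a Linux client.

Walks /proc, counts the open descriptors for each PID, and flags any
process that is close to its RLIMIT_NOFILE soft limit, since a slow
descriptor leak is the usual cause of "too many open files" (EMFILE).
"""
import os

def fd_report(threshold_pct=80):
    for pid in (p for p in os.listdir('/proc') if p.isdigit()):
        try:
            open_fds = len(os.listdir(f'/proc/{pid}/fd'))
            with open(f'/proc/{pid}/limits') as f:
                line = next(l for l in f if l.startswith('Max open files'))
            soft_limit = int(line.split()[3])       # 4th field = soft limit
            with open(f'/proc/{pid}/comm') as f:
                name = f.read().strip()
        except (PermissionError, FileNotFoundError, StopIteration):
            continue                                # process exited or no privileges
        pct = 100 * open_fds / soft_limit
        if pct >= threshold_pct:
            print(f'{pid:>7} {name:<20} {open_fds}/{soft_limit} ({pct:.0f}%)')

if __name__ == '__main__':
    fd_report()
```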
Thanks,
Steve
3 REPLIES
I should point out that these failing processes do not fail right away: they work for a few days and then fail, and we have no clue why (we cannot pin it on any other environmental change).
1) Do you see any event logs related to this error on the ONTAP CLI (NetApp side)?
2) Were the NFS clients connected during the migration of the NFS share to the new SVM?
3) Could you try unmounting and remounting the NFS share (if not done already)?
Related KBs:
LINUX client reports error "too many open files" after ONTAP upgrade:
https://kb.netapp.com/onprem/ontap/hardware/LINUX_client_reports_error_%22too_many_open_files%22_after_ONTAP_upgrade
https://access.redhat.com/solutions/2469
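Since the error points at file-descriptor limits, a quick client-side sanity check would be something like the following (a minimal Python 3 sketch; it only reads the current limits and system-wide handle usage, it doesn't change anything):

```python
#!/usr/bin/env python3
"""Quick client-side check of file-handle limits (system-wide and per-process)."""
import resource

# /proc/sys/fs/file-nr: allocated handles, unused handles, system-wide maximum.
with open('/proc/sys/fs/file-nr') as f:
    allocated, _unused, maximum = (int(x) for x in f.read().split())
print(f'kernel file handles: {allocated}/{maximum} '
      f'({100 * allocated / maximum:.1f}% in use)')

# Soft/hard RLIMIT_NOFILE for this process, roughly what `ulimit -n` reports.
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print(f'RLIMIT_NOFILE: soft={soft} hard={hard}')
```

If the per-process soft limit is being exhausted while the system-wide count stays low, the limit (or a leaking process) is the thing to chase rather than the filer.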
Thanks @Ontapforrum!
1. No event logs related to this that we know of; it happens unpredictably, and never often enough that we can react or know exactly when it occurred. We think we now have a way to capture the timestamp (within 30 minutes), but the problem hasn't resurfaced (the app team also made a change on their side).
2. Yes and no: there are only two clients, and both were rebooted as part of the migration, *before* this issue even surfaced (it surfaced days after the migration). These two Linux hosts also reboot weekly.
3. Already done (see answer 2).
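For anyone hitting the same thing, here's a minimal sketch of a cron-driven logger along the lines of what answer 1 describes (the log path is a placeholder and this is illustrative, not exactly what we set up):

```python
#!/usr/bin/env python3
"""Append a timestamped snapshot of descriptor usage to a log file.

Meant to run every few minutes from cron on each client so that, when
"too many open files" comes back, the log narrows the onset to minutes.
"""
import os
import time

LOGFILE = '/var/log/fd-usage.log'   # placeholder path, adjust as needed

def top_consumer():
    """Return (fd_count, pid, name) for the process holding the most descriptors."""
    best = (0, '-', '-')
    for pid in (p for p in os.listdir('/proc') if p.isdigit()):
        try:
            count = len(os.listdir(f'/proc/{pid}/fd'))
            with open(f'/proc/{pid}/comm') as f:
                name = f.read().strip()
        except (PermissionError, FileNotFoundError):
            continue
        if count > best[0]:
            best = (count, pid, name)
    return best

# System-wide allocated handle count from /proc/sys/fs/file-nr (first field).
with open('/proc/sys/fs/file-nr') as f:
    allocated = int(f.read().split()[0])

count, pid, name = top_consumer()
stamp = time.strftime('%Y-%m-%dT%H:%M:%S')
with open(LOGFILE, 'a') as log:
    log.write(f'{stamp} allocated={allocated} top_pid={pid} comm={name} fds={count}\n')
```

Run it from root's crontab every few minutes and grep the log once the error reappears.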
