<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>Re: Too many open files with flexcache in Network and Storage Protocols</title>
    <link>https://community.netapp.com/t5/Network-and-Storage-Protocols/Too-many-open-files-with-flexcache/m-p/445931#M9898</link>
    <description>&lt;P&gt;1) Do you see any event logs related to this error on the ONTAP console (NetApp side) via the CLI?&lt;BR /&gt;2) Were the NFS clients connected during the migration of the NFS share to the new SVM?&lt;BR /&gt;3) Could you try unmounting &amp;amp; remounting the NFS share (if not done already)?&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Related KB:&lt;/P&gt;&lt;P&gt;LINUX client reports error "too many open files" after ONTAP upgrade:&lt;BR /&gt;&lt;A href="https://kb.netapp.com/onprem/ontap/hardware/LINUX_client_reports_error_%22too_many_open_files%22_after_ONTAP_upgrade" target="_blank"&gt;https://kb.netapp.com/onprem/ontap/hardware/LINUX_client_reports_error_%22too_many_open_files%22_after_ONTAP_upgrade&lt;/A&gt;&lt;/P&gt;&lt;P&gt;&lt;BR /&gt;&lt;A href="https://access.redhat.com/solutions/2469" target="_blank"&gt;https://access.redhat.com/solutions/2469&lt;/A&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
    <pubDate>Fri, 14 Jul 2023 20:27:32 GMT</pubDate>
    <dc:creator>Ontapforrum</dc:creator>
    <dc:date>2023-07-14T20:27:32Z</dc:date>
    <item>
      <title>Too many open files with flexcache</title>
      <link>https://community.netapp.com/t5/Network-and-Storage-Protocols/Too-many-open-files-with-flexcache/m-p/445922#M9896</link>
      <description>&lt;P&gt;Hi,&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;I recently migrated my users' UAT NFS share from one SVM (9.9.1P12) with a regular FlexVol volume to a new SVM (9.12.1P2) with a FlexCache volume. A few days later, their jobs started failing with "too many open files" error messages. The only change was this NetApp migration. Any ideas? The Linux clients are CentOS 7.9.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Thanks,&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Steve&lt;/P&gt;</description>
      <pubDate>Wed, 04 Jun 2025 09:47:05 GMT</pubDate>
      <guid>https://community.netapp.com/t5/Network-and-Storage-Protocols/Too-many-open-files-with-flexcache/m-p/445922#M9896</guid>
      <dc:creator>SCL</dc:creator>
      <dc:date>2025-06-04T09:47:05Z</dc:date>
    </item>
    <item>
      <title>Re: Too many open files with flexcache</title>
      <link>https://community.netapp.com/t5/Network-and-Storage-Protocols/Too-many-open-files-with-flexcache/m-p/445923#M9897</link>
      <description>&lt;P&gt;I should point out that these failing processes do not fail right away; they work, and then a few days later they fail -- and we have no clue why (we cannot pin it to any other environmental issue).&lt;/P&gt;</description>
      <pubDate>Fri, 14 Jul 2023 18:18:59 GMT</pubDate>
      <guid>https://community.netapp.com/t5/Network-and-Storage-Protocols/Too-many-open-files-with-flexcache/m-p/445923#M9897</guid>
      <dc:creator>SCL</dc:creator>
      <dc:date>2023-07-14T18:18:59Z</dc:date>
    </item>
    <item>
      <title>Re: Too many open files with flexcache</title>
      <link>https://community.netapp.com/t5/Network-and-Storage-Protocols/Too-many-open-files-with-flexcache/m-p/445931#M9898</link>
      <description>&lt;P&gt;1) Do you see any event logs related to this error on the ONTAP console (NetApp side) via the CLI?&lt;BR /&gt;2) Were the NFS clients connected during the migration of the NFS share to the new SVM?&lt;BR /&gt;3) Could you try unmounting &amp;amp; remounting the NFS share (if not done already)?&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Related KB:&lt;/P&gt;&lt;P&gt;LINUX client reports error "too many open files" after ONTAP upgrade:&lt;BR /&gt;&lt;A href="https://kb.netapp.com/onprem/ontap/hardware/LINUX_client_reports_error_%22too_many_open_files%22_after_ONTAP_upgrade" target="_blank"&gt;https://kb.netapp.com/onprem/ontap/hardware/LINUX_client_reports_error_%22too_many_open_files%22_after_ONTAP_upgrade&lt;/A&gt;&lt;/P&gt;&lt;P&gt;&lt;BR /&gt;&lt;A href="https://access.redhat.com/solutions/2469" target="_blank"&gt;https://access.redhat.com/solutions/2469&lt;/A&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Fri, 14 Jul 2023 20:27:32 GMT</pubDate>
      <guid>https://community.netapp.com/t5/Network-and-Storage-Protocols/Too-many-open-files-with-flexcache/m-p/445931#M9898</guid>
      <dc:creator>Ontapforrum</dc:creator>
      <dc:date>2023-07-14T20:27:32Z</dc:date>
    </item>
    <item>
      <title>Re: Too many open files with flexcache</title>
      <link>https://community.netapp.com/t5/Network-and-Storage-Protocols/Too-many-open-files-with-flexcache/m-p/445933#M9899</link>
      <description>&lt;P&gt;Thanks&amp;nbsp;&lt;a href="https://community.netapp.com/t5/user/viewprofilepage/user-id/73493"&gt;@Ontapforrum&lt;/a&gt;&amp;nbsp;!&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;1. No event logs related to this that we know of, since it happens "whenever" and never often enough that we can react or know exactly when it occurred. We think we now have a way to capture the timestamp (within 30 minutes), but the problem hasn't re-surfaced (the app team also made a change on their side).&lt;/P&gt;&lt;P&gt;2. Yes and no -- only two clients, and they were rebooted as part of the migration *before* this issue even surfaced (it surfaced days after the migration event); also, these two Linux hosts reboot weekly.&lt;/P&gt;&lt;P&gt;3. Already done (see answer 2).&lt;/P&gt;</description>
      <pubDate>Fri, 14 Jul 2023 21:05:36 GMT</pubDate>
      <guid>https://community.netapp.com/t5/Network-and-Storage-Protocols/Too-many-open-files-with-flexcache/m-p/445933#M9899</guid>
      <dc:creator>SCL</dc:creator>
      <dc:date>2023-07-14T21:05:36Z</dc:date>
    </item>
  </channel>
</rss>

