<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>Topic: CIFS multi-point performance issues in Network and Storage Protocols</title>
    <link>https://community.netapp.com/t5/Network-and-Storage-Protocols/CIFS-multi-point-performance-issues/m-p/132951#M8829</link>
    <description>&lt;P&gt;&lt;STRONG&gt;Setup&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;FAS2552 with an additional shelf, running ONTAP 9.1P2 (2x24 10k SAS disks)&lt;/P&gt;&lt;P&gt;2 heads, 1x 10G port per head&lt;/P&gt;&lt;P&gt;FAS2552 &amp;lt;&amp;gt; 16-port 10G NetApp switch &amp;lt;&amp;gt; 10G trunk &amp;lt;&amp;gt; Cisco 3850X switch (48p 1G, 2x 10G) &amp;lt;&amp;gt; several servers at 1G&lt;/P&gt;&lt;P&gt;Servers are 2012 R2, clients Windows 10 (all 1G connections from the same switch hosting the 10G trunk to the NetApp switch)&lt;/P&gt;&lt;P&gt;_______________________&lt;/P&gt;&lt;P&gt;I'm fairly new to CDOT and have limited experience with network storage.&lt;/P&gt;&lt;P&gt;My issue is that the CIFS shares on this device are capping out at about 1G of total throughput. For example, Server1 starts a file copy from a share at full gigabit speed; then Server2 begins copying from the same share and the transfer rate drops to half a gig for each. I can copy from multiple NetApps at a full 1G each, but I'm unable to get any single NetApp share to exceed 1G aggregate throughput.&lt;/P&gt;&lt;P&gt;To isolate this as a CIFS issue, I connected three servers to the FAS over iSCSI. After creating three LUNs and mapping them to three physically separate servers on the same switch, I observed three separate full 1G connections simultaneously.&lt;/P&gt;&lt;P&gt;The network should be capable of pushing around ten 1G connections through that trunk port, as it did for the iSCSI test; I'm not sure what to look at next.&lt;/P&gt;&lt;P&gt;This is our current "vserver cifs options show" output with the advanced privilege level set.&lt;/P&gt;&lt;PRE&gt;Vserver: svm1

                            Client Session Timeout: 900
                              Copy Offload Enabled: true
                                Default Unix Group: -
                                 Default Unix User: pcuser
                                   Guest Unix User: -
               Are Administrators mapped to 'root': true
           Is Advanced Sparse File Support Enabled: true
                  Direct-Copy Copy Offload Enabled: true
                           Export Policies Enabled: false
            Grant Unix Group Permissions to Others: false
                          Is Advertise DFS Enabled: false
     Is Client Duplicate Session Detection Enabled: true
               Is Client Version Reporting Enabled: true
                                    Is DAC Enabled: false
                      Is Fake Open Support Enabled: true
                         Is Hide Dot Files Enabled: false
                              Is Large MTU Enabled: true
                             Is Local Auth Enabled: true
                 Is Local Users and Groups Enabled: true
            Is NetBIOS over TCP (port 139) Enabled: true
               Is NBNS over UDP (port 137) Enabled: false
                               Is Referral Enabled: false
             Is Search Short Names Support Enabled: false
  Is Trusted Domain Enumeration And Search Enabled: true
                        Is UNIX Extensions Enabled: false
          Is Use Junction as Reparse Point Enabled: true
                               Max Multiplex Count: 255
              Max Same User Session Per Connection: 2050
                 Max Same Tree Connect Per Session: 4096
                      Max Opens Same File Per Tree: 800
                          Max Watches Set Per Tree: 100
                   Is Path Component Cache Enabled: true
    NT ACLs on UNIX Security Style Volumes Enabled: true
                                  Read Grants Exec: disabled
                                  Read Only Delete: disabled
                  Reported File System Sector Size: 4096
                                Restrict Anonymous: no-restriction
                              Shadowcopy Dir Depth: 5
                                Shadowcopy Enabled: true
                                      SMB1 Enabled: true
                  Max Buffer Size for SMB1 Message: 65535
                                      SMB2 Enabled: true
                                      SMB3 Enabled: true
                                    SMB3.1 Enabled: true
            Map Null User to Windows User or Group: -
                                      WINS Servers: -
         Report Widelink as Reparse Point Versions: SMB1&lt;/PRE&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
    <pubDate>Wed, 04 Jun 2025 14:49:20 GMT</pubDate>
    <dc:creator>JohnsonSean</dc:creator>
    <dc:date>2025-06-04T14:49:20Z</dc:date>
    <item>
      <title>CIFS multi-point performance issues</title>
      <link>https://community.netapp.com/t5/Network-and-Storage-Protocols/CIFS-multi-point-performance-issues/m-p/132951#M8829</link>
      <description>&lt;P&gt;&lt;STRONG&gt;Setup&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;FAS2552 with an additional shelf, running ONTAP 9.1P2 (2x24 10k SAS disks)&lt;/P&gt;&lt;P&gt;2 heads, 1x 10G port per head&lt;/P&gt;&lt;P&gt;FAS2552 &amp;lt;&amp;gt; 16-port 10G NetApp switch &amp;lt;&amp;gt; 10G trunk &amp;lt;&amp;gt; Cisco 3850X switch (48p 1G, 2x 10G) &amp;lt;&amp;gt; several servers at 1G&lt;/P&gt;&lt;P&gt;Servers are 2012 R2, clients Windows 10 (all 1G connections from the same switch hosting the 10G trunk to the NetApp switch)&lt;/P&gt;&lt;P&gt;_______________________&lt;/P&gt;&lt;P&gt;I'm fairly new to CDOT and have limited experience with network storage.&lt;/P&gt;&lt;P&gt;My issue is that the CIFS shares on this device are capping out at about 1G of total throughput. For example, Server1 starts a file copy from a share at full gigabit speed; then Server2 begins copying from the same share and the transfer rate drops to half a gig for each. I can copy from multiple NetApps at a full 1G each, but I'm unable to get any single NetApp share to exceed 1G aggregate throughput.&lt;/P&gt;&lt;P&gt;To isolate this as a CIFS issue, I connected three servers to the FAS over iSCSI. After creating three LUNs and mapping them to three physically separate servers on the same switch, I observed three separate full 1G connections simultaneously.&lt;/P&gt;&lt;P&gt;The network should be capable of pushing around ten 1G connections through that trunk port, as it did for the iSCSI test; I'm not sure what to look at next.&lt;/P&gt;&lt;P&gt;This is our current "vserver cifs options show" output with the advanced privilege level set.&lt;/P&gt;&lt;PRE&gt;Vserver: svm1

                            Client Session Timeout: 900
                              Copy Offload Enabled: true
                                Default Unix Group: -
                                 Default Unix User: pcuser
                                   Guest Unix User: -
               Are Administrators mapped to 'root': true
           Is Advanced Sparse File Support Enabled: true
                  Direct-Copy Copy Offload Enabled: true
                           Export Policies Enabled: false
            Grant Unix Group Permissions to Others: false
                          Is Advertise DFS Enabled: false
     Is Client Duplicate Session Detection Enabled: true
               Is Client Version Reporting Enabled: true
                                    Is DAC Enabled: false
                      Is Fake Open Support Enabled: true
                         Is Hide Dot Files Enabled: false
                              Is Large MTU Enabled: true
                             Is Local Auth Enabled: true
                 Is Local Users and Groups Enabled: true
            Is NetBIOS over TCP (port 139) Enabled: true
               Is NBNS over UDP (port 137) Enabled: false
                               Is Referral Enabled: false
             Is Search Short Names Support Enabled: false
  Is Trusted Domain Enumeration And Search Enabled: true
                        Is UNIX Extensions Enabled: false
          Is Use Junction as Reparse Point Enabled: true
                               Max Multiplex Count: 255
              Max Same User Session Per Connection: 2050
                 Max Same Tree Connect Per Session: 4096
                      Max Opens Same File Per Tree: 800
                          Max Watches Set Per Tree: 100
                   Is Path Component Cache Enabled: true
    NT ACLs on UNIX Security Style Volumes Enabled: true
                                  Read Grants Exec: disabled
                                  Read Only Delete: disabled
                  Reported File System Sector Size: 4096
                                Restrict Anonymous: no-restriction
                              Shadowcopy Dir Depth: 5
                                Shadowcopy Enabled: true
                                      SMB1 Enabled: true
                  Max Buffer Size for SMB1 Message: 65535
                                      SMB2 Enabled: true
                                      SMB3 Enabled: true
                                    SMB3.1 Enabled: true
            Map Null User to Windows User or Group: -
                                      WINS Servers: -
         Report Widelink as Reparse Point Versions: SMB1&lt;/PRE&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Wed, 04 Jun 2025 14:49:20 GMT</pubDate>
      <guid>https://community.netapp.com/t5/Network-and-Storage-Protocols/CIFS-multi-point-performance-issues/m-p/132951#M8829</guid>
      <dc:creator>JohnsonSean</dc:creator>
      <dc:date>2025-06-04T14:49:20Z</dc:date>
    </item>
  </channel>
</rss>

