When I check CIFS latency through the cifs stat command, the overall CIFS latency for the filer is pretty good, i.e. 3 ms. But when I check the individual CIFS latency for each CIFS share through Performance Advisor, I find the values pretty high, i.e. around >1000 ms at times. When I correlate the overall CIFS latency with the individual CIFS latency for a share, comparing the timestamps, I find that while the CIFS latency for the filer is 3 ms, at the same time the CIFS latency for a particular CIFS share would be 10000 ms. How is CIFS latency measured for the filer and for individual CIFS shares?
Pay close attention to the unit of measurement being used in different views of Performance Advisor. For example, when looking at the "Overall Latency Per Protocol" view for the controller, the CIFS counter is shown in milliseconds (you'd probably see 3 ms here). However, if you drill down to an individual volume that may be used for a CIFS share and look at the "Overall Latency by Op Type" view, the values are reported in MICROseconds, not milliseconds. So you'd see something like 1,000-2,000 microseconds (1-to-2 ms) for the actual volume.
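To illustrate the unit mismatch described above, here's a minimal sketch using hypothetical sample values (the 3 ms controller figure and a ~2,000-microsecond volume figure are assumed for illustration only):

```python
# Hypothetical readings from the two Performance Advisor views described above.
controller_cifs_latency_ms = 3     # "Overall Latency Per Protocol" view: MILLIseconds
volume_cifs_latency_us = 2000      # "Overall Latency by Op Type" view: MICROseconds

# Convert the per-volume figure to milliseconds before comparing the two.
volume_cifs_latency_ms = volume_cifs_latency_us / 1000

print(f"Controller CIFS latency: {controller_cifs_latency_ms} ms")
print(f"Volume CIFS latency:     {volume_cifs_latency_ms} ms")
# Both figures land in the same low-millisecond range once the units agree.
```

A raw "2000" next to a "3" looks like a thousand-fold difference, but once both numbers are expressed in milliseconds they are the same order of magnitude.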
This isn't necessarily a bug, but it's been requested that future versions of Performance Advisor show everything in milliseconds for consistency.
Overall average filer CIFS latency won't tell you much beyond an average. I would think you would want to know which of your volumes are experiencing the worst latency. To figure that out, you can download NetApp Performance Advisor from Operations Manager to see which volumes in your enterprise have the highest latency (in real time). With this information in hand, you can then run cifs top (provided you enable options cifs.per_client_stats.enable on) to see if a particular user or app is causing the high I/O.
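The per-client workflow above boils down to two console commands on the filer, a sketch of which is shown below (cifs top sort options vary by Data ONTAP release, so check the man page on your system):

```shell
# Enable per-client CIFS statistics collection (off by default,
# as it adds some overhead to CIFS processing).
options cifs.per_client_stats.enable on

# Show the busiest CIFS clients/users, to spot who is driving the I/O.
cifs top
```

Once you've identified the offending client or application, you can correlate it against the high-latency volumes found in Performance Advisor.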