In its best practices for vSphere, NetApp recommends setting "vol options <volume> no_atime_update on" for all volumes - but does this recommendation also apply to a volume that holds CIFS shares (served directly from the filer)?
As far as I understand, this option should be set on NFS volumes which hold VMDKs etc., but the document says nothing about CIFS.
Right now I am struggling with very poor filer performance on CIFS, and I am wondering what consequences this command would have for the currently running CIFS workload, which shows (with the sysstat -x 2 command) an average of 3000-4500 IO/s. Would it help at all? The filer is currently running at over 90% CPU, so any performance gain would be welcome until we switch to new boxes. The most important question for me is: does this command cause any "time problems" for connected CIFS clients or for the files themselves?
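For context, the commands in question look roughly like this - a 7-Mode CLI sketch, where "cifs_vol" is a placeholder volume name, not one from this thread:

```shell
# 7-Mode filer console; cifs_vol is a hypothetical volume name
vol status -v cifs_vol                      # shows the current no_atime_update setting
vol options cifs_vol no_atime_update on     # stop updating atime on reads
sysstat -x 2                                # extended per-2-second stats (CPU, net, disk, CIFS ops)
```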
Would enabling, for instance, the minra option on that volume (around 2 TB) help to get better results?
Thank you in advance for any replies.
For CIFS we do not typically turn off access time updates. If you are using any kind of reporting or metrics based on user access times, or if clients depend on access times, you probably want to leave the option as is. You could create a FlexClone of the volume, change atime_update on the clone, and do some testing, but with the system already constrained that might not be feasible.
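To make the FlexClone test concrete, a hedged 7-Mode sketch (volume names are placeholders, and this assumes the flex_clone license is installed):

```shell
# clone the CIFS volume, backed by a snapshot (hypothetical names)
vol clone create cifs_vol_test -b cifs_vol
vol options cifs_vol_test no_atime_update on   # change atime behavior only on the clone
# ...point a test CIFS share at the clone and compare behavior...
vol offline cifs_vol_test
vol destroy cifs_vol_test -f                   # clean up after testing
```

The clone shares blocks with the parent, so this costs little space, but the extra test workload still lands on the same busy controller.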
I would open a case and run a perfstat with support to find the bottleneck (CPU at 90% alone may not be the only issue) and determine the next steps for fixing the performance problem (option changes, reallocation, additional disks, a controller upgrade, etc.).
OK, thanks for the answer, but how do I verify whether the clients connecting to CIFS depend on access times or not?
I must say that I am already fed up with NetApp support and their way of troubleshooting. They blame all problems on misaligned VMs.
Typically home-directory users benefit from having access times, and it could be a problem if they are turned off.
Misaligned VMs are often a big issue, and we can see them clearly in perfstat via the pw_partial writes counters... it is plausible that misaligned VMs are causing issues for other workloads, like CIFS, if they are on the same controller. Sometimes people are too quick to point to misalignment, but if the metrics show it is an issue, it is worth addressing. We have all had support issues with many vendors, NetApp included... but NetApp escalations always do an excellent job. Work with your NetApp SE or ask for a supervisor if you are not getting good support, and they always step up.
Thanks for your answer. I know that misaligned disks can impact performance somewhat. However, even NetApp documents (I don't have it at hand, but I saw it in some TR) say that the performance penalty from misalignment is ~5-7%. So even if I fixed my misalignment (which is not that big anyway - a couple of VMs with very low IO), I wouldn't gain much from it...
That's the most-cited reason by all parties (not only NetApp) - "oh, you have performance problems - then fix misalignment first"... That's just so lame. But on the other hand, take a look at the following situation (which I also have):
We have ~30 VMs with Windows 2003 installed, but they use dynamic disks inside. mbrscan shows them as misaligned, but you can't touch them (the filesystem would be corrupted). The same applies to Linux with LVM partitions - if you touch them with mbralign, the data is gone forever.
So my question is - are these VMs aligned or not? I even tried the following: I aligned a W2K3 VM with basic disks (mbrscan showed it as aligned), then simply converted basic -> dynamic, and it immediately showed as misaligned. So was it really misaligned, or is it just that mbrscan can't read the offset of a dynamic disk? That bothers me too. But by now I am fairly well prepared: I don't have that many misaligned VMs, and when moving to the new systems (based on FAS3160) I will do my best to keep misaligned VMs off that platform. I know NetApp will then find 100 other reasons to blame the customer (maybe flow control or other things), but at least I will not hear "oh, your VMs are misaligned"...
So in the end, please tell me - how do you look at this pw_partial writes counter? Do you run perfstat and go through its output, or is there another way to verify it? I would appreciate it if you would share your approach to troubleshooting performance.
And one last question - in sysstat, under normal conditions, should network kB in and disk kB written be equal? Because if, say, 100 MB comes in from the network, it has to be written to disk, as far as I understand it.
If you search the communities, you will find a lot of information on how to check for partial writes. Here is a link that was posted in one of the threads: http://www.vmadmin.info/2010/07/quantifying-vmdk-misalignment.html. mbrscan is just a hint - at the end of the day, it is the real IO distribution that counts. It is just that, in the common case, a misaligned partition is more likely to cause partial writes.
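The partition-level check mbrscan does is really just arithmetic: WAFL works in 4 KB blocks, so a guest partition is aligned when its starting LBA times the 512-byte sector size is a multiple of 4096. A minimal sketch (the start LBAs below are the common Windows defaults, not values taken from this thread):

```shell
# A partition is WAFL-aligned when (start_LBA * 512) % 4096 == 0.
check_alignment() {
  local lba=$1 off
  off=$(( lba * 512 % 4096 ))
  if [ "$off" -eq 0 ]; then
    echo "LBA $lba: aligned"
  else
    echo "LBA $lba: misaligned, $off bytes into a 4K block"
  fi
}

check_alignment 63     # classic Windows 2003 default start sector
check_alignment 2048   # Windows 2008+ / modern Linux default
```

This prints "LBA 63: misaligned, 3584 bytes into a 4K block" and "LBA 2048: aligned" - which is why older guests so often show up as misaligned while newer ones do not.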
And unfortunately there is no publicly available tool to automate perfstat analysis, which is one more reason to open a support case - they hopefully have more effective ways to deal with the amount of information collected by perfstat.
Regarding disk vs. net - IMHO disk writes should match network input when averaged over some time interval. A consistent mismatch is one indication of potential issues.
Regarding the basic => dynamic conversion:
When you convert a (non-system) basic disk, Windows creates a single partition of type 0x42 that covers all available space and creates volumes inside this partition corresponding to the previously existing partitions; the partition table entries for those partitions are removed.
This special 0x42 partition starts immediately after the MBR and so appears unaligned. But the actual volumes inside should remain aligned.
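In other words, the container partition starting right after the MBR (LBA 1 in the simplest case) sits 512 bytes into a 4 KB WAFL block, so a partition-table scan flags it, even though a volume inside it can still sit at a 4 KB-multiple absolute offset. As arithmetic:

```shell
# container partition at LBA 1 -> offset 512 within a 4K block, looks misaligned
echo $(( 1 * 512 % 4096 ))
# a volume whose absolute start is a 4K multiple (e.g. LBA 2048) -> 0, aligned
echo $(( 2048 * 512 % 4096 ))
```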
That is why I said mbrscan is just a hint. Unfortunately, dynamic disks do not offer any way to force a particular alignment for volumes (at least, I am not aware of any tool that does it). There is really no compelling reason to use dynamic disks with modern storage arrays and/or in a virtual environment.