Another option that might work is to disable the auto firmware update check. The command to do that is "options acp.fwAutoUpdateEnabled off". This gives you the benefit of keeping ACP enabled while potentially fixing this bug.
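For anyone who wants to check the current value before changing it, the 7-Mode console sequence would look roughly like this (a sketch only; the exact option name and output format may vary by ONTAP release, so verify on your system first):

```
filer> options acp.fwAutoUpdateEnabled
acp.fwAutoUpdateEnabled      on
filer> options acp.fwAutoUpdateEnabled off
```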
I currently have an open case on this. On my 6240 controllers, vol0 is seeing higher read/write/other operations than the production VMware and Oracle volumes. We're looking at the ACP options and possibly disabling ACP.
Have you had any success with this? We are seeing the same issue on a 6210 cluster: vol0 runs upwards of 1,000 IOPS at times and is consistently either the top volume or in the top 5. Considering our vol0s sit alone on a 3-disk aggregate, that level of load shouldn't be possible.
My open case has resulted in us looking at turning off ACP (options acp.enabled off). I am going to do that later today on a 6280 cluster hosting my TSM db/logs/storage pools. I'll watch the IOPS via Performance Advisor and post my findings here.
Unknown at this time. Support hasn't said it is a bug; disabling ACP was a test to see whether the IO would in fact drop. Technically there shouldn't be any impact from leaving it disabled (none that I have been able to find so far).
After setting acp.enabled to "off" on three 6200 clusters, I have recorded a significant drop in /vol0 IO across all six controllers. I have also noticed a "leveling" of the overall CPU. So far Support has not found any reason not to re-enable ACP (long-term issues, etc.). I have had a 3160 running with no issues for almost two years without the ACP connected. I have one more system left to do: the primary VMware/NFS/iSCSI 6240 array that prompted my quest for a solution to the high IO (scheduled for Sunday morning).
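In case anyone wants to put a number on the drop from their own counter samples, here's a quick sketch. The sample values below are made up for illustration and the data format (lists of ops/s readings) is my assumption, not anything Performance Advisor exports directly:

```python
def percent_drop(before, after):
    """Percent decrease in mean ops/s between two sets of samples."""
    mean = lambda xs: sum(xs) / len(xs)
    return 100.0 * (mean(before) - mean(after)) / mean(before)

# Hypothetical vol0 total-ops samples (ops/s), taken before and
# after setting acp.enabled off:
before = [950, 1020, 880, 1100]
after = [120, 95, 140, 110]
print(round(percent_drop(before, after), 1))  # → 88.2
```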
I assume that the peaks are write ops. If that is the case, then I think this could be caused by ONTAP writing the hourly performance statistics to the /etc/log/cm_stats_hourly file; I see an hourly write peak on all the root volumes.
Still, 1,500 ops is huge just for writing performance-statistics data, but it is the only hourly action I can think of.
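One way to sanity-check this theory is to see whether the spikes land at the top of each hour. A rough sketch (the timestamps, ops values, and threshold are all hypothetical; the data format is my assumption):

```python
from datetime import datetime

def hourly_peaks(samples, threshold=1000):
    """Return (timestamp, at_top_of_hour) for samples above `threshold`.

    at_top_of_hour is True when the sample falls in the first 5 minutes
    of an hour, i.e. where an hourly stats write would be expected.
    """
    peaks = []
    for ts, ops in samples:
        t = datetime.strptime(ts, "%H:%M")
        if ops > threshold:
            peaks.append((ts, t.minute < 5))
    return peaks

# Hypothetical per-interval write-op samples for vol0:
samples = [("13:00", 1500), ("13:15", 80), ("14:01", 1420), ("14:30", 95)]
print(hourly_peaks(samples))  # → [('13:00', True), ('14:01', True)]
```

If every peak comes back flagged True, that would line up with the cm_stats_hourly explanation.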