We are observing the same problem on two systems after an upgrade from 8.3.2P? to 9.1P11.
In our case, after digging a little deeper into the problem, it looks like all spare disks on the cluster have the <is-media-scrubbing> property set to true, like this:
<disk-raid-info>
<active-node-name>stgb-pidpa-n01</active-node-name>
<container-type>spare</container-type>
<disk-shared-info>
<is-sparecore>false</is-sparecore>
</disk-shared-info>
<disk-spare-info>
<is-media-scrubbing>true</is-media-scrubbing>
<is-offline>false</is-offline>
<is-sparecore>false</is-sparecore>
<is-zeroed>true</is-zeroed>
<is-zeroing>false</is-zeroing>
</disk-spare-info>
<disk-uid>5002538A:07219DE0:00000000:00000000:00000000:00000000:00000000:00000000:00000000:00000000</disk-uid>
<effective-disk-type>SSD</effective-disk-type>
<physical-blocks>97677846</physical-blocks>
<position>present</position>
<spare-pool>Pool0</spare-pool>
<used-blocks>97613824</used-blocks>
</disk-raid-info>
and some data disks are reporting media scrubbing like this:
<disk-raid-info>
<active-node-name>stgb-pidpa-n01</active-node-name>
<container-type>shared</container-type>
<disk-aggregate-info>
<checksum-type>none</checksum-type>
<copy-percent-complete>0</copy-percent-complete>
<is-media-scrubbing>true</is-media-scrubbing>
<is-offline>false</is-offline>
<is-prefailed>false</is-prefailed>
<is-reconstructing>false</is-reconstructing>
<is-replacing>false</is-replacing>
<is-zeroed>true</is-zeroed>
<is-zeroing>false</is-zeroing>
<reconstruct-percent-complete>0</reconstruct-percent-complete>
</disk-aggregate-info>
<disk-shared-info>
<aggregate-list>
<shared-aggregate-info>
<aggregate-name>n01_sas_450g_01_hybrid</aggregate-name>
</shared-aggregate-info>
<shared-aggregate-info>
<aggregate-name>n02_sas_450g_01_hybrid</aggregate-name>
</shared-aggregate-info>
</aggregate-list>
<checksum-type>none</checksum-type>
<copy-percent-complete>0</copy-percent-complete>
<is-media-scrubbing>true</is-media-scrubbing>
<is-offline>false</is-offline>
<is-prefailed>false</is-prefailed>
<is-reconstructing>false</is-reconstructing>
<is-replacing>false</is-replacing>
<is-sparecore>false</is-sparecore>
<is-zeroed>true</is-zeroed>
<is-zeroing>false</is-zeroing>
<partitioning-type>storage_pool</partitioning-type>
<reconstruct-percent-complete>0</reconstruct-percent-complete>
<storage-pool>sp1</storage-pool>
</disk-shared-info>
<disk-spare-info>
<is-sparecore>false</is-sparecore>
</disk-spare-info>
<disk-uid>5002538A:07219DD0:00000000:00000000:00000000:00000000:00000000:00000000:00000000:00000000</disk-uid>
<effective-disk-type>SSD</effective-disk-type>
<physical-blocks>97677846</physical-blocks>
<position>shared</position>
<spare-pool>3f195998-6159-11e7-83a4-00a098647eeb</spare-pool>
<used-blocks>97613824</used-blocks>
</disk-raid-info>
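For anyone who wants to check their own cluster: the XML above appears to come from storage-disk-get-iter, and a small script along these lines will list every disk that reports the flag. This is only a sketch using the NetApp Manageability SDK Python bindings; the management LIF name, credentials and ZAPI version below are placeholders, and pagination via next-tag is skipped for brevity.

from NaServer import NaServer, NaElement

srv = NaServer("cluster-mgmt-lif", 1, 31)      # placeholder LIF and ZAPI version
srv.set_transport_type("HTTPS")
srv.set_style("LOGIN")
srv.set_admin_user("admin", "secret")          # placeholder credentials

req = NaElement("storage-disk-get-iter")
req.child_add_string("max-records", "500")
res = srv.invoke_elem(req)
if res.results_status() != "passed":
    raise RuntimeError(res.results_reason())

for disk in res.child_get("attributes-list").children_get():
    name = disk.child_get_string("disk-name")
    raid = disk.child_get("disk-raid-info")
    if raid is None:
        continue
    # <is-media-scrubbing> shows up under disk-spare-info on spares and
    # under disk-aggregate-info / disk-shared-info on data disks, as in
    # the output above, so check all three sections.
    for sub in ("disk-spare-info", "disk-aggregate-info", "disk-shared-info"):
        info = raid.child_get(sub)
        if info and info.child_get_string("is-media-scrubbing") == "true":
            print(name, raid.child_get_string("container-type"), sub)
            break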
Now, in our situation, all of the disks reporting media scrubbing are ADP-partitioned and have at least one partition that is not in use by an aggregate and is in fact presented as a spare, e.g.:
Pool0 spare disks
RAID Disk Device HA SHELF BAY CHAN Pool Type RPM Used (MB/blks) Phys (MB/blks)
--------- ------ ------------- ---- ---- ---- ----- -------------- --------------
Spare disks for block checksum
spare 3a.40.23 3a 40 23 SA:A 0 SAS 10000 857000/1755136000 858483/1758174768
spare 3d.43.23 3d 43 23 SA:B 0 SAS 10000 857000/1755136000 858483/1758174768
spare 3d.44.9 3d 44 9 SA:B 0 SAS 10000 857000/1755136000 858483/1758174768
spare 3d.44.23 3d 44 23 SA:B 0 SAS 10000 857000/1755136000 858483/1758174768
spare 0a.03.7 0a 3 7 SA:A 0 SAS 15000 418000/856064000 420156/860480768
spare 1b.04.17 1b 4 17 SA:B 0 SAS 15000 418000/856064000 420584/861357448
spare 1b.04.23 1b 4 23 SA:B 0 SAS 15000 418000/856064000 420584/861357448
spare 1b.05.23 1b 5 23 SA:B 0 SAS 15000 418000/856064000 420584/861357448
spare 1b.06.23 1b 6 23 SA:B 0 SAS 15000 418000/856064000 420584/861357448
spare 0d.30.0P1 0d 30 0 SA:B 0 SSD N/A 95312/195200512 95320/195216896
spare 0d.30.2P1 0d 30 2 SA:B 0 SSD N/A 95312/195200512 95320/195216896
spare 0d.30.4P1 0d 30 4 SA:B 0 SSD N/A 95312/195200512 95320/195216896
spare 0d.30.6P1 0d 30 6 SA:B 0 SSD N/A 95312/195200512 95320/195216896
spare 0d.30.8P1 0d 30 8 SA:B 0 SSD N/A 95312/195200512 95320/195216896
spare 0d.30.10P1 0d 30 10 SA:B 0 SSD N/A 95312/195200512 95320/195216896
spare 1c.30.1P1 1c 30 1 SA:A 0 SSD N/A 95312/195200512 95320/195216896
spare 1c.30.3P1 1c 30 3 SA:A 0 SSD N/A 95312/195200512 95320/195216896
spare 1c.30.5P1 1c 30 5 SA:A 0 SSD N/A 95312/195200512 95320/195216896
spare 1c.30.7P1 1c 30 7 SA:A 0 SSD N/A 95312/195200512 95320/195216896
spare 1c.30.9P1 1c 30 9 SA:A 0 SSD N/A 95312/195200512 95320/195216896
spare 1c.30.11 1c 30 11 SA:A 0 SSD N/A 381304/780910592 381554/781422768
All of those disks are "scrubbing", even the disks in shelf 30 whose partition 2 is actively in use.
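If it is easier to work from a saved reply than from a live connection, the same check can be done offline. A minimal sketch, assuming the ZAPI reply has been dumped to disks.xml (hypothetical file name):

import xml.etree.ElementTree as ET

tree = ET.parse("disks.xml")                   # hypothetical saved ZAPI reply
for raid in tree.iter("disk-raid-info"):
    uid = raid.findtext("disk-uid", default="?")
    ctype = raid.findtext("container-type", default="?")
    # Flag the disk if any sub-section carries <is-media-scrubbing>true,
    # which catches ADP disks like the shelf 30 SSDs above no matter
    # whether the flag sits in the spare, shared or aggregate section.
    if any(e.text == "true" for e in raid.iter("is-media-scrubbing")):
        print(uid, ctype, "reports media scrubbing")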
We have a case open with support (2007380388) to look a little deeper into this. I will report back as soon as something comes out of this.
Best regards!