A few thoughts with regard to your last post.
First - one thing to keep in mind is that for "traditional" processing, even when everything is handled well during "the day," the biggest overall system stress creator is the backup cycle. Backups can kill perfectly good architectures simply because everything processes data at once, at massive I/O rates compared to regular processing, and it is very common to push things too far. So the fact that backups are stressing your environment is nothing new - everybody runs into it eventually.
So back to the issue at hand. Without the detailed performance data you can pull at the moment OCUM/OPM declares an event, and working from just the event descriptions, yes, it sounds like you have a node processing capability issue. Thus - on the surface - moving data to a new aggregate on the same node may not make a big difference; the same node is still processing all the data.
OPM is pretty good at detecting whether issues are due to aggregates, nodes, the network, or any of the other processing elements in the data flow chain. You'll note that one event called out an aggregate in particular, whereas others called out the node. That *could* mean a couple of things - the aggregate called out might be slowing the node down for other processing, and/or the node may simply be unable to keep up even though the heavy load is concentrated on a single aggregate.
In a traditional "fix-it" mindset - first, do you have an aggregate on a partner node to which you could move part of the data in question (even if SATA disk is only on the one node)? That is, as opposed to just moving to a different aggregate on the same node. Both options can be tried, but with the limited information available I can't say whether you will just relocate the performance issue or actually see improvement. My expectation, given what's known, is that if a performance (SAS) disk aggregate can't handle the total I/O load and the node is bogged down because of it, moving some of that load to a capacity (SATA) tier on the same node is not likely to make things better and may well make them worse. Worth a shot perhaps, but don't expect any miracles - see the rough sketch below.
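If you do try a non-disruptive move to a partner node's aggregate, the ONTAP clustershell makes it a one-liner per volume. The SVM, volume, and aggregate names below are placeholders for illustration only - substitute your own, and check free space on the destination first:

    ::> storage aggregate show -fields node,availsize
    ::> volume move start -vserver svm1 -volume oradata01 -destination-aggregate aggr_sas_node2
    ::> volume move show -vserver svm1 -volume oradata01

The first command confirms which aggregates live on which node and how much space they have, the second kicks off the move, and the third lets you watch progress through cutover.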
So - now the non-traditional fixes. The backups appear to be Oracle based, judging by the node names. I am going to assume your backups are RMAN based - can you use RMAN settings in the backup job to limit the bandwidth available to the backup device and artificially lower the backup load (the RATE parameter on the device configuration, if I remember my Oracle correctly - it's been a while since I worked with Oracle databases)? By default RMAN tries to go as fast as it possibly can; an artificial limit will increase your backup time but may also lower the total I/O load to and from your target volumes.
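For reference, a rough sketch of what that throttling looks like in an RMAN run block - the channel name and the 100M rate are illustrative values only, and you would tune the rate (and the number of channels) to what your nodes can actually absorb:

    RUN {
      # RATE caps the read rate per channel, so total throughput is roughly RATE x channel count
      ALLOCATE CHANNEL ch1 DEVICE TYPE DISK RATE 100M;
      BACKUP DATABASE PLUS ARCHIVELOG;
      RELEASE CHANNEL ch1;
    }

You can also make the limit persistent with CONFIGURE CHANNEL DEVICE TYPE DISK RATE ... so every scheduled backup inherits it rather than editing each job.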
Better yet, can you use Snapshots to handle your Oracle backups instead, either through manual means or through SnapManager for Oracle's integration with RMAN? Rather than streaming traditional backup data to a target volume, Snapshot-based backups and the Oracle integration products are part of the total value proposition of a NetApp solution.
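As a very rough illustration of the Snapshot approach done by hand (SnapManager for Oracle automates this and catalogs the backups for you) - the SVM, volume, and snapshot names are placeholders, and this assumes the datafiles sit on that one volume:

    SQL> ALTER DATABASE BEGIN BACKUP;
    ::> volume snapshot create -vserver svm1 -volume oradata01 -snapshot hot_backup_001
    SQL> ALTER DATABASE END BACKUP;
    SQL> ALTER SYSTEM ARCHIVE LOG CURRENT;

The point is that the Snapshot itself is near-instant, so the hours of bulk data streaming to a backup target volume - and the I/O load that goes with it - largely disappear from the backup window.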
[ soapbox mode on ]
In my opinion, if you don't leverage those capabilities everywhere you can, especially for situations like this, why spend the extra money on NetApp as opposed to just a dumb bunch of disks?
[ soapbox mode off ]
And lastly, assuming the systems are under maintenance, what are your account engineers and the NetApp support teams suggesting with regard to this issue?