Remember you'll still need a hot spare available after you've converted the aggregate to RAID-DP. If you only have one spare today, you won't be able to do this without buying more disks, I'm afraid. Well, you can, but you'll have no hot spares, and I'd rather have RAID-4 than no hot spares. A disk rebuild is quicker and more efficient in most scenarios than doing parity calculations on each read. It's a fine line, but I'd stick with RAID-4 unless you can buy more disks.
RAID-DP has no read or write penalty over RAID-4. Arguably RAID-4 only has one parity calculation per write, but the write cache removes that as a real concern.
Most failing disks are failed proactively, and a disk-to-disk copy to the spare has little or no impact on the CPU. So my RAID-4 system takes very little performance hit on a disk failure. I can still survive two failures, although not at the same time.
In a RAID-DP system I have 2 parity disks but no hot spare, so any failure, proactive or otherwise, forces a parity calculation, and this has to be done for EVERY read until the failed disk is replaced. When the failed disk is replaced I then have to do a full reconstruction from parity, as I have no surviving copy of the data to read from.
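To make that degraded-read cost concrete, here's a minimal sketch of single-parity (XOR) reconstruction — the calculation a RAID group has to run for every read that touches the failed disk until the rebuild completes. The toy stripe layout and helper names are mine for illustration, not NetApp's actual on-disk format:

```python
from functools import reduce

def xor_blocks(blocks):
    """XOR a list of equal-length byte blocks together."""
    return bytes(reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), blocks))

# A toy 4-data-disk stripe; the parity block is the XOR of the data blocks.
data = [b"\x01\x02", b"\x03\x04", b"\x05\x06", b"\x07\x08"]
parity = xor_blocks(data)

# Disk 2 fails: every read of its block now has to XOR all the
# surviving data blocks together with parity -- extra I/O and CPU
# on each read, which is the penalty described above.
survivors = [blk for i, blk in enumerate(data) if i != 2]
reconstructed = xor_blocks(survivors + [parity])
assert reconstructed == data[2]
```

Note the reconstruction has to read every surviving disk in the group, which is why a straight disk-to-disk copy from a healthy (pre-failed) disk is so much cheaper.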
Keep in mind that my thinking above only holds for a small 2000 series with 12 disks, where the RAID group sizes are also small. On a larger system I'd vote RAID-DP every single time, without fail.
Because I have seen reconstructions fail due to latent sector errors, and my priority is data availability over data access speed ☺ Such failures are probably less likely on NetApp due to the way data is placed on disks … but old habits die hard ☺
Proactive disk sparing requires 2 spares, so it's out of the question here anyway.
When I say proactive disk sparing, I mean a disk being failed because of the quantity of bit-level or software errors, rather than a catastrophic disk failure. As far as I remember, this doesn't require 2 disks; 2 spares are only required for the maintenance garage, which runs low-level checks, reformats disks, and brings them back into production after software errors. But I could be completely wrong there.