With regular RAID6, the parity information is striped over all disks in the RAID set, just like with RAID5.
With RAID-DP, the parity information is kept on dedicated drives: one for the "normal" row parity and one for the "diagonal" parity. Concept-wise it works just like RAID4, except there is an additional parity drive for the diagonal parity. This conceptual similarity is why Ontap supports both RAID4 and RAID-DP, but not RAID5, btw.
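As a toy illustration of the two parity sets (a simplified layout with made-up values, not Ontap's actual on-disk format; real RAID-DP diagonals also cover the row-parity disk and use one fewer diagonal than shown here):

```python
from functools import reduce

def xor_all(xs):
    return reduce(lambda a, b: a ^ b, xs)

N = 4  # data disks in this toy example

# data[disk][row] = block value (arbitrary numbers standing in for blocks)
data = [
    [1, 2, 3, 4],
    [5, 6, 7, 8],
    [9, 10, 11, 12],
    [13, 14, 15, 16],
]

# "Normal" row parity, as in RAID4: XOR across all data disks for each row.
row_parity = [xor_all(data[d][r] for d in range(N)) for r in range(N)]

# Diagonal parity: XOR the blocks along each wrapping diagonal.
diag_parity = [xor_all(data[d][(d + k) % N] for d in range(N)) for k in range(N)]

# Losing one disk is recoverable from row parity alone; the diagonal
# parity is what lets the set survive a second simultaneous failure.
lost = data[2][1]
recovered = xor_all(data[d][1] for d in range(N) if d != 2) ^ row_parity[1]
assert recovered == lost
```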
Now, on non-Ontap systems, dedicated parity drives are a disadvantage: they tend to become the hotspot of the RAID set, since they have to be written to on every single write that occurs, no matter how small those writes may be.
Ontap (or the WAFL filesystem, specifically) solves this by collecting incoming writes and writing them out as full stripes whenever possible. When the whole stripe is written at once, the parity drives only get written once per stripe as well and do not become a bottleneck.
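A rough way to see the effect is to count parity-disk writes in a toy model (hypothetical bookkeeping, not Ontap code; a single parity block stands in for both parity drives):

```python
from functools import reduce

def xor_all(xs):
    return reduce(lambda a, b: a ^ b, xs)

class Stripe:
    """Toy stripe with one XOR parity block and a parity-write counter."""
    def __init__(self, blocks):
        self.blocks = list(blocks)
        self.parity = xor_all(self.blocks)
        self.parity_writes = 0

    def write_block(self, i, value):
        # Partial write: read-modify-write touches the parity disk every time.
        self.parity ^= self.blocks[i] ^ value
        self.blocks[i] = value
        self.parity_writes += 1

    def write_stripe(self, values):
        # Full-stripe write: parity is computed in memory and written once.
        self.blocks = list(values)
        self.parity = xor_all(self.blocks)
        self.parity_writes += 1

# Four small in-place updates: four hits on the parity disk.
s = Stripe([0, 0, 0, 0])
for i, v in enumerate([10, 20, 30, 40]):
    s.write_block(i, v)

# The same data written as one full stripe: one hit on the parity disk.
t = Stripe([0, 0, 0, 0])
t.write_stripe([10, 20, 30, 40])

assert s.parity == t.parity  # same end state, very different parity I/O
```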
This only works when WAFL has enough free stripes to work with, though. This is the reason for the recommendation not to fill your aggregates past 80–90% or so.
The advantage of RAID-DP over regular RAID6 is that growing your RAID set is just a matter of adding drives, whereas with RAID6 you'd have to redistribute all the data (and the rotating parity) across the newly grown set, which takes ages, is error-prone, and hurts performance while it runs.
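This is a direct consequence of XOR parity on dedicated drives: a freshly zeroed drive contributes only zero blocks to every stripe, and XOR with zero changes nothing, so the existing parity stays valid without touching any of the old data. A minimal sketch (hypothetical block values):

```python
from functools import reduce

def xor_all(xs):
    return reduce(lambda a, b: a ^ b, xs)

# A hypothetical row of data blocks plus its RAID4-style parity.
row = [7, 13, 42]
parity = xor_all(row)

# Growing the set: a new, zeroed drive adds a 0 block to every row.
# XOR with 0 is a no-op, so the parity is still correct and no data
# needs to be redistributed.
grown_row = row + [0]
assert xor_all(grown_row) == parity
```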