The split can also be into two data partitions (root-data-data, depending on the disk size and type); this allows a more even split between the nodes.
You can see the exact configuration it will create, per disk type and count, in Hardware Universe (https://hwu.netapp.com): select platform > FAS > model > OS version, and when the results are displayed look for the line "ADP Root Partition Configuration".
There are partitioned spares and non-partitioned spares in the system; you can see the expected configuration in the HWU table. In general, the system will try to always keep that number of partitioned spare drives, by partitioning a non-partitioned spare disk when needed.
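To check this on a running system, a couple of standard ONTAP CLI commands can help (a sketch only; exact columns and fields vary by ONTAP version):

```
::> storage aggregate show-spare-disks        # lists spare root/data partitions per node
::> storage disk show -container-type spare   # lists whole (non-partitioned) spare disks
```

Comparing the two outputs shows how many spares are partitioned versus whole, which you can match against the HWU table for your platform.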
New disks added to the shelves and assigned to a node just show up as normal (non-partitioned) spares. They only get partitioned under certain conditions; for example, if you add a disk to a RAID group that contains partitioned drives, the system will partition that disk as well.
I'm not aware of a hard limitation, but it may force the RAID configuration to look a bit different than you planned. For example, if your system can only have 24 partitioned disks, you could be forced into RAID groups of something like 11 or 22 disks, compared to an older standard of, say, 16.
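The sizing constraint above can be sketched with a small calculation: given a fixed number of data partitions, only certain even RAID-group splits are possible, so you may not land on a "traditional" group size like 16. The numbers here are illustrative, not taken from HWU.

```python
def raid_group_options(total_partitions, max_rg_size=24):
    """Return (group_count, group_size) pairs that divide the
    available data partitions into equal-sized RAID groups."""
    return [(n, total_partitions // n)
            for n in range(1, total_partitions + 1)
            if total_partitions % n == 0
            and total_partitions // n <= max_rg_size]

# Hypothetical example: 22 data partitions left after spares.
# The even splits are one group of 22 or two groups of 11 --
# a standard 16-disk group does not divide evenly.
print(raid_group_options(22))
```

This is only a divisibility sketch; in practice you would also weigh parity overhead and rebuild times when picking the group size.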