This can get crazy! You will end up with small raidgroups eating up disks for parity...sacrificing all that space for what?
Now, given a large environment (like 10 shelves or more)...maybe you can start with this. I did this for a customer once.
...ONCE
We ended up with 16- or 18-disk raidgroups, and there were no more than 2 disks per raidgroup per shelf.
We took this one a bit further too....all even-numbered disks (*0, *2, *4, *6, *8) were assigned to node 2, the rest to node 1.
When a disk fails, assign even to node 2, odd to node 1 (see the sketch below).
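As a minimal sketch of that ownership rule, assuming 7-mode style disk names like "2a.17" where the number after the dot is the disk ID (the function name is mine, purely for illustration):

def owner_for_disk(disk_name: str) -> int:
    # ASSUMPTION: names like '2a.17'; the number after the dot
    # is the disk ID. Even ID -> node 2, odd ID -> node 1.
    disk_id = int(disk_name.split(".")[-1])
    return 2 if disk_id % 2 == 0 else 1

print(owner_for_disk("2a.16"))  # 2 (even -> node 2)
print(owner_for_disk("2a.17"))  # 1 (odd  -> node 1)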
This made the aggregates a bit trickier to place, but we made it work.
Now, when a disk fails, I cannot control where it rebuilds other than to a spare. I tried to keep the spares on one shelf, thinking that in the event of multiple failures they would likely belong to different raidgroups.
However, one could script some monitoring software to watch where the spares are and watch for more than 2 disks of a raidgroup showing up in the same shelf, then possibly force a "disk copy start" command to nondisruptively move the disk. THIS TAKES LONGER than a reconstruction!!! Why? The process is NICE'd to use limited resources because it is not critical yet.
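If you wanted to script that check, here is a rough sketch in Python. Everything in it is an assumption on my part: the shelf math (disk IDs 16-29 on one shelf, 32-45 on the next, so ID // 16 works as a shelf index), the 2-per-shelf threshold, and the raidgroup-to-disks map, which you would have to build yourself (e.g. by parsing "aggr status -r" output). It only flags candidates; running "disk copy start" stays a manual decision:

from collections import defaultdict

MAX_PER_SHELF = 2  # rule of thumb from above: no more than 2 disks per raidgroup per shelf

def shelf_of(disk_name: str) -> int:
    # ASSUMPTION: 7-mode names like '2a.33'; disk IDs 16-29 sit on
    # shelf 1, 32-45 on shelf 2, etc., so ID // 16 is a shelf index.
    return int(disk_name.split(".")[-1]) // 16

def check_layout(layout: dict) -> list:
    # `layout` maps raidgroup name -> list of member disk names;
    # building it (parsing "aggr status -r") is left to the reader.
    warnings = []
    for rg, disks in layout.items():
        per_shelf = defaultdict(list)
        for d in disks:
            per_shelf[shelf_of(d)].append(d)
        for shelf, members in per_shelf.items():
            if len(members) > MAX_PER_SHELF:
                warnings.append("%s: %d disks on shelf %d (%s) -- candidate for 'disk copy start'"
                                % (rg, len(members), shelf, ", ".join(members)))
    return warnings

# Example: rg0 has three disks on the same shelf, so it gets flagged.
print("\n".join(check_layout({"rg0": ["2a.16", "2a.32", "2a.33", "2a.34"]})))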