I think it might; as far as I know the filers still use inodes, and it seems like the nesting depth would be what causes the problems here... but since each extra level of indirection multiplies the number of blocks an inode can address (roughly a power of the depth), I'm surprised this would cause performance issues. Adam can probably give you a 100% answer.
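For a rough sense of that scaling (back-of-the-envelope only; the block size and pointer size below are assumptions for illustration, not WAFL's actual on-disk layout):

    # Blocks addressable through indirect blocks grow geometrically with
    # the number of indirection levels (illustrative numbers only).
    BLOCK_SIZE = 4096          # bytes per block (assumed)
    POINTER_SIZE = 8           # bytes per block pointer (assumed)
    PTRS_PER_BLOCK = BLOCK_SIZE // POINTER_SIZE   # 512 pointers per indirect block

    for depth in range(1, 4):
        blocks = PTRS_PER_BLOCK ** depth
        print(f"{depth} level(s) of indirection: {blocks:,} data blocks "
              f"(~{blocks * BLOCK_SIZE / 2**30:.2f} GiB of data)")

The reachable space blows up fast with depth, which is why I wouldn't expect nesting alone to be the bottleneck.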
I'm pretty sure that, just like traditional UNIX filesystems, WAFL has no directory index, so every access requires a linear search through the directory's entries.
That is why maxdirsize exists: it caps the number of entries you can put in a directory, and with that the amount of searching any single lookup has to do.
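To illustrate the linear-search cost I'm describing (a toy model in Python, not how WAFL actually stores or walks directories):

    import random
    import timeit

    # Toy model of an unindexed directory: a name lookup is a linear scan
    # of the entry list, so cost grows with the number of entries.
    entries = [f"file{i:06d}.dat" for i in range(20_000)]
    indexed = set(entries)                 # stand-in for an indexed directory
    targets = random.sample(entries, 100)

    scan_time = timeit.timeit(lambda: [name in entries for name in targets], number=10)
    hash_time = timeit.timeit(lambda: [name in indexed for name in targets], number=10)
    print(f"linear scan: {scan_time:.4f} s   indexed: {hash_time:.4f} s")

The scan cost keeps climbing as the directory grows, which is exactly what maxdirsize is there to cap.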
The guide discusses this.
maxdirsize number
Sets the maximum size (in KB) to which a directory can grow. This is set to 1% of the total system memory by default. Most users should not need to change this setting. If this setting is changed to be above the default size, a notice message will be printed to the console explaining that this may impact performance. This option is useful for environments in which system users may grow a directory to a size that starts impacting system performance. When a user tries to create a file in a directory that is at the limit, the system returns an ENOSPC error and fails the create.
Note that the default should be able to hold considerably more than the 20K entries the OP mentioned. I'd rather not design a process that required such an architecture, but that many entries in a directory wouldn't worry me very much unless the filer were CPU stressed to begin with.
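As a rough sanity check on that claim, using the 1%-of-memory default from the quoted passage (the bytes-per-entry figure is just an assumption for illustration; the real directory format may cost more or less per entry):

    # Default maxdirsize is 1% of system memory, expressed in KB, per the
    # option description above. Bytes per entry is an assumed figure.
    def default_maxdirsize_kb(system_memory_gib: float) -> int:
        return int(system_memory_gib * 2**30 * 0.01 / 1024)

    ASSUMED_BYTES_PER_ENTRY = 64   # guess: name plus per-entry overhead

    for mem_gib in (1, 4, 16):
        limit_kb = default_maxdirsize_kb(mem_gib)
        approx_entries = limit_kb * 1024 // ASSUMED_BYTES_PER_ENTRY
        print(f"{mem_gib:>2} GiB RAM -> maxdirsize {limit_kb:,} KB "
              f"(~{approx_entries:,} entries at {ASSUMED_BYTES_PER_ENTRY} B each)")

Even on a small filer, that works out to well over 20K entries before the limit bites.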