What do you mean by "100,000 files in a directory will cause you performance problems"?
I was told by Network Appliance that their WAFL (Write Anywhere File Layout) filesystem hashes the directory entries internally, so there are *no* scaling issues with regard to directories with many entries in them.
Are you wrong? Or is Network Appliance wrong?
Neither is entirely correct. TR-3006, Accelerated Performance for Large Directories (http://www.netapp.com/technology/level3/3006.html), explains what we do. Populating a directory with n files is an O(n^2) operation on a normal Unix file system; on a filer it's roughly O((n/2048)^2). Technically that's still O(n^2), so it's not true that there are *no* scaling issues, but it isn't a serious problem until you get really big, significantly bigger than 100,000 files.
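To see where the O(n^2) comes from, here is a minimal sketch (not NetApp's actual implementation, and not WAFL's on-disk format): a classic Unix directory checks each new name against every existing entry with a linear scan, while a hashed directory only scans the one bucket the name hashes into. The bucket count of 2048 is borrowed from the figure in the reply above; the exact constant-factor savings in the real filer may differ from this toy model.

```python
# Toy model: cost of populating a directory with n unique names,
# measured in name comparisons. Not NetApp code; illustration only.

def populate_linear(names):
    """Classic Unix-style directory: each create scans all existing
    entries to check for a duplicate, giving ~n^2/2 total comparisons."""
    entries = []
    comparisons = 0
    for name in names:
        for existing in entries:   # linear duplicate check
            comparisons += 1
            if existing == name:
                break
        else:
            entries.append(name)
    return comparisons

def populate_hashed(names, buckets=2048):
    """Hashed directory: only the chain in the matching bucket is
    scanned, so each lookup touches roughly n/buckets entries."""
    table = [[] for _ in range(buckets)]
    comparisons = 0
    for name in names:
        chain = table[hash(name) % buckets]
        for existing in chain:     # scan just one bucket's chain
            comparisons += 1
            if existing == name:
                break
        else:
            chain.append(name)
    return comparisons

names = [f"file{i}" for i in range(5000)]
print(populate_linear(names))   # n*(n-1)/2 = 12,497,500 comparisons
print(populate_hashed(names))   # orders of magnitude fewer
```

Both routines are still quadratic in n for a fixed bucket count; hashing only shrinks the constant, which is why the reply notes the scaling issue never fully disappears.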
As noted in another message, there is an issue affecting very large directories that are not currently in memory. It becomes most apparent at around 8MB or larger. I believe a fix is currently in test; contact NetApp Tech Support for details if this is affecting you.
-- Karl Swartz - Technical Marketing Engineer Network Appliance kls@netapp.com (W) kls@chicago.com (H)