Are you wrong? Or is Network Appliances wrong?
Dunno. :-) However, I have noticed problems with large directories on our system, which were causing very poor performance. One person mentioned that it might have been my use of NFSv3 instead of v2, which allegedly has a slower "readdir" call...
No, NFSv3 has the same READDIR call as NFSv2. The problem is that it also has a READDIR+ call, which not only does what READDIR does but, in addition to a bunch of filenames, returns the result of a GETATTR on each of the objects.
That's great for "ls -l" which does a stat() (generating a GETATTR in NFSv2) on each file.
That's not so great for, e.g., netnews, which often lists the contents of a directory solely so it can find which articles really exist. It doesn't care about anything other than the name, but most NFSv3 clients can't figure that out and use READDIR+ anyway, uselessly going out to each inode (potentially a separate disk read for each one) to collect information that won't be used.
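The two access patterns look like this in client code (a minimal sketch; the helper names and the use of "." as the directory are illustrative, not from the original post). The first loop only looks at names, so plain READDIR would be enough; the second stat()s every entry, which is what turns each name into a GETATTR on NFSv2, and what READDIR+ batches up on NFSv3:

```c
#include <dirent.h>
#include <stdio.h>
#include <string.h>
#include <sys/stat.h>

/* Name-only scan, as netnews does to see which articles exist.
 * Only d_name is examined -- no per-file attributes needed, so
 * READDIR alone would suffice on the wire. */
int count_entries(const char *dir)
{
    DIR *d = opendir(dir);
    if (d == NULL)
        return -1;

    int n = 0;
    struct dirent *e;
    while ((e = readdir(d)) != NULL) {
        if (strcmp(e->d_name, ".") != 0 && strcmp(e->d_name, "..") != 0)
            n++;                    /* uses the name, nothing else */
    }
    closedir(d);
    return n;
}

/* "ls -l" style scan: the stat() on each entry is what generates
 * a GETATTR per file (NFSv2), or what makes READDIR+ pay off (NFSv3). */
long total_size(const char *dir)
{
    DIR *d = opendir(dir);
    if (d == NULL)
        return -1;

    long total = 0;
    struct dirent *e;
    while ((e = readdir(d)) != NULL) {
        char path[4096];
        struct stat st;
        snprintf(path, sizeof path, "%s/%s", dir, e->d_name);
        if (stat(path, &st) == 0 && S_ISREG(st.st_mode))
            total += st.st_size;    /* needs attributes -> GETATTR */
    }
    closedir(d);
    return total;
}
```

A client that could tell these two loops apart could skip READDIR+ for the first one; the point above is that most NFSv3 clients can't, and issue READDIR+ either way.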
This has nothing to do with the size of the directory, except for the fact that you have more files to look at. If you looked at the same number of files spread over a bunch of directories with a few dozen files each instead of one huge directory, the problem would be the same.
--
Karl Swartz - Technical Marketing Engineer        Network Appliance
kls@netapp.com (W)        kls@chicago.com (H)