If you have to deal with millions of objects in filesystems, I highly recommend subdirectories. Look at your nfs_hist output: first run nfs_hist -z to zero the stats, then count to 30 and run nfs_hist again. It prints a histogram of all NFS ops and how long they took, in millisecond buckets. I'd bet lookup is taking a very long time. When dealing with a large number of objects, sensible directory structures are key.
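To illustrate the "sensible directory structures" point, here is a minimal sketch (my own, not from the original mail) of a common hashed fan-out layout; the function name, hash choice, and two-level/two-character fan-out are assumptions, not anything Blake specified:

```python
import hashlib
import os


def shard_path(root, name, depth=2, width=2):
    """Map a flat object name to a hashed subdirectory path,
    e.g. 'photo123.jpg' -> root/<ab>/<cd>/photo123.jpg.

    This keeps any single directory small, so per-directory
    lookups stay cheap even with millions of total objects.
    (Hypothetical helper for illustration only.)
    """
    digest = hashlib.md5(name.encode("utf-8")).hexdigest()
    parts = [digest[i * width:(i + 1) * width] for i in range(depth)]
    return os.path.join(root, *parts, name)


# Same name always maps to the same shard, so readers and
# writers agree on the location without any extra index.
p = shard_path("/data", "photo123.jpg")
```

With a 2-level, 2-hex-character fan-out you get 65,536 leaf directories, so even 100 million objects average only ~1,500 entries per directory.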
-Blake
On 10/22/07, Adrian Ulrich toaster@blinkenlights.ch wrote:
> I had 25 TB of data and many directories of 100,000 files; when I try "ls -all" it takes far too long ..... enough time to go get a coffee :)
Just a hint:
'ls' sorts its output, so you won't see anything until the getdents loop has finished.
You could try 'find' instead; it should be a lot faster than ls (because it doesn't sort), but it will still take some time to complete.
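The streaming-vs-sorting difference Adrian describes can be sketched in a few lines (my own example, using Python's os.scandir rather than the actual ls/find binaries): entries are yielded as the kernel returns them from getdents, so a consumer can start working immediately instead of waiting for the whole listing to be read and sorted.

```python
import os
import tempfile

# Build a small throwaway directory to demonstrate on.
tmp = tempfile.mkdtemp()
for i in range(5):
    open(os.path.join(tmp, "file%d" % i), "w").close()

# os.scandir() streams entries in whatever order the filesystem
# returns them (no sorting) -- the same reason 'find' starts
# printing output long before a sorted 'ls' would.
names = []
with os.scandir(tmp) as it:
    for entry in it:
        names.append(entry.name)

print(len(names))
```

Note the order of `names` is filesystem-dependent; a sorted listing would have to buffer everything first, which is exactly the delay seen with `ls` on a 100,000-entry directory.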
Regards, Adrian