Hi, we have over 20 webservers running a web application and serving all their data off a Netapp NFS mount. The Netapp CPU normally sits at around 20-40%, and NFS response is good.
Today we hit what appears to have been a bug of some sort. All of a sudden, with no apparent increase in client connections, the F760 CPU went to 80-90% and the load on all the webservers rose sharply (from approx 0.5-1.5 up to 15-20). The site response went down the drain (20+ seconds for a page that normally takes under 1 second).
It appeared to be caused by the application doing readdirs (along with other operations: read/write/getattr) on a specific directory, which at the time held about 70,000 files. We fixed the problem by disabling the readdirs within the application and reducing the number of files in that directory to about 45,000.
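For anyone wanting to get a feel for the cost locally, here's a rough sketch (plain Python on a local filesystem, so it only illustrates the pattern; on NFS every per-file getattr can be a network round trip, which is what makes it so much worse). File counts and the helper name are just for illustration, not from our setup:

```python
import os
import tempfile
import time

def time_readdir(n_files):
    """Create a flat directory with n_files entries, then time one full
    readdir pass plus a stat() of every entry -- roughly what a
    readdir + getattr workload looks like from the client side."""
    with tempfile.TemporaryDirectory() as d:
        for i in range(n_files):
            open(os.path.join(d, "f%d" % i), "w").close()
        start = time.perf_counter()
        for entry in os.scandir(d):
            entry.stat()  # one getattr-style call per directory entry
        return time.perf_counter() - start

# Compare a small directory against a much larger one.
small = time_readdir(100)
large = time_readdir(10_000)
print("100 files: %.4fs, 10000 files: %.4fs" % (small, large))
```

Locally the growth looks roughly linear; over NFS the per-entry latency dominates, so a 70,000-file directory hurts far more than 700 times a 100-file one.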
We don't know exactly which fix (stopping the readdirs or removing the files) did the trick, but afterwards the Netapp CPU dropped back to normal, the webservers were happy, and the site was responsive again.
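A longer-term workaround we're considering (a common pattern, sketched here with a hypothetical helper, not something from our current application) is to hash filenames into nested subdirectories so that no single directory ever accumulates tens of thousands of entries:

```python
import hashlib
import os

def hashed_path(base, filename, levels=2):
    """Map a filename into nested subdirectories derived from a hash of
    its name (e.g. foo.dat -> base/ac/7d/foo.dat), so files spread
    evenly and each directory stays small.
    Illustrative helper -- names and layout are assumptions."""
    digest = hashlib.md5(filename.encode()).hexdigest()
    # Take `levels` two-hex-character slices of the digest as subdirs.
    parts = [digest[i * 2:i * 2 + 2] for i in range(levels)]
    return os.path.join(base, *parts, filename)

print(hashed_path("/data", "session_12345.dat"))
```

With two levels of 256 subdirectories each, 70,000 files average to about one file per leaf directory, so any single readdir stays cheap.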
It appears that the combination of all the operations on the large directory was causing the NFS clients to hang and the Netapp CPU to max out, although other NFS operations were still performing at a reasonable speed at the same time (i.e., the whole Netapp was not locked up).
Any ideas on a bug or limitation, on either the Linux or the Netapp side, with regard to large (70,000+ file) directories?
Info: Netapp F760C ONTAP 6.1.2R3
Linux 2.4.20 (Red Hat). Mount options: rw,hard,intr,vers=3,proto=tcp,rsize=32768,wsize=32768
Cheers, Chris