One of the main advantages of the filer is that it supports directory hashing, which speeds up lookups greatly. This has been a feature since 2.1; there's a white paper on the web site that talks about it. You are definitely better off having 30,000 files in one directory on a filer than having them on a local UFS disk...
Some rules of thumb to follow:
- Configure your clients so all email is delivered through one central mailhost (which then writes to the mail spool over NFS). The other clients can read the spool and rewrite their own mailboxes over NFS, but delivery by the local agent (usually /bin/mail) is best left to a single machine.
We're about to implement a system which uses several central mail servers, all sharing access to the same mailboxes. How have people handled this? The two options I've come up with so far are to use procmail as the local delivery agent, or to install qmail, which stores each message as a single file in a per-user mail directory.
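The appeal of the one-file-per-message layout is that delivery needs no locking at all: the writer creates the message under a unique name in a tmp/ subdirectory and then renames it into new/, and rename() is atomic even over NFS. A minimal sketch of that delivery step (illustrative only; qmail's real algorithm does additional stat-and-retry collision handling):

```python
import os
import socket
import time

def maildir_deliver(maildir, message):
    """Deliver one message maildir-style: write it under a unique
    name in tmp/, then rename() it into new/.  Because rename() is
    atomic, readers never see a partial message and no lock file
    is needed -- which is exactly what makes this safe over NFS."""
    # Unique name built from time, pid, and hostname so that two
    # servers delivering to the same spool cannot collide.
    unique = "%d.%d.%s" % (time.time(), os.getpid(), socket.gethostname())
    tmp = os.path.join(maildir, "tmp", unique)
    new = os.path.join(maildir, "new", unique)
    with open(tmp, "w") as f:
        f.write(message)
        f.flush()
        os.fsync(f.fileno())   # make sure the data reached the server
    os.rename(tmp, new)        # atomic publish into new/
    return new
```

A reader then simply scans new/, moving messages to cur/ once seen; there is never a shared file that two delivery agents append to concurrently.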
The first approach isn't too much different from what we're doing at the moment on a single server, but I'm somewhat wary of trusting any sort of locking across NFS. The second involves migrating from sendmail to qmail, which I'm not overly keen to do. It would be straightforward enough if all of our end-user email access were restricted to POP3 (qmail includes its own POP3 server), but we have a considerable number of IMAP users to worry about...
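The wariness about locking over NFS is well founded: open() with O_EXCL is not atomic over NFSv2, so a naive dot-lock can be taken by two hosts at once. The usual workaround is to create a uniquely named file and hard-link it to the lock name, then verify the link count, since link() either succeeds exactly once or fails, even if the NFS reply is lost. A sketch of that technique (names and timeouts here are my own, not from any particular MTA):

```python
import os
import socket
import time

def nfs_safe_lock(lockfile, timeout=30):
    """Take a dot-lock without relying on O_EXCL (unsafe over
    NFSv2).  Create a uniquely named file, try to hard-link it to
    the lock name, then check the unique file's link count: if it
    is 2, the link succeeded and we hold the lock, even if the
    link() reply itself was lost and retried."""
    unique = "%s.%d.%d" % (socket.gethostname(), os.getpid(), int(time.time()))
    tmp = lockfile + "." + unique
    open(tmp, "w").close()
    deadline = time.time() + timeout
    try:
        while time.time() < deadline:
            try:
                os.link(tmp, lockfile)
            except OSError:
                pass                          # may be a lost reply; verify below
            if os.stat(tmp).st_nlink == 2:
                return True                   # we hold the lock
            time.sleep(1)                     # someone else holds it; retry
        return False
    finally:
        os.unlink(tmp)                        # lockfile link (if any) remains

def nfs_safe_unlock(lockfile):
    os.unlink(lockfile)
```

Even with this, stale locks from crashed clients still need a timeout policy, which is one more reason the maildir-style rename scheme is attractive.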
Are there any other alternatives?