Filesystem layout
We have about 5000 user home directories and several (maybe 10) other logical areas that we will be migrating. Total disk space used initially will be small, about 35 GB. My guess is that it would be best to keep the user directories in a logically separate area (from, say, the Alpha binaries) because of differences in default quotas and in snapshot and backup policies. Does this sound reasonable, or would it be better to lump them together?
In general, bigger volumes are easier to manage, because you have fewer of them.
Q-trees are a good tool for managing space within a single volume. You can assign different default quotas in different q-trees, and many people also do backup on a q-tree basis.
(Q-trees used to be called "quota trees," since they let you assign a quota to a top-level subtree in the filesystem, but we've since added other functionality as well, such as control over UNIX-style versus NT-style file security, so now we just call them q-trees.)
However, snapshot schedules do apply on a per-volume basis, so if you keep everything in a single volume, you will need to come up with one snapshot schedule that satisfies all users.
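As a rough illustration, the /etc/quotas entries for a single home volume split into q-trees might look something like the lines below. The volume and q-tree names are made up, and the exact column layout and supported quota targets vary between Data ONTAP releases, so treat this as a sketch rather than something to copy verbatim:

    #Quota Target            type                   disk   files
    *                        tree@/vol/home         -      -      # track space used by every q-tree in /vol/home
    *                        user@/vol/home/users   200M   -      # default per-user quota inside the "users" q-tree
    *                        user@/vol/home/staff   2G     -      # more generous default inside the "staff" q-tree
    /vol/home/alpha_bin      tree                   10G    -      # cap on the q-tree holding the Alpha binaries

Backups can then be run against /vol/home/users, /vol/home/staff, and so on individually, even though they all live in one volume.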
Large directory constraints
Also, for the user areas, what are the practical constraints on the number of files in a directory? I am most used to AdvFS, where large directories can be slow to open and read, for example when a user tries tab completion on a filename in a directory with 1000 entries.
Will I see the same kind of behavior with WAFL, and if so, does anyone have a good suggestion for breaking up the user space? We were thinking of handling this purely through directory structure (nothing at the logical-partition level).
We did quite a bit of performance work on large directories back when NetCom still had all of their users' mailboxes in one large /usr/spool/mail. At 10K users (this was a *long* time ago) it started getting slow, and at 30K names it really sucked. Somewhere between 30K and 100K users, we went in and reworked our directory code to use a hashing scheme that's much more efficient, although NetCom (and other large ISPs) eventually moved away from the super-giant mail spool directory.
There are definitely performance issues with super-giant directories, but I don't think of 1K or even 10K entries as super giant. On the other hand, applications that *sort* all the names in a directory as they display them (like "ls", for instance, or "echo *") can get slow for reasons that have nothing to do with the file server itself. You might want to experiment.
Dave
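One common way to split the user space through directory structure alone, independent of WAFL or any particular filesystem, is to hash each username into a fixed set of bucket directories so that no single directory grows past a few hundred entries. Below is a minimal sketch in Python; the home path and bucket count are made up for illustration:

    import hashlib
    import os

    HOME_ROOT = "/vol/home/users"   # hypothetical q-tree holding the home directories
    NUM_BUCKETS = 64                # roughly 80 users per bucket for 5000 users

    def home_dir(username: str) -> str:
        """Map a username to a bucket subdirectory, e.g. /vol/home/users/1a/jsmith."""
        digest = hashlib.md5(username.encode()).hexdigest()
        bucket = "%02x" % (int(digest[:2], 16) % NUM_BUCKETS)
        return os.path.join(HOME_ROOT, bucket, username)

    def create_home(username: str) -> str:
        """Create the home directory for a new user and return its path."""
        path = home_dir(username)
        os.makedirs(path, exist_ok=True)
        return path

    if __name__ == "__main__":
        for user in ("alice", "bob", "carol"):
            print(user, "->", home_dir(user))

Hashing keeps the buckets evenly filled as users come and go, whereas splitting on the first letter of the username tends to produce very uneven directory sizes.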