2006-12-12T19:00:29 Mike Sphar:
Also feel free to tell me how using AFS is crazy in general and I should just accept my stale file handles.
Using AFS is crazy in general and you should just accept your stale file handles.
It's possible to make constructive use of AFS, but seriously, beware: it's not a POSIX-compatible filesystem.
You can't have hard links between different directories.
AFS makes extensive use of volume mounts, which don't act quite like Unix mounts. Directory link counts (nlink) are wrong on them, so you have to use find -noleaf or find will fail to traverse them. Volume mounts are also implemented weirdly differently from Unix filesystem mounts: under the hood they're symlinks whose target is '#' followed by the volume name, so you can create one with ln -s, but you can't remove one with rm, because stat reports that a volume mount is a genuine directory. rmdir won't delete it either; you have to use fs rmmount.
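To see why the bad link counts matter: find's "leaf optimization" assumes a conventional Unix directory has nlink equal to 2 plus its number of subdirectories, and stops stat()ing entries once that count is exhausted. A quick sketch of the invariant AFS volume mounts break (run on an ordinary local filesystem):

```python
# find(1)'s leaf optimization assumes st_nlink for a directory is
# 2 + (number of subdirectories): one link for ".", one for the entry
# in the parent, and one ".." per subdir. AFS volume mounts report
# counts that violate this, which is why you need `find -noleaf` there.
import os
import tempfile

with tempfile.TemporaryDirectory() as d:
    for name in ("a", "b", "c"):
        os.mkdir(os.path.join(d, name))
    nlink = os.stat(d).st_nlink
    expected = 2 + 3  # "." + parent entry + one ".." per subdirectory

print(nlink, expected)
```

On most traditional Unix filesystems the two numbers match; when they don't (AFS mounts, and some modern filesystems too), find without -noleaf silently skips directories.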
dev/ino is composed by squeezing 64 bits (an internal 32-bit volume number and a 32-bit file node number) into a 32-bit inode number, and all the AFS volumes in an installation share the same dev. So dev+ino is not unique.
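The pigeonhole consequence is easy to demonstrate. This is a hypothetical packing function, not AFS's actual scheme, but any function from 64 bits down to 32 must collide:

```python
# Hypothetical packing (for illustration only, not AFS's real algorithm):
# squeezing a 32-bit volume number and a 32-bit vnode number into a single
# 32-bit inode value guarantees that distinct files can share an inode.
def fake_afs_ino(volume: int, vnode: int) -> int:
    # One plausible scheme: shift-and-xor, truncated to 32 bits.
    return ((volume << 16) ^ vnode) & 0xFFFFFFFF

a = fake_afs_ino(0x00010000, 0x00000001)
b = fake_afs_ino(0x00010001, 0x00010001)
print(hex(a), hex(b), a == b)  # two different files, same inode number
```

Since every volume also reports the same dev, tools that deduplicate by (st_dev, st_ino), like tar or du, can be fooled.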
The semantics of ownership and permissions are very, very different: there's a whole new user and group id space (pts), ACLs are applied to directories and influence everything below them (you can set ACLs only on directories, not on files), and most of the Unix permission bits (owner/group/world rwx) are ignored.
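A toy model of that lookup rule, with hypothetical paths, names, and rights strings (the "rlidwka" letters follow AFS conventions, but the walk-up logic here is a simplification, in real AFS every directory carries its own ACL):

```python
# Hypothetical sketch: rights live on directories only; a file's effective
# rights are whatever its enclosing directory's ACL grants the user.
import posixpath

# Hypothetical ACL table keyed by directory path; files never appear here.
acls = {
    "/afs/cell/proj": {"bennett": "rlidwka", "system:anyuser": "rl"},
}

def effective_rights(path: str, user: str) -> str:
    # Walk upward until a directory with an ACL is found; the file's own
    # Unix mode bits play (almost) no part in the decision.
    d = posixpath.dirname(path)
    while d and d != "/":
        if d in acls:
            return acls[d].get(user, "")
        d = posixpath.dirname(d)
    return ""

print(effective_rights("/afs/cell/proj/notes.txt", "bennett"))        # rlidwka
print(effective_rights("/afs/cell/proj/notes.txt", "system:anyuser")) # rl
```

The practical upshot matches the text: chmod on a file mostly does nothing, and moving a file between directories can silently change who can read it.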
In the interest of efficiently maintaining distributed cache coherence, AFS actually only writes the data back to the server on close, and then only if your tokens are still valid. A common experience when first learning AFS is to have some long-running process lose all its output by outliving its tokens.
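The write-on-close behavior has a familiar local analogue: a block-buffered file handle. This sketch uses ordinary Python buffering as an analogy only; with AFS the failure mode is worse, because the close() itself fails once the tokens expire and the output is simply gone:

```python
# Analogy only: like AFS shipping data to the server at close(), a
# block-buffered Python file keeps writes client-side until it flushes.
import os
import tempfile

fd, path = tempfile.mkstemp()
os.close(fd)

f = open(path, "w")           # default: block-buffered
f.write("hours of output\n")  # sits in the local buffer, not in the file
size_before_close = os.path.getsize(path)
f.close()                     # the data only reaches the file here
size_after_close = os.path.getsize(path)

print(size_before_close, size_after_close)  # 0, then nonzero
os.unlink(path)
```

Hence the standard advice for long jobs on AFS: renew your tokens periodically, or write to local disk and copy the results in afterwards.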
There are some big upsides, too, if you can live with all the craziness.
You can set up seriously high-availability readonly AFS.
Because AFS uses Kerberos authentication plus its internal ACL system, you get a better grade of security.
AFS's caching can make it usable over limited bandwidth or poor latency, and can boost the amount of client traffic a given server can handle.
-Bennett