Just throwing a question out there, curious to hear people's thoughts or experiences. Every time I end up dealing with hundreds of stale file handles because of a server move or change, I become increasingly annoyed by the stateless nature of NFS and think to myself, "Maybe this time I'll finally start seriously looking at AFS."
Setting aside the many other ways an AFS deployment could be complicated, I wonder how, if at all, a NetApp can be part of an AFS deployment?
Also feel free to tell me how using AFS is crazy in general and I should just accept my stale file handles.
Mike,
NetApps have been used successfully as back-ends (over iSCSI, I think) to AFS server front-ends. One problem that has come up is that the front-ends are less reliable than Data ONTAP, and that reliability is a common reason folks go to NetApp in the first place, in my experience.
NetApp acquired Spinnaker a while back, and I believe many of Spinnaker's developers came from the AFS world. I've heard something to the effect that the benefits of AFS will bleed through into future products, but I don't recall further details.
In general, I find AFS a little clunky, partly from lack of experience with its commands (both mine and the broader user community's), and perhaps more importantly because it's an add-on kernel module, which OS kernel developers do not hold dear to heart during ABI changes. Past changes to both Solaris and Linux created breakage that had to be dealt with, and which wouldn't have existed if the filesystem were an integral part of the environment and went through the proper testing phases. Neither of these "problems" is really AFS's fault, but that doesn't make them go away, either...
Roy
If you're tired of the stale file handles, why not virtualize your storage behind an Acopia switch?
Shawn
2006-12-12T19:00:29 Mike Sphar:
Also feel free to tell me how using AFS is crazy in general and I should just accept my stale file handles.
Using AFS is crazy in general and you should just accept your stale file handles.
It's possible to make constructive use of AFS, but seriously, beware: it's not a POSIX-compatible filesystem.
You can't have hard links between different directories.
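For instance (cell and paths hypothetical), a hard link within one directory works, but across directories the client refuses it; the exact error text depends on your ln, but in my experience it comes back as EXDEV:

    $ cd /afs/example.com/user/mike      # hypothetical cell and home volume
    $ ln dir1/file dir1/file.link        # same directory: fine
    $ ln dir1/file dir2/file.link        # different directory: refused
    ln: creating hard link `dir2/file.link': Invalid cross-device link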
AFS makes extensive use of volume mount points, which don't act quite like Unix mounts: the nlink counts on directories are off across them, so you have to run find with -noleaf or it will fail to traverse them. They're also implemented strangely. Under the hood a volume mount point is a symlink whose target names the volume (something like '#volname'), so you can actually create one with ln -s, but you can't remove one with rm, because stat reports a mount point as a genuine directory. rmdir won't delete it either; you've got to use fs rmmount.
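A minimal sketch of the round trip, with a hypothetical volume proj.tools and cell path /afs/example.com:

    $ fs mkmount -dir /afs/example.com/tools -vol proj.tools  # create the mount point
    $ fs lsmount -dir /afs/example.com/tools                  # shows the '#proj.tools' target
    $ rm /afs/example.com/tools                               # fails: stat says it's a directory
    $ rmdir /afs/example.com/tools                            # fails too: it's not a plain empty dir
    $ fs rmmount -dir /afs/example.com/tools                  # the one that actually works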
The dev/ino pair is composed by squashing 64 bits (an internal 32-bit volume number plus a 32-bit internal file node number) into a 32-bit inode number, and all the AFS volumes in an installation share the same dev. So dev+ino is not unique.
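You can watch the collisions happen with GNU find (tree path hypothetical) by printing each file's dev:ino pair and looking for pairs that occur more than once:

    # %D = device number, %i = inode number (GNU find)
    $ find /afs/example.com/some/tree -printf '%D:%i\n' | sort | uniq -d

Anything uniq -d prints is a dev+ino pair shared by two or more distinct files, which breaks tools that rely on dev+ino to detect duplicates or cycles.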
The semantics of ownership and perms are very, very different: there's a whole new user and group ID space (pts), ACLs are applied to directories and influence everything below them (you can't set ACLs on files, only dirs), and most of the Unix permission bits (the owner/group/other rwx set) are ignored.
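The commands look roughly like this (group and path names hypothetical); note the ACL goes on a directory, never on a file:

    $ pts creategroup mike:friends                  # pts keeps its own user/group space
    $ pts adduser -user jane -group mike:friends
    $ fs setacl -dir ~/shared -acl mike:friends rl  # rl = read + lookup, set on the dir
    $ fs listacl -dir ~/shared                      # show the effective ACL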
In the interest of efficiently maintaining distributed cache coherence, AFS actually only writes the data back on close, and then only if your tokens are still valid. A common experience when first learning AFS is to have some long-running process lose all its output by outliving its tokens.
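So check your token lifetime before kicking off anything long-running. How you refresh depends on whether your cell uses the old kaserver or Kerberos 5; both paths sketched below:

    $ tokens          # list current AFS tokens and when they expire
    $ kinit && aklog  # Kerberos 5 cell: fresh TGT, then a fresh AFS token
    $ klog            # kaserver-style cell: one step does both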
There are some big upsides, too, if you can live with all the craziness.
You can set up seriously high-availability read-only AFS.
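The usual recipe is read-only replicas of a volume on two or more fileservers; clients fail over among the replica sites on their own. A sketch with hypothetical server, partition, and volume names:

    $ vos addsite fs1.example.com /vicepa proj.tools  # register a read-only site
    $ vos addsite fs2.example.com /vicepa proj.tools  # and a second, on another server
    $ vos release proj.tools                          # push the RW volume out to the RO clones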
AFS uses Kerberos authentication plus its internal ACL system, so you get a better grade of security.
AFS's caching can make it usable over limited bandwidth or high latency, and can boost the amount of client traffic a given server can handle.
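The client cache is tunable at runtime; two of the relevant knobs (size hypothetical, and the second needs root):

    $ fs getcacheparms        # current cache usage versus configured size
    # fs setcachesize 524288  # resize the cache, in 1K blocks (about 512 MB here)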
-Bennett