hi all, I've noticed that on my NetApp F740 there are a lot of files named .nfs*, for example .nfs0048c2f20000043a. Most are empty, but some are very big, and in some particular cases I can't delete them without moving the file subdirectory by subdirectory until I arrive at the Unix physical disk.
What about these files ? :)
--
Fabio Pietrosanti -- System Administrator -- NaiF @ IrcNet
fabio@telemail.it
.---------------------------------------------------
| Matrice s.r.l.            Tel +39 02 67382595
| Via Copernico, 8          Fax +39 02 6700894
| Milano                    http://www.matrice.it
|---------------------------------------------------
| PGP Key (DSS) on http://naif.itapac.net/naif.asc
.---------------------------------------------------
"Fabio" == Fabio Pietrosanti fabio@telemail.it writes:
Fabio> hi all, i notice that on my NetApp F740 there are a lot of files
Fabio> named .nfs*, for example .nfs0048c2f20000043a. most are empty, but
Fabio> some are very big, and in some particular cases i can't delete them
Fabio> without moving the file subdirectory by subdirectory until i arrive
Fabio> at the unix physical disk.

Fabio> What about these files? :)
.nfs files are created by a client host when one process on that host deletes a file while another process on the same host is still holding the file open. This allows the delete to appear to succeed for the first process without causing the second process to begin getting stale NFS file handles. It is a hack, but it is the only way to simulate UFS semantics on NFS. The client will normally delete the .nfs file once the remaining process holding it open closes it. However, if the client crashes, you get left with a stray .nfs file on the filer.
Note that if more than one host is involved (e.g., a process on host A is holding a file open over NFS while a process on host B deletes that file over NFS), the process on host A will get a stale file handle.
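The UFS semantics being simulated can be seen on any local filesystem (a sketch; on a local disk the name simply vanishes, whereas over NFS you would see a .nfsXXXXXXXX entry appear in the directory instead):

```shell
# On a local (UFS/ext-style) filesystem, an unlinked-but-open file stays
# readable until the last descriptor closes; NFS clients fake this by
# renaming the file to .nfsXXXXXXXX instead of really removing it.
tmp=$(mktemp -d)
echo hello > "$tmp/somefile"
exec 3< "$tmp/somefile"   # hold the file open on fd 3
rm "$tmp/somefile"        # the name is gone...
cat <&3                   # ...but the data is still there: prints "hello"
exec 3<&-                 # closing the last descriptor frees the blocks
rm -rf "$tmp"
```

Run the same thing in a directory NFS-mounted from the filer and an `ls -a` between the `rm` and the `exec 3<&-` should show the .nfs* alias.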
j.
A good thing to do is to set up a weekly cron job that runs a find against the filer and deletes any .nfs* files more than a week old. It would be nice if this were integrated into the filer's own cron facility, though.
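A sketch of such a job, run from a client that mounts the filer (the /mnt/filer mount point and the one-week cutoff are assumptions to tune for your site):

```shell
# Weekly cron job on an NFS client of the filer.  Deletes stray .nfs*
# files that have not been modified for more than 7 days.  The mount
# point below is an assumption; adjust it for your environment.
FILER=${FILER:-/mnt/filer}
if [ -d "$FILER" ]; then
    find "$FILER" -name '.nfs*' -mtime +7 -exec rm -f {} \;
fi
```

In a crontab this would be the find line itself, e.g. scheduled early Sunday morning when few clients hold files open.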
Bruce
On Thu, 20 Apr 2000, Jay Soffian wrote:
"Fabio" == Fabio Pietrosanti fabio@telemail.it writes:
Fabio> hi all, Fabio> i notice that on my NetApp F740 there are a lot Fabio> of file named .nfs***, for example .nfs0048c2f20000043a . Fabio> most are empty, but there are someone very big, and in some Fabio> particular case, i can't delete it, without moving the file Fabio> subdirectory by subdirectory since i arrive on the unix phisical disk. Fabio> What about these files ? :)
.nfs files are created by a clienthost when one process on the clienthost deletes a file while another process on the clienthost is still holding the file open. This allows the delete to appear to succeed for one process w/o causing the the process to begin getting stale nfs file handles. It is a hack, but it is the only way to simulate UFS semantics on NFS. The clienthost will normally delete the .nfs file once the remaining process holding the file open closes it. However, if the clienthost crashes, you get left with a .nfs file on the filer.
Note that if more than one host is involved (e.g, process on host a is holding a file open over NFS, while process on host b deletes that over NFS), process a will get a stale file handle.
j.
have you tried a "fuser" command on the files to see what processes are using the file?
-----------
Jay Orr
Systems Administrator
Fujitsu Nexion Inc.
St. Louis, MO
rrjl@stl.nexen.com (Jay Orr) writes:
> have you tried a "fuser" command on the files to see what processes
> are using the file?
That's certainly the thing to do if unlinking the .nfs* file just causes it to pop up under another .nfs* alias, as in that case you can be sure your client kernel thinks the file is open. [And assuming you have an fuser command or equivalent in your OS, of course: otherwise try installing lsof.]
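For example (a sketch; the path below is made up, and the /proc scan is a Linux-only fallback for hosts with neither tool installed):

```shell
#!/bin/sh
# Identify which client-side process is holding a .nfs file open.
# $1 is the full path to the .nfs* file; the default is a hypothetical
# example path, not a real one.
f="${1:-/mnt/filer/home/.nfs0048c2f20000043a}"
if [ -e "$f" ]; then
    if command -v fuser >/dev/null 2>&1; then
        fuser -v "$f"                  # lists PIDs with the file open
    elif command -v lsof >/dev/null 2>&1; then
        lsof "$f"
    else
        # Linux-only fallback: scan every process's open descriptors.
        for fd in /proc/[0-9]*/fd/*; do
            [ "$(readlink "$fd" 2>/dev/null)" = "$f" ] \
                && echo "held open by PID $(echo "$fd" | cut -d/ -f3)"
        done
    fi
fi
```

Remember this only works on the client that holds the file open; run from any other host (or the filer itself) it will find nothing.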
sirbruce@ix.netcom.com (Bruce Sterling Woodcock) writes:
> A good thing to do is set up a weekly cron job to do a find on the
> filer and delete any .nfs* files over a week old. Would be nice if this
> was integrated into the filer's own cron facility, though.
Something along the lines of Solaris's /usr/lib/fs/nfs/nfsfind, in other words. Unfortunately this is quite contentious, because one can take issue with
1. The time interval: why one week rather than one hour or one year? It depends on what one expects the clients to be doing.
2. What sort of time? Solaris uses -mtime but I could easily argue for -atime or -ctime or some combination of them all.
3. The test on the names. Nothing in any of the NFS standards mandates this behaviour, let alone the form of the names to be used. As has been pointed out already, it's a fudge, and although .nfs* will cover most client implementations more or less descended from the original Sun one, it is an unfortunately broad pattern to match on.
In case you think that this is being hypercritical: a few years ago one of our users was mystified by the way that his file .nfs_domain kept disappearing, when it was "essential for the functioning of my environment" as he told us...
The security problems associated with root-privileged programs traversing directory trees that can simultaneously be modified by users with evil intentions should also be kept in mind.
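Putting those caveats together, a more defensive variant might look like this (a sketch only; the mount point, the cutoff, and the hex-digits-only name pattern are all assumptions to adjust, and the root-traversal race just mentioned still applies to any find-and-delete scheme):

```shell
# -xdev:     stay on one filesystem, don't wander into other mounts
# -type f:   skip directories a user may have named .nfs-something
# -name:     .nfs followed by a hex digit, so a file like the
#            .nfs_domain above is left alone
# -mtime/-atime +7: require both timestamps to be a week stale
#            (the interval is site policy, not a standard)
FILER=${FILER:-/mnt/filer}
if [ -d "$FILER" ]; then
    find "$FILER" -xdev -type f -name '.nfs[0-9a-f]*' \
         -mtime +7 -atime +7 -exec rm -f {} \;
fi
```

None of this removes the fundamental objections; it just narrows the blast radius of a wrong guess.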
Chris Thompson
University of Cambridge Computing Service,   Email: cet1@ucs.cam.ac.uk
New Museums Site, Cambridge CB2 3QG,         Phone: +44 1223 334715
United Kingdom.
> In case you think that this is being hypercritical: a few years ago one
> of our users was mystified by the way that his file .nfs_domain kept
> disappearing, when it was "essential for the functioning of my
> environment" as he told us...
And you'd point him to the new user information booklet, which clearly covered this behavior, as well as the deletion of old core files that weren't renamed, etc. :)
You have a good point, though... whatever parameters the find uses have to be tuned to your particular environment.
Bruce