We haven't narrowed down the problem entirely yet, but I thought I'd ask anyway....
During a linking stage, when a bunch of .o files are being archived into a .a library, the build grabs only about 2/3 of the files in the directory into the archive, and that's it. A subsequent `rm` of *.o sometimes removes only those exact 2/3 of the files and sometimes removes all of them, even though the archive contains only 2/3 of them.
It seems as if the directory inode is showing only the oldest 2/3 of the files, and depending on how long the `ar` and `rm` commands take to execute, the `rm` removes either 2/3 or all of them.
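For reference, the sequence boils down to something like the sketch below (the directory path and library name are hypothetical stand-ins for our build tree):

```
# Build directory is NFS-mounted from the filer (path is hypothetical).
cd /net/filer/vol0/build/objs

# Archive the object files; the shell expands *.o from the client's view of
# the directory, so only the ~2/3 of the files the client "sees" make it in.
ar rv libfoo.a *.o
ar t libfoo.a | wc -l    # counts roughly 100 members instead of ~150

# Clean up; depending on timing this removes either the same ~2/3 or all
# of the .o files, even though the archive only got ~2/3 of them.
rm -f *.o
```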
I don't suspect it's the filer; watching an `ls` of the directory from another machine (Solaris) shows all the files there. There are only about 150 .o files to archive.
Is anyone aware of any client-side NFS caching problems with Alpha or HP700 boxes? The HP is running HP-UX 10.20 and appears to be using only NFS V2; the Alpha is running Digital UNIX V4.0D and appears to be using only NFS V3 (NFS versions taken from `nfsstat -h hostname` on the filer). The filer has nfs.tcp.enable set to off and is running NetApp Release 5.2.1.
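For reference, the checks on the filer console look roughly like this (client hostnames are hypothetical):

```
filer> nfsstat -h hp700-client    # per-client stats; shows only V2 calls from the HP
filer> nfsstat -h alpha-client    # shows only V3 calls from the Alpha
filer> options nfs.tcp.enable     # currently off, so both clients are mounting over UDP
filer> version                    # NetApp Release 5.2.1
```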
Until next time...
Todd C. Merrill
The Mathworks, Inc., 24 Prime Park Way, Natick, MA 01760-1500
508-647-7792 / 508-647-7012 FAX
tmerrill@mathworks.com
http://www.mathworks.com
---
There is a known bug in Digital UNIX that is probably causing your problem.
If the DU client reads a directory and the directory is then extended, on a subsequent read the client may not issue the readdir/readdirplus requests needed to pick up the new entries.
An engineer at DEC (Compaq) provided me with the bug number - QAR 55783 - and told me that it has made it into their patch pool.
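A quick way to check whether a client is hitting this (paths and file names below are hypothetical): read the directory on the suspect client, extend it from another machine, then re-read it on the suspect client and compare.

```
# On the suspect DU (or HP) client: snapshot the directory listing.
ls /net/filer/vol0/build/objs > /tmp/before

# From another machine (e.g. the Solaris box), extend the same directory:
#   touch /net/filer/vol0/build/objs/zz_newfile.o

# Back on the suspect client: re-read and compare.
ls /net/filer/vol0/build/objs > /tmp/after
diff /tmp/before /tmp/after    # on an affected client the new entry may not
                               # show up until the cached directory times out
```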
I don't know if the same bug exists in HPUX 10.20.
Rajesh
Todd C. Merrill wrote:
Anyone aware of any client-side NFS caching problems with Alpha or HP700 boxes?
We saw quite a similar problem, about a year and a half back, with our HP NFS clients (and I believe the Digitals too). The symptom was: a program creates a new file, closes it, and then tries to open() that very same file, but the open fails with "no such file...". When this happens, I can log on to a different HP NFS client and the new file is indeed there. I used the brute-force approach and added "-noac" to all my NFS mounts! I never checked whether any of the 100+ HP-UX 10.20 patches installed on the system since then fixed it.
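In case it helps anyone else, the workaround amounts to adding the noac option to the NFS mounts; a rough sketch for HP-UX 10.x follows (server and mount-point names are hypothetical). Note that noac turns off attribute caching entirely, so every attribute check goes back to the server and builds get noticeably slower.

```
# Remount by hand with attribute caching disabled (HP-UX syntax):
umount /mnt/build
mount -F nfs -o noac filer:/vol/vol0/build /mnt/build

# Or make it permanent via the options field in /etc/fstab:
#   filer:/vol/vol0/build  /mnt/build  nfs  noac  0  0
```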
I am open to any other "proper" solution(s).
--- Philip Thomas
Philip Thomas wrote:
Anyone aware of any client-side NFS caching problems with Alpha or HP700 boxes?
We saw quite a similar problem, about a year and a half back, with our HP NFS clients (and I believe the Digitals too). The symptom was: a program creates a new file, closes it, and then tries to open() that very same file, but the open fails with "no such file...". When this happens, I can log on to a different HP NFS client and the new file is indeed there.
We had the same problem. There were HP patches that fixed it, but our standard load contains so many patches now that I'm afraid I cannot recall which ones fixed the problem. A call to HP tech support should get you the correct patch list.
Graham