Thanks to everyone for the help on this.
Here is what we have found to date with large file support and Linux.
* You must be running the Red Hat 2.4.2 kernel (this includes large file support in ext2).
* You must have an application that supports large files (per Aaron's idea we modified the gzip.c header and recompiled gunzip; a sketch of that kind of change follows).
* You need to be mounting with NFS v3 for files > 2 GB.
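For anyone trying the same thing, here is a minimal sketch of that kind of gzip header edit. This is the conventional large-file enablement, not the exact patch we applied, and the defines just have to appear before any system header is included:

    /* Conventional large-file enablement for C sources such as gzip's
     * headers. Illustrative only, not the exact change we made.
     * These defines must come before any system #include, or they can
     * be passed on the compile line instead (-D_FILE_OFFSET_BITS=64). */
    #define _LARGEFILE_SOURCE       /* expose fseeko()/ftello() */
    #define _FILE_OFFSET_BITS 64    /* 64-bit off_t; open() maps to open64() */

    #include <stdio.h>
    #include <sys/types.h>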
Thanks to all for the help again.
-----Original Message-----
From: Chris Thompson [mailto:cet1@cus.cam.ac.uk]
Sent: Thursday, March 29, 2001 3:37 AM
To: toasters@mathworks.com
Cc: dhubbard@websense.com; Aaron.Sims@netapp.com; crandall@matchlogic.com
Subject: Re: File too large ?
Just concentrating on the Solaris parts of this thread...
Dan Hubbard dhubbard@websense.com wrote:
We are using ONTAP 6.0.1 and are NFS mounting Solaris and Red Hat Linux boxes. We are in the process of creating some large text files and cannot gunzip or create files greater than 2 GB. We have bumped up our maxfiles and set the option nfs.v2.df_2gb_lim to ON, without any luck.
You don't say which version of Solaris you are running. I don't believe the version of gunzip included in Solaris 8 is large-file-aware.
Charles Randall crandall@matchlogic.com wrote:
- NFS v2 doesn't support files larger than 2 GB. Switch to NFS v3. Only very recently has the Linux NFS implementation supported NFS v3 (see below).

- Can you work with files larger than 2 GB using the same tools on a local file system? If so, NFS v3 will help. If not, you'll have to find out how to "large file enable" your applications. On Solaris, look at the "largefile" and "lfcompile" manpages.
It's perhaps worth adding that large file support (i.e. > 2 GB) started with Solaris 2.6. Some people run amazingly ancient versions of Solaris...
Aaron Sims Aaron.Sims@netapp.com wrote: [ ... NFS v2 comments omitted ... ]
I've seen this before (at least on Solaris) using NFSv3. Some utilities in Solaris (like 'ls', 'tar', etc.) will use the 32-bit versions of system calls like open(), stat(), read(), etc. In order to operate on a file larger than 2 GB, your program must use open64(), stat64(), etc. I've seen Solaris' 'ls' command cough up hairballs because the file is larger than 2 GB. I ran 'truss ls -l largefile' and noticed that 'ls' was using the wrong system calls. However, on that same system, if I used '/usr/ucb/ls' it worked just fine. Using 'truss' on '/usr/ucb/ls' showed that it was using the correct system calls.
This rather amazes me. Solaris /usr/bin/ls is supposed to have been large-file-aware right from the start of Solaris 2.6, and I've never seen it behave as described above since then.
I *have* observed problems with Solaris ls(1) applied to files larger than 2^40 bytes (the UFS limit) but smaller than 2^43 bytes (the WAFL limit). Of course, even on a filer such files are necessarily sparse! That was some time ago; I'll try to repeat the experiment with up-to-date versions of both Solaris and ONTAP.
I finally found out how to force your application (I'll assume you have source code) to use the 64-bit versions of the system calls. You need to define _FILE_OFFSET_BITS 64 in your source code. That will force your app (under Solaris) to use open64(), stat64(), etc. You might want to look through your gunzip code to see whether it is defined.
This is all described in the lfcompile(5) man page, of course.
Note also that if you are compiling in 64-bit architecture mode (e.g. -xarch=v9 with the SUNWspro compilers) in Solaris 7 or later, then the largefile support comes along with it automatically.
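To put that into a concrete example (same placeholder filename as above): with _FILE_OFFSET_BITS set to 64 before any include, or -D_FILE_OFFSET_BITS=64 on the compile line, the ordinary-looking calls are transparently redirected to their 64-bit counterparts.

    /* With _FILE_OFFSET_BITS defined to 64 before any include, off_t
     * becomes 64-bit and open()/fstat() are quietly mapped to
     * open64()/fstat64().  Placeholder filename again. */
    #define _FILE_OFFSET_BITS 64
    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/stat.h>
    #include <unistd.h>

    int main(void)
    {
        struct stat sb;                             /* st_size is now 64-bit */
        int fd = open("largefile.dat", O_RDONLY);   /* mapped to open64() */

        if (fd < 0) { perror("open"); return 1; }
        if (fstat(fd, &sb) == 0)                    /* mapped to fstat64() */
            printf("size = %lld bytes\n", (long long)sb.st_size);

        close(fd);
        return 0;
    }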
I'm not sure that upgrading to 6.0.1R3 will help if you're using NFSv2.
I don't think it will help if you are using NFSv3 either!
Chris Thompson
University of Cambridge Computing Service,
New Museums Site, Cambridge CB2 3QG,
United Kingdom.
Email: cet1@ucs.cam.ac.uk
Phone: +44 1223 334715