On Thu, 1 Apr 1999, Brian Meek wrote:
I'm not sure this is related, but if you do an nfsstat -m on the client, does it show the rsize to be 8192 or 32768?
Brian Tao and I are exchanging e-mails on this subject, but let me comment on xfer sizes quickly.
1. Until 5.3.1 (the forthcoming release), I believe (all preliminary info subject to change :-) we limit UDP transfer sizes to 8KB by default.
We finally tracked down a couple of bugs in FDDI - one of ours that was being aggravated by a vendor's FDDI NIC driver bug - that resulted in an interface hang in the face of 32KB UDP transfer sizes.
If you do not have FDDI cards installed in your filer, you can, on 4.X and later releases, crank the NFS/UDP transfer size to 32768 with the following command:
options nfs.udp.xfersize 32768
Once clients bind to 32KB transfer sizes with a mount, they will always want that transfer size - until they unmount. So if you make this change, ADD THE OPTIONS LINE TO YOUR /etc/rc file.
Now, do not enable 32KB transfer sizes in the presence of FDDI/CDDI NICs until you are on the 5.3.1 release or later, I suggest.
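As a quick sketch of the client side (the hostname, export path, and mount point here are just made up for illustration), on a Solaris client you would ask for the larger sizes on the mount:

    mount -o vers=3,proto=udp,rsize=32768,wsize=32768 filer:/vol/vol0/home /mnt/home

and nfsstat -m on that client then shows the parameters for each NFS mount, so you can confirm whether you actually got 32768 or were negotiated back down to 8192 (Brian's question above).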
2. But, let me throw out some caveats.
Some client OSes can bind to large transfer sizes, but their old 100 Mbit/s ethernet hardware is not up to handling 32KB read returns. So tread lightly if you have dusty machines sitting around that look like big honking clients but are actually seriously network challenged.
3. For 100BaseT Ethernet, I would suggest 5.0.1 or later, which couples a set of ethernet driver performance improvements with the transfer size tweak. The changes primarily affect read performance.
BUT PLEASE!!!! Refer to your support site or contacts for the recommended release for your particular configuration and application!
A lot of you probably know more than I do about R releases and such.
4. NFS/TCP is off by default. We see it yields a 10+% drop in aggregate throughput performance compared to NFS/UDP.
Please refer to the following pairs of SFS97 results for numbers on this effect:
http://www.specbench.org/osg/sfs97/results/sfs97-980805-00002.html
http://www.specbench.org/osg/sfs97/results/sfs97-980805-00001.html
http://www.specbench.org/osg/sfs97/results/sfs97-981026-00026.html
http://www.specbench.org/osg/sfs97/results/sfs97-981026-00025.html
Most vendors have not submitted TCP results?
On a switched, clean LAN, NFS/UDP should be okay to run.
On an unclean LAN (old switches unable to keep up with aggregate loads of more than a few 100BaseT connections, or faulty wiring), you will have performance problems with both NFS/UDP and NFS/TCP. I suggest you resolve any problems in your network instead of hoping that NFS/TCP will save you.
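If you want to compare for yourself, the transport is a client-side mount option (the hostname and paths below are again just placeholders) - e.g. on Solaris:

    mount -o vers=3,proto=tcp,rsize=32768,wsize=32768 filer:/vol/vol0/home /mnt/tcp
    mount -o vers=3,proto=udp,rsize=32768,wsize=32768 filer:/vol/vol0/home /mnt/udp

Mount the same export both ways and run your workload against each; on a clean switched LAN I would expect the UDP mount to come out ahead.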
Now, on an F540, and more so on F630 or better filers, you should have no problem seeing 10-11+ MB/s sequential reads over NFS from a capable client (a Sun Ultra 1 is a good minimum) - with 32KB transfer sizes and 5.0.1 or later. If you are on a non-isolated net and have interference from other clients, you will see confusing results.
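A crude way to sanity check that number (the file and mount point names are placeholders): read a file larger than the client's RAM so you are not just measuring the client's cache, e.g.

    time dd if=/mnt/home/bigfile of=/dev/null bs=32k

and divide the bytes moved by the elapsed time. Do it on an otherwise idle client and net, or the result will not mean much.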
It sounds like the NFS server dictates the parameters. I can
connect with 32K rsize/wsize using UDP transport between Solaris servers, but only 8K between a Sun and a NetApp, and 16K between a Sun and a FreeBSD machine acting as the server. TCP transport allows me to use 32K block sizes with any server.
Yeah, we never saw the FDDI hang with NFS/TCP and 32KB transfers, so we left that on by default. It was a puzzler.
Questions?