I dunno for sure. It might just be for safety's sake, in case we install into an error-prone network environment. With an Ethernet MTU of 1500, an 8192-byte NFS block takes 6 IP fragments to deliver. If any single fragment is dropped along the way, for any reason, the entire 8192-byte NFS block has to be retransmitted. If you use an NFS transfer size of 32768 bytes, it takes *22* IP fragments to deliver the payload, and if any of those fragments is lost along the way it is much more overhead on the server, client, and network to handle the retransmission. Maybe even more importantly, there is a substantial "quiet time" imposed by NFS in the face of timeouts and retransmissions that will hurt throughput considerably.
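(For the curious, here's the rough arithmetic, assuming a 1500-byte MTU and roughly 1480 bytes of data per IP fragment:

  8192 / 1480  ~  5.5  -> 6 fragments per 8K block
  32768 / 1480 ~ 22.1  -> 22 full fragments plus a small final one per 32K block

so losing any one fragment forces the whole block to be resent.)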
Most recently installed networks and cabling are pretty error-free (but you have to keep an eye on them in a consistent fashion, like checking the oil in your car). Any non-zero values in "ierr" and "oerr" in the output of netstat -i should, in my opinion, be investigated.
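Something along these lines is what I'd look at on a Solaris client, for example (the exact column names vary between OSes, and the interface name here is just a placeholder):

  client% netstat -i
  Name  Mtu   Net/Dest    Address   Ipkts    Ierrs  Opkts    Oerrs  Collis  Queue
  hme0  1500  client-net  client    1234567  0      2345678  0      0       0

Anything other than 0 under Ierrs or Oerrs is worth chasing down.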
There is a very good (but old) paper written by one of the original designers of Ethernet (Rich Seifert) which has a description of NFS over UDP and Ethernet. Good reading,
http://wwwhost.ots.utexas.edu/ethernet/pdf/techrept13.pdf
The main page is very good,
http://wwwhost.ots.utexas.edu/ethernet/
and Charles Spurgeon's Ethernet book is even better. --Gil
-----Original Message-----
From: Brian Meek [mailto:bmeek@flycast.com]
Sent: Friday, April 02, 1999 6:25 PM
To: toasters@mathworks.com
Subject: RE: Slow sequential disk reads on F740
Setting options nfs.udp.xfersize 32768 helped a lot; now we're seeing ~25 MB/sec transfers to and from the filer. Why isn't this the default?
Brian
-----Original Message-----
From: Smith, Gil [mailto:gil.smith@netapp.com]
Sent: Friday, April 02, 1999 1:38 PM
To: toasters@mathworks.com
Subject: FW: Slow sequential disk reads on F740
-----Original Message-----
From: Smith, Gil
Sent: Friday, April 02, 1999 4:18 PM
To: 'toaster@mathworks.com'
Subject: RE: Slow sequential disk reads on F740
Hi,
You might want to try,
filer> options nfs.udp.xfersize 32768
which is a hidden option (doesn't show up when you type options with no arguments).
This might be especially useful in Gigabit Ethernet environments.
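If you want to check the current setting first, I believe you can still read a hidden option back by name, something like (the value shown here is just what I'd expect the old default to be):

  filer> options nfs.udp.xfersize
  nfs.udp.xfersize             8192

and then make sure the client mount actually requests a matching rsize/wsize (hostname and path are just examples):

  client# mount -o vers=3,proto=udp,rsize=32768,wsize=32768 filer:/home /mnt/filer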
Regards, Gil
-----Original Message-----
From: Brian Tao [mailto:taob@risc.org]
Sent: Thursday, April 01, 1999 5:02 PM
To: Brian Meek
Cc: toasters@mathworks.com
Subject: RE: Slow sequential disk reads on F740
On Thu, 1 Apr 1999, Brian Meek wrote:
I'm not sure this is related, but if you do an nfsstat -m on the client, does it show the rsize to be 8192 or 32768?
Interesting... I was forcing rsize and wsize to 32768 on the mount command line just to be pedantic, but this is what I'm seeing (output edited for brevity):
# mount -o vers=3,proto=udp,rsize=32768,wsize=32768 e2.j:/ /j
# mount
/j on e2.j:/ vers=3/proto=udp/rsize=32768/wsize=32768/remote on Thu Apr 1 16:51:27 1999
# nfsstat -m
/j from e2.j:/
 Flags: vers=3,proto=udp,sec=sys,hard,intr,link,symlink,acl,rsize=8192,wsize=8192,retrans=5
When reading data from our NetApps with our E450s, we don't see anywhere near the same performance as when we read data from our E450s with other E450s, using NFSv3 over UDP. I've always attributed this to the client code forcing an rsize of 8192 when connecting to the filer, while allowing an rsize of 32768 when connecting to anything else (Sun, DEC, SGI). Has anyone else seen this?
It sounds like the NFS server dictates the parameters. I can connect with a 32K rsize/wsize using UDP transport between Solaris servers, but only 8K between a Sun and a NetApp, and 16K between a Sun and a FreeBSD machine acting as the server. TCP transport lets me use 32K block sizes with any server.
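For what it's worth, forcing TCP on the mount is an easy way to test that, e.g. (hostname and export path are just placeholders):

  client# mount -o vers=3,proto=tcp,rsize=32768,wsize=32768 server:/export /mnt
  client# nfsstat -m /mnt

and the Flags: line from nfsstat -m will show whatever block size was actually negotiated.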