Setting options nfs.udp.xfersize 32768 helped a lot; now we're seeing ~25 MB/sec transfers to and from the filer. Why isn't this the default?
Brian
-----Original Message-----
From: Smith, Gil [mailto:gil.smith@netapp.com]
Sent: Friday, April 02, 1999 1:38 PM
To: toasters@mathworks.com
Subject: FW: Slow sequential disk reads on F740
-----Original Message-----
From: Smith, Gil
Sent: Friday, April 02, 1999 4:18 PM
To: 'toaster@mathworks.com'
Subject: RE: Slow sequential disk reads on F740
Hi,
You might want to try,
filer> options nfs.udp.xfersize 32768
which is a hidden option (doesn't show up when you type options with no arguments).
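For example, you can still query and set it by name (illustrative transcript; I'm assuming the old default is 8192 based on what this thread reports, and on some ONTAP releases you may also need the line in /etc/rc for the setting to survive a reboot):

filer> options nfs.udp.xfersize
nfs.udp.xfersize             8192
filer> options nfs.udp.xfersize 32768
filer> options nfs.udp.xfersize
nfs.udp.xfersize             32768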
This might be especially useful in Gigabit Ethernet environments.
Regards, Gil
-----Original Message-----
From: Brian Tao [mailto:taob@risc.org]
Sent: Thursday, April 01, 1999 5:02 PM
To: Brian Meek
Cc: toasters@mathworks.com
Subject: RE: Slow sequential disk reads on F740
On Thu, 1 Apr 1999, Brian Meek wrote:
I'm not sure this is related, but if you do an nfsstat -m on the client, does it show the rsize to be 8192 or 32768?
Interesting... I was forcing rsize and wsize to 32768 on the mount command line just to be pedantic, but this is what I'm seeing (output edited for brevity):
# mount -o vers=3,proto=udp,rsize=32768,wsize=32768 e2.j:/ /j
# mount
/j on e2.j:/ vers=3/proto=udp/rsize=32768/wsize=32768/remote on Thu Apr 1 16:51:27 1999
# nfsstat -m
/j from e2.j:/
 Flags: vers=3,proto=udp,sec=sys,hard,intr,link,symlink,acl,rsize=8192,wsize=8192,retrans=5
When reading data from our NetApps with our E450's, we don't see anywhere near the performance we get when reading data from one E450 with another E450 over NFSv3/UDP. I've always attributed this to the client code forcing an rsize of 8192 when connecting to the filer, while allowing an rsize of 32768 when connecting to anything else (Sun, DEC, SGI). Has anyone else seen this?
It sounds like the NFS server dictates the parameters. I can connect with 32K rsize/wsize using UDP transport between Solaris servers, but only 8K between a Sun and a NetApp, and 16K between a Sun and a FreeBSD machine acting as the server. TCP transport lets me use 32K block sizes with any server; a sketch of the TCP case follows below.
--
Brian Tao (BT300, taob@risc.org)
"Though this be madness, yet there is method in't"
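A sketch of the TCP case, modeled on the UDP transcript above (abridged and illustrative, not a verbatim capture):

# mount -o vers=3,proto=tcp,rsize=32768,wsize=32768 e2.j:/ /j
# nfsstat -m
/j from e2.j:/
 Flags: vers=3,proto=tcp,sec=sys,hard,intr,link,symlink,acl,rsize=32768,wsize=32768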
On Fri, 2 Apr 1999, Brian Pawlowski wrote:
I am in the process of writing this all up.
Setting options nfs.udp.xfersize 32768 helped a lot; now we're seeing ~25 MB/sec transfers to and from the filer. Why isn't this the default?
In some sick, twisted way I remembered a quote I once read in an OS design book. I quote:
A write system call may be broken down into several RPC writes, because each NFS write or read can contain up to 8K of data and UDP packets are limited to 1500 bytes.
The author was obviously assuming an Ethernet medium when he stated that UDP packets can be at most 1500 bytes (strictly, 1500 is the Ethernet MTU; larger UDP datagrams get fragmented at the IP layer). More importantly, the 8K limit probably came from an earlier standard: NFS version 2 capped each read or write at 8192 bytes of data. I wouldn't be surprised if someone at NAC used that as a safe default.
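Back-of-the-envelope arithmetic (mine, not the book's): an 8K NFS/UDP reply plus RPC headers is roughly 8300 bytes, which fragments into ceil(8300/1480) = 6 IP fragments on a 1500-byte MTU, while a 32K reply needs ceil(32900/1480) = 23. Losing any single fragment forces the client to retransmit the entire RPC, so large UDP transfer sizes degrade badly on lossy networks, and a conservative 8K default starts to look pretty reasonable.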
Tom