-----Original Message-----
From: Smith, Gil
Sent: Friday, April 02, 1999 4:18 PM
To: 'toaster@mathworks.com'
Subject: RE: Slow sequential disk reads on F740
Hi,
You might want to try:
filer> options nfs.udp.xfersize 32768
which is a hidden option (doesn't show up when you type options
with no arguments).
This might be especially useful in Gigabit Ethernet environments.
Regards,
Gil
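A sketch of how you might apply that option and then check the result from the client side (the mount point and server name here mirror the mount example later in this thread and are otherwise assumptions; the filer step needs console access and the client steps need root):

```shell
# On the filer console (hidden option, so it won't appear in plain `options` output):
options nfs.udp.xfersize 32768

# On the Solaris client, remount so the transfer sizes are renegotiated, then verify:
umount /j
mount -o vers=3,proto=udp,rsize=32768,wsize=32768 e2.j:/ /j
nfsstat -m /j
```

Whether the client then actually reports rsize=32768 still depends on the client's own NFS-over-UDP cap.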
-----Original Message-----
From: Brian Tao [mailto:taob@risc.org]
Sent: Thursday, April 01, 1999 5:02 PM
To: Brian Meek
Cc: toasters@mathworks.com
Subject: RE: Slow sequential disk reads on F740
On Thu, 1 Apr 1999, Brian Meek wrote:
>
> I'm not sure this is related, but if you do a nfsstat -m on the client, does
> it show the rsize to be 8192 or 32768?
Interesting... I was forcing rsize and wsize to 32768 on the mount
command line just to be pedantic, but this is what I'm seeing (output
edited for brevity):
# mount -o vers=3,proto=udp,rsize=32768,wsize=32768 e2.j:/ /j
# mount
/j on e2.j:/ vers=3/proto=udp/rsize=32768/wsize=32768/remote on Thu Apr 1 16:51:27 1999
# nfsstat -m
/j from e2.j:/
Flags:
vers=3,proto=udp,sec=sys,hard,intr,link,symlink,acl,rsize=8192,wsize=8192,retrans=5
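One quick way to spot this mismatch without eyeballing the whole Flags line is to pull out just the negotiated sizes (a sketch, using the sample output above):

```shell
# Flags line as reported by `nfsstat -m` on the Solaris client above
flags='vers=3,proto=udp,sec=sys,hard,intr,link,symlink,acl,rsize=8192,wsize=8192,retrans=5'

# Split the comma-separated option list and keep only rsize/wsize
echo "$flags" | tr ',' '\n' | grep -E '^(rsize|wsize)='
# prints:
# rsize=8192
# wsize=8192
```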
> When reading data from our NetApps using our E450's we don't see
> anywhere near the same performance as when we read data from one
> E450 with another E450 using NFSv3 over UDP. I've always attributed
> this to the client code forcing an rsize of 8192 when connecting to
> the filer, and allowing an rsize of 32768 when connecting to
> anything else (Sun, DEC, SGI). Has anyone else seen this?
It sounds like the NFS server dictates the parameters. I can
connect with a 32K rsize/wsize over UDP transport between Solaris
servers, but only 8K between a Sun and a NetApp, and 16K between a
Sun and a FreeBSD machine acting as the server. TCP transport lets
me use 32K block sizes with any server.
--
Brian Tao (BT300, taob(a)risc.org)
"Though this be madness, yet there is method in't"