Interestingly enough, I had a similar situation a few weeks ago on my F760. Here is what I did to solve the problem. It turns out that if your filer has a gig-e card and your clients are on 100Mb cards (or certain gig-e cards, particularly on Solaris), the filer blasts data at the 100Mb card faster than it can keep up. This seemed odd to me, and I was curious about duplex settings as well, so I ruled out network issues by transferring the same file (made with mkfile, to whatever size you want) via ftp. FTP was more than 30x faster than NFS, so obviously something was wrong in the NFS layer.
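For reference, the comparison I ran looked roughly like this (hostnames and paths are just placeholders; on Linux you would substitute dd for mkfile):

    # make a 200M test file on the client
    mkfile 200m /tmp/testfile

    # time it over NFS (/mnt/filer is wherever the filer is mounted)
    time cp /tmp/testfile /mnt/filer/testfile

    # time the same file over ftp to rule out the wire itself
    ftp filer
    ftp> bin
    ftp> put /tmp/testfile

If the ftp put runs dramatically faster than the cp, the network and duplex settings are probably fine and the problem is up in the NFS layer.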
I mounted the same directory in a different spot, using UDP, but dropped rsize and wsize down to 8K. In NFSv2 the default was 8K; NFSv3 now defaults to 32K instead of 8K (I believe this changed in certain filer versions, but I am not sure). The difference was dramatic: a 35-minute file transfer dropped to under 2 minutes in some cases.
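On Solaris the remount was along these lines (the filer name and volume path here are made up):

    # Solaris: force UDP and 8K read/write transfer sizes
    mount -F nfs -o proto=udp,vers=3,rsize=8192,wsize=8192 \
        filer:/vol/vol0/home /mnt/test

    # Linux equivalent
    mount -t nfs -o udp,rsize=8192,wsize=8192 filer:/vol/vol0/home /mnt/test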
I have similar machines with 100Mb cards that don't have the problem, which makes it more confusing. Under normal circumstances you *do* want the larger chunk sizes, and indeed testing showed that dropping most gig-e machines down to 8192 cost some performance. If you can live with dropping it on every host, set it to 8192 on the filer and mounts will never be anything but 8K or smaller; clients cannot negotiate higher than the filer is set. Since the global automount map was used by all sorts of systems, I ended up putting local automount entries on the specific boxes that had the problem.
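If I remember the option names right, the filer-side cap is set from the console like this (double-check these against your ONTAP rev before trusting me):

    filer> options nfs.udp.xfersize 8192
    filer> options nfs.tcp.xfersize 8192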
The best approach is to try a manual mount first with -o proto=udp,rsize=8192,wsize=8192 and experiment with the settings from there. But first rule out any network issues (like duplex mismatches) with ftp/scp transfers.
Jerry
--- Ed Marsh edmarsh@ti.com wrote:
If this is a new installation, you should check your network connection for a speed/duplex conflict on the switch port. I recommend turning off auto-negotiation and hard-setting both the switch port and the filer to 100Mbps full duplex.
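On the filer that is an ifconfig mediatype setting, and on a Solaris client ndd does it; roughly like this (e0 and hme are just examples, your interface names will differ):

    filer> ifconfig e0 mediatype 100tx-fd

    # Solaris client with an hme interface
    ndd -set /dev/hme adv_autoneg_cap 0
    ndd -set /dev/hme adv_100fdx_cap 1

Make sure the switch port is hard-set to match, or you just trade one mismatch for another.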
-Ed
devnull@adc.idt.com wrote:
Dear All,
I have an F740 running ONTAP 6.1.1R2 with one Gigabit and two 100Mbps Ethernet ports.
I am seeing very poor performance on this filer from Sun and Linux clients running Solaris 8 and RH 7.3 (2.4.20-ac2).
I am trying to dd a 200M file, and I would like to check whether the bottleneck is poor disk or poor network performance. Is there an easy way to check disk performance while I dd the file to a particular volume? (The NetApp has 3 volumes, and all 3 give equally bad results.)
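For concreteness, the dd I am running looks like the below, and I am guessing sysstat on the filer console is the place to watch while it runs, though I am not sure how to read the output:

    # on the client: write a 200M file to the volume under test
    dd if=/dev/zero of=/mnt/filer/testfile bs=64k count=3200

    # meanwhile on the filer console: one-second samples showing
    # network throughput and disk utilization side by side
    filer> sysstat -x 1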
I don't think the Solaris and Linux clients are configured wrong; I have an F810 that, for now, seems to be performing pretty close to my expectations.
NFS has TCP enabled on both filers.
Is there anything unusual to look out for when doing a snoop, etc.?
Can anyone also confirm whether the default mount on Solaris clients is TCP or UDP? I am inclined to think it is UDP if not specified in either the NIS maps or the mount options.
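For what it's worth, nfsstat on the client should at least show what an existing mount actually negotiated:

    # Solaris client: show proto and the rsize/wsize in use for a mount
    nfsstat -m /mnt/filer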
Thanks,