Well, set the option, and try one client first (umount and remount its file systems with "-o vers=3,proto=udp,rsize=32768,wsize=32768" etc..., so you're certain of what you're getting). If you find it helps performance, then go to town. :-)
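For reference, the remount on a Linux client might look roughly like this (the server name `filer:/vol/vol0` and mount point `/mnt/netapp` are placeholders, not from the thread):

```shell
# Unmount the NFS filesystem first (assumes nothing is using it)
umount /mnt/netapp

# Remount forcing NFSv3 over UDP with 32k transfer sizes
mount -o vers=3,proto=udp,rsize=32768,wsize=32768 filer:/vol/vol0 /mnt/netapp

# Verify which options were actually negotiated with the server
nfsstat -m
```

The `nfsstat -m` check matters because the server can silently negotiate rsize/wsize down from what you asked for.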
Even with the settings as above, I'm seeing performance on the Solaris client that's worse by roughly a factor of 2-3 compared to the Linux clients. Can anyone explain that? I also used Postmark to compare performance.
Yeah. I've seen this as well on 2.7/sparc. Wall-clock times were outrageously high for both read and write operations; when I tried the same on 2.6 or Linux, performance was perfectly acceptable.
I did see a problem with excessive reassembly timeouts, which was peculiar, as I definitely didn't drop any packets (though it's possible that something munged a checksum; I didn't have that much time to look at it :) ). Saw it on two 760s, both attached via Intel gigabit cards to a Catalyst 6509, with the hosts on 100fdx (same VLAN, no routing). Even after looking at this, though, the timeouts still didn't account for the times we were seeing.
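On the Solaris side, a quick way to look for reassembly trouble is the IP MIB counters plus a packet trace; a sketch (the interface name hme0 is a placeholder):

```shell
# Dump IP statistics and pull out the fragment reassembly counters;
# ipReasmFails climbing relative to ipReasmReqds suggests reassembly timeouts
netstat -s | egrep 'ipReasm'

# Watch the NFS traffic itself to see whether the fragments arrive at all
snoop -d hme0 port 2049
```

With 32k UDP transfers each NFS request/reply is split into many IP fragments, so losing (or mangling) any one fragment throws away the whole datagram, which is consistent with the large-packet behavior described below.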
It definitely does have something to do with large packets, so the knee-jerk recommendation from NetApp to set nfs.udp.xfersize to 32k for performance was actually detrimental. Tuning it back down to 8k made everything behave normally again.
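For anyone wanting to try the same tuning, this is set from the filer console (assuming a classic ONTAP-style `options` command; the value is in bytes):

```shell
# On the NetApp console: drop the UDP transfer size back to 8k
options nfs.udp.xfersize 8192

# Read the option back to confirm the new value took effect
options nfs.udp.xfersize
```

An 8k transfer fits in far fewer IP fragments than 32k, which is presumably why it sidesteps the reassembly problems described above.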
..kg..