On Wed 7 Jan, 1930, Brian Tao <taob@risc.org> wrote:
On Tue, 29 Jun 1999, Brian Pawlowski wrote:
I thought I could use a 2 x 400MHz Ultra Enterprise E250 with a new Sun GbE card in it as a killer client, and I find that with V3 32KB UDP packets, the sucker rolls over at ~30 MB/s with 2 CPUs pinned at 99% in system time.
This *sucks*.
I didn't see any satisfactory answers to this the last time.
Ditto.
<snip>
NICs, I'm able to pull about 15 MB/s total. Does the Sun QFE just suck, and should I stick to individual single-port NICs?
Have you tweaked up the high water and low water marks for transmission and reception, out of interest?
What I'm thinking of is that, AFAICR, tcp_xmit_hiwat and tcp_recv_hiwat are set to 8192 (bytes) and the equivalent *lowat's to 2048. Those defaults are predicated on 10Mb/s ethernet networks, not 100Mb/s contentionless networks. This is true in Sol 2.5.1, but I've not checked other Solaris releases.
They can be checked with:
    ndd /dev/tcp tcp_xmit_hiwat
    ndd /dev/tcp tcp_recv_hiwat
    ndd /dev/tcp tcp_xmit_lowat
They can be set higher with the machine up using:
    ndd -set /dev/tcp tcp_xmit_hiwat 32768
    ndd -set /dev/tcp tcp_recv_hiwat 32768
    ndd -set /dev/tcp tcp_xmit_lowat 24576
or with appropriate lines in /etc/system.
UDP can also be checked and set the same way:
    ndd /dev/udp udp_xmit_hiwat
    ndd /dev/udp udp_recv_hiwat
    ndd /dev/udp udp_xmit_lowat

    ndd -set /dev/udp udp_xmit_hiwat 32768
    ndd -set /dev/udp udp_recv_hiwat 32768
    ndd -set /dev/udp udp_xmit_lowat 24576
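These ndd changes don't survive a reboot, so if they turn out to help you'd want to reapply them from a startup script as well. A rough, untested sketch (the S69nddtune name is only an example; the values are the ones suggested above):

    #!/bin/sh
    # Example rc script, e.g. /etc/rc2.d/S69nddtune (name is illustrative
    # only): reapply the larger TCP/UDP watermarks at every boot.
    for proto in tcp udp; do
        ndd -set /dev/$proto ${proto}_xmit_hiwat 32768
        ndd -set /dev/$proto ${proto}_recv_hiwat 32768
        ndd -set /dev/$proto ${proto}_xmit_lowat 24576
    done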
I'm wondering if people seeking higher NFS performance might want to investigate these settings? (I'd be interested in seeing what people have their nfs:nfsv3_nra set to, too. 8)
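nfsv3_nra is a kernel variable rather than an ndd tunable, so checking and setting it looks a bit different. A rough sketch, taking the nfs:nfsv3_nra name above at face value (verify the symbol name on your own box, and treat the value 4 purely as an example):

    # print the current read-ahead count from the running kernel
    echo 'nfsv3_nra/D' | adb -k /dev/ksyms /dev/mem

    # to change it persistently, put a line like this in /etc/system
    # and reboot:
    #     set nfs:nfsv3_nra = 4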
All this is straight from the Sun FDDI 5.0 manual, but we're talking 100Mb/s either way, and I've used these settings before. I'm not in a position to test whether they'll have any benefit, or whether Solaris releases after 2.5.1 choose better defaults to start with.
Perhaps you can check and set me straight either way? I'm guessing, but I think there might be a little mileage in setting them higher still for Gb/s networks. But the faster CPUs and better NIC smarts will offset some of the buffering requirements in your proposed system. At least they *should*. 8)
The FDDI manual goes on to talk about setting the socket options, but that's a whole other kettle of fish.
Using netstat -k <interface> might yield interesting numbers, in particular norcvbuf and noxmtbuf, which are the buffer allocation failure counts (in Sol 2.6 and on, according to Cockcroft at least).
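For example, something along these lines (hme0 is just a stand-in for whichever interface you're driving):

    # show the kstat lines carrying the buffer allocation failure counters
    netstat -k hme0 | egrep 'norcvbuf|noxmtbuf'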
If the above defaults are in place, make a note of the interrupt counts and any info from vmstat 5, mpstat 5 and netstat -k that looks interesting while a dump runs with the settings as they stand, *then* change them and rerun the test. I've got a bit of an axe to grind about the hme and qfe interfaces on Suns, and this would be very interesting to me.
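Something like the sketch below would capture those numbers around a run; do it once with the stock settings and once after the ndd changes, then compare the output files. capture.sh, your_nfs_test and hme0 are all placeholders for your own script name, load generator and interface:

    #!/bin/sh
    # capture.sh -- rough before/after capture.  Pass a tag ("default" or
    # "tuned") so the two runs can be told apart afterwards.
    TAG=${1:-default}
    IF=hme0                          # interface under test (placeholder)

    vmstat 5 > /tmp/vmstat.$TAG &
    VM=$!
    mpstat 5 > /tmp/mpstat.$TAG &
    MP=$!

    your_nfs_test                    # placeholder for the actual NFS load

    kill $VM $MP                     # stop the samplers
    netstat -k $IF > /tmp/netstat-k.$TAG

i.e. "sh capture.sh default", apply the ndd changes, then "sh capture.sh tuned", and diff the /tmp files.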
-- Brian Tao (BT300, taob@risc.org) "Though this be madness, yet there is method in't"
-- End of excerpt from Brian Tao