You've probably already checked this, but from my Tech Support days we always told folks to manually set speed and duplex on both sides. Speed is less of an issue: if the speeds didn't match, you'd be passing far less good data than you're seeing. A duplex mismatch, though, can certainly cause very bad performance on 100 Mb/s interfaces, since the half-duplex side treats the full-duplex side's traffic as a steady stream of collisions.
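If you want to take autonegotiation out of the picture on the Sun side, the usual ndd incantation looks roughly like this (the parameter names are as I remember them for the hme/qfe drivers, so double-check them against your driver docs; the filer has its own knob for this, ifconfig's mediatype option if memory serves):

    # Select qfe0, then see what the link actually negotiated
    # (link_mode: 0 = half duplex, 1 = full; link_speed: 0 = 10, 1 = 100)
    ndd -set /dev/qfe instance 0
    ndd -get /dev/qfe link_mode
    ndd -get /dev/qfe link_speed

    # Force 100 Mb/s full duplex on qfe0: turn off autonegotiation
    # and advertise only 100FDX
    ndd -set /dev/qfe adv_autoneg_cap 0
    ndd -set /dev/qfe adv_100hdx_cap 0
    ndd -set /dev/qfe adv_10fdx_cap 0
    ndd -set /dev/qfe adv_10hdx_cap 0
    ndd -set /dev/qfe adv_100fdx_cap 1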
The other big thing is retransmissions of 32K UDP requests. A 32K UDP datagram gets chopped into twenty-odd IP fragments on Ethernet, and if any one fragment is dropped, the entire 32K request has to be retransmitted after an RPC timeout. Since these are crossover cables I hope that isn't the issue here, but I have seen 32K UDP get really slow on very busy switched networks due to nasty retransmission times.
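If the traffic in question is NFS over UDP, a quick way to check on the Solaris client is the RPC counters, and smaller transfer sizes are the usual workaround. A rough sketch ("filer:/vol/vol0" below is just a placeholder for your actual mount):

    # Client-side RPC stats: a climbing "retrans" count (with low
    # "badxid") usually means requests are being lost on the wire
    nfsstat -c

    # Smaller transfer sizes mean fewer IP fragments per request,
    # so a single dropped fragment costs less
    mount -F nfs -o proto=udp,rsize=8192,wsize=8192 filer:/vol/vol0 /mnt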
Hope this helps.
-- Adam Fox
I didn't see any satisfactory answers to this the last time around, but I'm doing a bit of benchmarking for a killer tape backup server (streaming ~60 MB/s to tape). I have an older Sun E450 with 2 x 250-MHz CPUs and a Quad Fast Ethernet NIC in one of the PCI slots.
qfe0 is directly attached to an idle F740, and likewise qfe1 to another idle F740. Please tell me I should be able to see more than an aggregate 10 MB/s doing dumps over rsh with that setup? I'm shuffling the dump streams off to /dev/null, so local disk speed is not an issue. On another system, a 2 x 300-MHz E450 with two single-port NICs, I'm able to pull about 15 MB/s total. Does the Sun QFE just suck, and should I stick to individual single-port NICs?
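(In case the exact test matters: it's essentially parallel dump streams over rsh, discarded on the client, along these lines. The filer names and dump arguments below are illustrative, and dd is just a convenient way to count what each stream moved, since its final record count times the 64 KB block size gives the bytes transferred:

    # One dump stream per qfe interface, byte-counted by dd
    rsh f740-a dump 0f - /vol/vol0 | dd of=/dev/null bs=64k &
    rsh f740-b dump 0f - /vol/vol0 | dd of=/dev/null bs=64k &
    wait

    # Meanwhile, watch per-interface packet rates every 10 seconds
    # to see whether one qfe port is starving the other
    netstat -I qfe0 10
)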
I want to lead up to a 4 x 450-MHz E420R with Gigabit Ethernet,
and I hope to achieve 50 MB/s or more from eight F740s. Has anyone tried this configuration? -- Brian Tao (BT300, taob@risc.org) "Though this be madness, yet there is method in't"