On Tue 11 Jan, 2000, "Michael S. Keller" <mkeller@mail.wcg.net> wrote:
tkaczma@gryf.net wrote:
On Tue, 11 Jan 2000, Michael S. Keller wrote:
I have one machine restricted to 512-byte transfers because of poor performance at larger transfer sizes.
I think you really have to look into your network.
There's not much to check. It goes in one switch port and out another on the same switch. The interfaces show no errors. I do have the filer trunked (EtherChannel) and my news clients have hand-tuned MAC addresses to reduce contention, since the switch does "dumb" switching based on MAC addresses instead of loads.
There's still something rotten in the state of Denmark if you're having to restrict the packet size so very tightly.
The increase in interrupt servicing and buffer handling at either end, relative to even Ethernet-MTU-sized packets, is about threefold, so seeing *increased* performance as a result points to something seriously wrong somewhere in there.
Out of interest, can you say what clients, what switches and what settings you're using in each? Just in case any of us have seen problems with those vendors before, it might be worth enumerating exactly what client box, interface, OS and patch levels (within reason: hme/qfe, ip, udp, tcp, rpc, nfs, etc. patches) you're using. Ditto for the switch, blades, firmware, etc.
You said earlier that you're using Sol2.6, and the trunking suggests you're using qfe's, presumably in UEx000 boxes, but I'm sticking my neck out here.
You have piqued my curiosity though!
There's still something rotten in the state of Denmark if you're having to restrict the packet size so very tightly.
The increase in interrupt servicing and buffer handling at either end, relative to even Ethernet-MTU-sized packets, is about threefold, so seeing *increased* performance as a result points to something seriously wrong somewhere in there.
Out of interest, can you say what clients, what switches and what settings you're using in each? Just in case any of us have seen problems with those vendors before, it might be worth enumerating exactly what client box, interface, OS and patch levels (within reason: hme/qfe, ip, udp, tcp, rpc, nfs, etc. patches) you're using. Ditto for the switch, blades, firmware, etc.
I welcome input.
I have four news servers: two running bCandid's Cyclone and two running bCandid's Typhoon. The Cyclone pair does high-speed hauling of news; the Typhoon pair provides NNTP service to end users. One of the two Typhoon servers is the one I have set to 512-byte transfers.
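For the curious, that limit is just the rsize/wsize on the NFS mount. A sketch of what the vfstab entry and the equivalent mount command would look like on the restricted machine (the filer name and mount point below are invented; only the options matter):

    # /etc/vfstab line on the 512-byte client (hypothetical names)
    filer1:/vol/news  -  /news  nfs  -  yes  vers=2,proto=udp,rsize=512,wsize=512

    # or by hand
    mount -o vers=2,proto=udp,rsize=512,wsize=512 filer1:/vol/news /news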
All four servers are Sun E250s with 1GB RAM each. Each has only one internal 9GB disk. All run Solaris 2.6 loaded with all recommended patches through late December 1999. All have QFE cards.
The switch is a Cisco Catalyst 5505. More detail would take time.
The filers are F760s running ONTAP 5.3.4. The two filers are clustered, but all but one shelf resides primarily on one filer. Both have quad 10/100 cards.
The filers each have two EtherChannel trunks defined, for a total of four virtual interfaces.
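(For anyone following along at home, trunks like these on ONTAP 5.x are built with vif create, usually from the filer's /etc/rc. A rough sketch only; the interface and vif names below are made up, not the actual config, and assume /etc/hosts entries for the vif addresses:)

    vif create multi trunk0 e3a e3b
    vif create multi trunk1 e3c e3d
    ifconfig trunk0 `hostname`-trunk0 netmask 255.255.255.0 up
    ifconfig trunk1 `hostname`-trunk1 netmask 255.255.255.0 up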
The clients use two of their QFE ports for reaching the filers (note that most traffic goes to only one filer). I hand-tuned MAC addresses to avoid conflicts between the Cyclone servers and between the Typhoon servers, putting a more even load on the filers' physical ports.
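(Aside: on the Solaris side that kind of MAC hand-tuning can be done per interface with ifconfig's ether option. The addresses below are purely illustrative locally administered addresses, not the real ones; the point is just that the low-order bytes differ so the switch's MAC-based hash spreads clients across the trunk members:)

    # on one client, as root (made-up locally administered addresses)
    ifconfig qfe0 ether 2:0:20:0:0:11
    ifconfig qfe1 ether 2:0:20:0:0:12
    # on another client, pick values that hash to different trunk ports
    ifconfig qfe0 ether 2:0:20:0:0:21
    ifconfig qfe1 ether 2:0:20:0:0:22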
I verified all duplex settings long ago. All run full-duplex. All four clients run NFS v2 on UDP.
What more would help your troubleshooting?
tkaczma@gryf.net writes:
On Tue, 11 Jan 2000, Michael S. Keller wrote:
I verified all duplex settings long ago. All run full-duplex. All four clients run NFS v2 on UDP.
How did you do this on the Suns?
Checking full duplex status of hme on Suns? use "ndd -get /dev/hme link_mode". I think it returns `1' for full duplex. The Answerbook on hme explains all this.
Checking NFS is v2 on UDP? "nfsstat -m".
Mounting NFS v2/UDP; "mount -o vers=2,proto-udp ..."
Or did you mean something else?
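Since the clients are on QFE cards rather than hme, roughly the same checks per port would be (a sketch only; instance numbers are whatever your qfe ports happen to be):

    # select the qfe instance, then read its negotiated mode and speed
    ndd -set /dev/qfe instance 0
    ndd -get /dev/qfe link_mode     # 1 = full duplex, 0 = half
    ndd -get /dev/qfe link_speed    # 1 = 100Mbit, 0 = 10Mbit
    # confirm the mounts really are v2/UDP (and see the rsize/wsize in use)
    nfsstat -m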
On Wed, 12 Jan 2000, Luke Mewburn wrote:
tkaczma@gryf.net writes:
On Tue, 11 Jan 2000, Michael S. Keller wrote:
I verified all duplex settings long ago. All run full-duplex. All four clients run NFS v2 on UDP.
How did you do this on the Suns?
Checking full duplex status of hme on Suns? use "ndd -get /dev/hme link_mode". I think it returns `1' for full duplex. The Answerbook on hme explains all this.
Checking NFS is v2 on UDP? "nfsstat -m".
Mounting NFS v2/UDP; "mount -o vers=2,proto-udp ..."
Or did you mean something else?
No, that is what I wondered about. I think you just made a typo typing it here; it should be "proto=".
I'm still pondering ...
Tom
On Wed, 12 Jan 2000, Luke Mewburn wrote:
Checking full duplex status of hme on Suns? use "ndd -get /dev/hme link_mode". I think it returns `1' for full duplex. The Answerbook on hme explains all this.
And you're sure that it is the same on the NACs and the switch, right?
Tom
tkaczma@gryf.net wrote:
On Wed, 12 Jan 2000, Luke Mewburn wrote:
Checking full duplex status of hme on Suns? use "ndd -get /dev/hme link_mode". I think it returns `1' for full duplex. The Answerbook on hme explains all this.
And you're sure that it is the same on the NACs and the switch, right?
Yes.
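For completeness, the checks on the other two ends would look roughly like this (port and interface names are made up, and the CatOS syntax is from memory, so treat it as a sketch):

    # on the filer (ONTAP), per physical interface
    ifconfig e3a                       # look for 100tx-fd in the mediatype
    ifconfig e3a mediatype 100tx-fd    # force it if it isn't
    # on the Catalyst 5505 (CatOS)
    show port 2/1                      # speed/duplex should read 100 / full
    set port speed 2/1 100
    set port duplex 2/1 full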
tkaczma@gryf.net wrote:
On Tue, 11 Jan 2000, Michael S. Keller wrote:
I verified all duplex settings long ago. All run full-duplex. All four clients run NFS v2 on UDP.
How did you do this on the Suns?
Tom
See Mr. Mewburn's message for the full scope. I set duplex for these clients in /etc/system:
set hme:hme_adv_autoneg_cap=0
set hme:hme_adv_100hdx_cap=0
set hme:hme_adv_100fdx_cap=1
set qfe:qfe_adv_autoneg_cap=0
set qfe:qfe_adv_100hdx_cap=0
set qfe:qfe_adv_100fdx_cap=1
ndd -get confirms it.
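In case it helps anyone else, the same forcing can also be done (and checked) at run time with ndd instead of /etc/system, along these lines; qfe needs the instance selected first, and the instance number here is just an example:

    ndd -set /dev/qfe instance 0
    ndd -set /dev/qfe adv_autoneg_cap 0
    ndd -set /dev/qfe adv_100hdx_cap 0
    ndd -set /dev/qfe adv_100fdx_cap 1
    ndd -get /dev/qfe link_mode    # should come back 1 (full duplex)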