What can I expect for Gigabit NFS performance between my F630 NetApps and a pretty hefty Linux box, both running Gigabit network devices?
My filer is running:
NetApp Release 5.3.6R2: Sat Aug 5 09:40:44 PDT 2000
The Gigabit board is:
slot 10: Gigabit Ethernet Controller e10 MAC Address: 00:60:cf:20:2b:2f (1000fx)
e10: flags=300043<UP,BROADCAST,RUNNING,TCPCKSUM> mtu 1500
        inet 209.41.211.225 netmask 0xffffffc0 broadcast 209.41.211.255
        ether 00:60:cf:20:2b:2f (1000fx)
I'm using a Transition Networks 1000Base-SX to 1000Base-T media converter into a Linksys switch with 2 Gigabit ports and 24 100BT ports.
The Linux box is running an Intel e1000 server card
e1000: eth0 NIC Link is Up 1000 Mbps Full Duplex
All wiring is CAT 5E.
I mount the filer on Linux as:
mlswna:/ on /backup/mnt/mlswna/rt type nfs (ro,timeo=14,rsize=32768,wsize=32768,nfsvers=3,addr=209.41.211.225)
UDP, NFS3, 32K rsize. (Linux TCP NFS support is not quite ready for prime time.)
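(For reference, the mount invocation behind that line looks something like:

mount -t nfs -o ro,nfsvers=3,udp,rsize=32768,wsize=32768,timeo=14 mlswna:/ /backup/mnt/mlswna/rt

udp being the default on these kernels, it's spelled out only for clarity.)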
I copied a 100MB file in under 10 seconds. That's around 10Mbytes/second.
$ ls -l diskfile
-rw-r--r--  1 34726  34726  104857600 May  9 13:44 diskfile
$ time cp diskfile /dev/null
9.85s real  0.00s user  0.31s system
$ bs -c 104857600/9.85
10,645,441.6244
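(The same arithmetic with plain bc, for anyone without that calculator:

$ echo "scale=4; 104857600/9.85" | bc
10645441.6243

i.e. roughly 10.1 MBytes/second.)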
I would expect much better than this. The network/switch I'm on has little traffic. The only Gigabit use is the filer and Linux boxes.
What other advice can you give me? What other knobs should I turn?
Regards,
Dan O'Brien, dmobrien@lcsi.net Cell: 614-783-4859 Work: 614-476-8473 Home: 740-927-2178 Pataskala, OH
I don't know how much performance you should be able to push on an F630, but try cabling your netapp directly to your linux host and enabling jumbo frames (set the mtu to be 9000-something - I forget the exact max mtu for netapp). That should significantly improve performance.
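For example (whether 9000 is accepted on both ends depends on the OnTap release and NIC - treat this as a sketch):

linux# ifconfig eth0 mtu 9000
filer> ifconfig e10 mtusize 9000

Both ends have to agree, or the large frames simply get dropped.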
And I don't know what kernel you're running, but we've had very good success with later 2.4.x series using NFS TCP.
Thanks, Matt
-- Matthew Zito GridApp Systems Email: mzito@gridapp.com Cell: 646-220-3551 Phone: 212-358-8211 x 359 http://www.gridapp.com
Dan,
Do a sanity check with sysstat 1.
See if the CPU is at or near 100% when moving the data. You may be hitting a CPU or disk limitation before breaking the 100Mbit barrier.
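On the filer console that's just:

mlswfsa> sysstat 1
 CPU    NFS  CIFS  HTTP   Net kB/s    Disk kB/s   Tape kB/s  Cache
                          in    out   read write  read write   age

Watch the CPU column while the copy is running.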
Joe
Joseph Bishop wrote:
> Do a sanity check with sysstat 1. [snip]
My best performance thus far is with the client using NFSv3, TCP, rsize=16384.
mlswna:/ on /backup/mnt/mlswna/rt type nfs (ro,timeo=14,rsize=16384,wsize=16384,nfsvers=3,tcp,addr=209.41.211.225)
-rw-r--r-- 1 34726 34726 524288000 Jul 11 09:15 /backup/mnt/mlswna/rt/home/dobrien1/diskfile500
55.58s real 0.04s user 1.51s system
That's only 9,433,033.46 Bytes/Second.
This is the tail end of sysstat 1 on the filer:
 CPU    NFS  CIFS  HTTP   Net kB/s    Disk kB/s   Tape kB/s  Cache
                          in    out   read write  read write   age
 54%    710     0     0   432  12282  11424     0     0    0      1
 47%    624     0     0   382  10796   9936     0     0    0      1
 47%    647     0     0   396  11206  10350     0     0    0      1
 52%    670     0     0   408  11600  10710     0     0    0      1
 49%    601     0     0   371  10399   9657     0     0    0      1
 49%    596     0     0   366  10315   9504     0     0    0      1
 54%    602     0     0   371  10429  10054   460     0    0      1
 49%    584     0     0   361  10118   9341     0     0    0      1
 39%    458     0     0   282   7927   7324     0     0    0      1
 48%    579     0     0   359  10032   9293     0     0    0      1
 48%    572     0     0   352   9910   9157     0     0    0      1
 33%    417     0     0   259   7225   6698     0     0    0      1
 56%    660     0     0   406  11422  10544     0     0    0      0
 49%    595     0     0   366  10308   9585     0     0    0      0
 39%    488     0     0   301   8455   7803     0     0    0      0
 39%    476     0     0   292   8238   7584     0     0    0      0
 38%    461     0     0   284   7987   7351     0     0    0      0
 41%    458     0     0   284   7934   7571   348     0    0      0
 42%    516     0     0   320   8940   8280     0     0    0      0
 42%    528     0     0   320   9001   8284     0     0    0      0
 40%    500     0     0   310   8662   8068     0     0    0      0
 47%    565     0     0   348   9788   9001     0     0    0      0
 48%    584     0     0   362  10117   9385     0     0    0      0
 62%    728     0     0   448  12599  11644     0     0    0      0
 40%    483     0     0   298   8368   7727     0     0    0      0
 50%    604     0     0   372  10464   9653     0     0    0      0
 50%    603     0     0   370  10420   9620     0     0    0      0
 45%    534     0     0   331   9251   8564     0     0    0      0
 48%    539     0     0   334   9338   8904   364     0    0      0
 31%    389     0     0   240   6736   6150     0     0    0      0
  1%      0     0     0     0      0      0     0     0    0
I'm pushing the filer around 50% busy.
I've not tweaked MTU yet.
What happens when I set the MTU higher and the filer talks to another system with an MTU of only 1500 bytes? Is it negotiated?
Regards,
Dan O'Brien, dmobrien@lcsi.net Cell: 614-783-4859 Work: 614-476-8473 Home: 740-927-2178 Pataskala, OH
Yep, MTU is negotiated. TCP should be a big win; UDP can have only one datagram in flight. If a packet is fragmented, the UDP reassembly code has to wait for the rest of the datagram, and UDP doesn't support a sliding window like TCP, so in the TCP case multiple packets can be in flight. From what I have read, the real performance of GigE comes when you use jumbo frames, where the max frame size is 9KB. I would try MTUs of 1.5KB, 4KB, 8KB, and 9KB.
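One quick way to verify what MTU actually survives the path end to end is to ping with fragmentation forbidden (Linux iputils syntax; payload size is the MTU minus 28 bytes of IP+ICMP headers, so 1472 for a 1500 MTU, 8972 for 9000):

$ ping -M do -s 1472 209.41.211.225
$ ping -M do -s 8972 209.41.211.225

If the larger ping fails with a "message too long" error, something in the path is still at 1500.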
Let me know how it goes - this is cool. I plan on building a GigE testing environment for my server this winter. It looks like Opteron is the way to go for Java performance. Did you have time to read that article from acehardware I sent you?
Eric
Eric Chet -> echet@Trilegiant.com, ejc@bazzle.com, ejc@kenpo-jujitsu.com Technical Lead/Architect Trilegiant Inc. Distributed OO Systems, J2EE, CORBA Kenpo-JuJitsu the Ultimate in Self Defense, Tracy's System, Tai Chi for Life ejc@FreeBSD.org -> "Live Free or Die"
Dan,
Do you have only one thread working? Can you try more than one client? You might find that the client is the bottleneck if only one read thread is running.
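From a single client you can fake that with a few parallel reads - a rough sketch, assuming several distinct test files on the mount (re-reading the same file would just hit the client's cache):

$ for i in 1 2 3 4; do cp /backup/mnt/mlswna/rt/home/dobrien1/diskfile$i /dev/null & done; wait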
Cheers,
Joe
Joseph Bishop wrote:
> Do you have one thread working? Can you try more than one client? [snip]
Thanks, Joe. You've hit it on the head. A single cp (copy) from Linux is not enough load to get the F630 pumping out data. With 3 and 4 copies in parallel I was able to get 12.8 MBytes/sec and 13 MBytes/sec.
Here's a sampling of sysstat output during the 4 copies:
 CPU    NFS  CIFS  HTTP   Net kB/s    Disk kB/s   Tape kB/s  Cache
                          in    out   read write  read write   age
 77%    794     0     0   483  13760  12688     0     0    0      0
 72%    742     0     0   450  12850  11915     0     0    0      0
 72%    723     0     0   437  12508  11960   360     0    0      0
 69%    722     0     0   438  12492  11543    52     0    0      0
 71%    740     0     0   447  12794  11871     0     0    0      0
 64%    683     0     0   416  11835  10904     0     0    0      0
This is not phenomenal, but probably the best I can do on this older filer (an F630 running NetApp Release 5.3.6R2). I guess I'm going to have to run more in parallel.
BTW: I can't change the MTU on the Alteon SX card this filer has. It says it's not supported, so I'm stuck with an mtusize of 1500.
Regards,
Dan O'Brien, dmobrien@lcsi.net Cell: 614-783-4859 Work: 614-476-8473 Home: 740-927-2178 Pataskala, OH
Try the simple dd command; it often gets much better results while consuming a lot less CPU:

dd if=/dev/mem of=/path-to-filer/a-file bs=1024 count=102400

This creates a 100MB file. With only a single machine you can sometimes burst to the Ethernet theoretical limit: I was able to transmit with a max throughput of 12134 KB/s.
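The same idea works in the read direction - something along these lines:

dd if=/path-to-filer/a-file of=/dev/null bs=32768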
On Thu, 10 Jul 2003, Dan OBrien wrote:
What can I expect for Gigabit NFS performance between my F630 NetApps and a pretty hefty Linux box, both running Gigabit network devices? ....
The best I've been able to do on a Linux box (Red Hat kernel) with an e1000 interface is 16-17 MBytes/sec. Here's a test nearly identical to yours; the remote file lives on an F820 running 6.1.3:
[cfst]$ ls -l cse509.tar.gz
-rw-r--r--  1 root  wheel  299598195 Aug  9  2001 cse509.tar.gz
[cfst]$ time cp cse509.tar.gz /dev/null
real    0m17.126s
user    0m0.029s
sys     0m1.652s
[cfst]$ perl -e 'printf "%.2f\n", ((299598195/17.12)/(1024**2))' 16.69
That's about the same total bandwidth I saw *outbound* from the same Linux box (which is an nfs/smb server) when I had a bunch of nfs clients doing simultaneous reads.
--Paul Heinlein heinlein@cse.ogi.edu
This test is from a RH 7.2 box, kernel 2.4.18-19.7.xsmp, dual 2.8 GHz P4s, Intel PRO/1000 F adapter + e1000 driver.
The filer under test is a lightly loaded FAS960 in a cluster, ONTAP 6.4.1. The 960 has four GigEs in a single vif, two FC loops, and 28 spindles.
NFSv2 udp  8192 blocks, read  : 104.8576 MB in 1.8278 secs, 57.3695 MB/sec
NFSv2 udp 16384 blocks, read  : 104.8576 MB in 1.6946 secs, 61.8786 MB/sec
NFSv2 udp 32768 blocks, read  : 104.8576 MB in 1.6966 secs, 61.8031 MB/sec
NFSv2 udp  8192 blocks, write : 104.8576 MB in 2.4288 secs, 43.1733 MB/sec
NFSv2 udp 16384 blocks, write : 104.8576 MB in 2.5095 secs, 41.7838 MB/sec
NFSv2 udp 32768 blocks, write : 104.8576 MB in 2.5437 secs, 41.2220 MB/sec

NFSv3 udp  8192 blocks, read  : 104.8576 MB in 1.7548 secs, 59.7553 MB/sec
NFSv3 udp 16384 blocks, read  : 104.8576 MB in 1.4046 secs, 74.6555 MB/sec
NFSv3 udp 32768 blocks, read  : 104.8576 MB in 1.1445 secs, 91.6215 MB/sec
NFSv3 udp  8192 blocks, write : 104.8576 MB in 2.4799 secs, 42.2826 MB/sec
NFSv3 udp 16384 blocks, write : 104.8576 MB in 2.0306 secs, 51.6377 MB/sec
NFSv3 udp 32768 blocks, write : 104.8576 MB in 1.7512 secs, 59.8763 MB/sec

NFSv3 tcp  8192 blocks, read  : 104.8576 MB in 1.3087 secs, 80.1224 MB/sec
NFSv3 tcp 16384 blocks, read  : 104.8576 MB in 1.1917 secs, 87.9902 MB/sec
NFSv3 tcp 32768 blocks, read  : 104.8576 MB in 1.1585 secs, 90.5098 MB/sec
NFSv3 tcp  8192 blocks, write : 104.8576 MB in 1.9233 secs, 54.5192 MB/sec
NFSv3 tcp 16384 blocks, write : 104.8576 MB in 1.9262 secs, 54.4380 MB/sec
NFSv3 tcp 32768 blocks, write : 104.8576 MB in 1.6358 secs, 64.1016 MB/sec
YMMV.
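If you want to reproduce numbers like these without a dedicated benchmark tool, a rough sketch (mount point and file names are made up):

#!/bin/sh
# cycle through protocol/rsize combinations; the umount/mount
# between runs keeps the client-side cache from skewing the reads
for proto in udp tcp; do
    for rs in 8192 16384 32768; do
        umount /mnt/test 2>/dev/null
        mount -o ro,nfsvers=3,$proto,rsize=$rs,wsize=$rs filer:/vol/test /mnt/test
        echo "NFSv3 $proto rsize=$rs:"
        time cp /mnt/test/bigfile /dev/null
    done
done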
I need to do some poking around, because the NFSv3 32K TCP read rate used to be 100 MByte/sec... a recent kernel change may have broken something.
-skottie
Someone off-list suggested I turn off "flow control" and "auto negotiate" on the SX fiber card.
I see a "flowcontrol" option but no "duplex"-type parameters.
mlswfsa> ifconfig e10
e10: flags=300043<UP,BROADCAST,RUNNING,TCPCKSUM> mtu 1500
        inet 209.41.211.225 netmask 0xffffffc0 broadcast 209.41.211.255
        ether 00:60:cf:20:2b:2f (1000fx)

mlswfsa> ifconfig
usage: ifconfig <interface>
        [ [ alias | -alias ] <address> ]
        [ up | down ]
        [ netmask <mask> ]
        [ broadcast <address> ]
        [ mtusize <size> ]
        [ mediatype <type> ]
        [ flowcontrol { none | receive | send | full } ]
        [ trusted | untrusted ]
        [ wins | -wins ]
        [ [ partner { <address> | <interface> } ] | [ -partner ] ]
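From that usage line, the syntax would presumably be just:

mlswfsa> ifconfig e10 flowcontrol none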
I'm on an older version of OnTap running on an F630:

mlswfsa> version
NetApp Release 5.3.6R2: Sat Aug 5 09:40:44 PDT 2000
Thanks,
Dan O'Brien, dmobrien@lcsi.net Cell: 614-783-4859 Work: 614-476-8473 Home: 740-927-2178 Pataskala, OH
Dan OBrien wrote:
> Someone off list suggested I turn off "flow control" and "auto negotiate" on the SX fiber card. [snip]
Trying to set flowcontrol gives me an error.
mlswfsa> ifconfig e10
e10: flags=300043<UP,BROADCAST,RUNNING,TCPCKSUM> mtu 1500
        inet 209.41.211.225 netmask 0xffffffc0 broadcast 209.41.211.255
        ether 00:60:cf:20:2b:2f (1000fx)

mlswfsa> ifconfig e10 209.41.211.225 netmask 255.255.255.192 flowcontrol none
ifconfig: ioctl (SIOCSFLOWCONTROL): Invalid argument
Do I have to do that before the interface is brought online (i.e., in the rc script)?
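If so, the /etc/rc line would presumably combine it with the address setup, something like (same values as above; whether this release accepts flowcontrol at all is another question):

ifconfig e10 209.41.211.225 netmask 255.255.255.192 flowcontrol none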
Regards,
Dan O'Brien, dmobrien@lcsi.net Cell: 614-783-4859 Work: 614-476-8473 Home: 740-927-2178 Pataskala, OH