This test is from a RH 7.2 box (kernel 2.4.18-19.7.xsmp, dual 2.8 GHz P4s) with an Intel PRO/1000 F adapter and the e1000 driver.
The filer under test is a lightly loaded FAS960 in a cluster running ONTAP 6.4.1; the 960 has four GigE ports in a single vif, two FC loops, and 28 spindles.
NFSv2 udp  8192 blocks, read  : 104.8576 MB in 1.8278 secs, 57.3695 MB/sec
NFSv2 udp 16384 blocks, read  : 104.8576 MB in 1.6946 secs, 61.8786 MB/sec
NFSv2 udp 32768 blocks, read  : 104.8576 MB in 1.6966 secs, 61.8031 MB/sec
NFSv2 udp  8192 blocks, write : 104.8576 MB in 2.4288 secs, 43.1733 MB/sec
NFSv2 udp 16384 blocks, write : 104.8576 MB in 2.5095 secs, 41.7838 MB/sec
NFSv2 udp 32768 blocks, write : 104.8576 MB in 2.5437 secs, 41.2220 MB/sec

NFSv3 udp  8192 blocks, read  : 104.8576 MB in 1.7548 secs, 59.7553 MB/sec
NFSv3 udp 16384 blocks, read  : 104.8576 MB in 1.4046 secs, 74.6555 MB/sec
NFSv3 udp 32768 blocks, read  : 104.8576 MB in 1.1445 secs, 91.6215 MB/sec
NFSv3 udp  8192 blocks, write : 104.8576 MB in 2.4799 secs, 42.2826 MB/sec
NFSv3 udp 16384 blocks, write : 104.8576 MB in 2.0306 secs, 51.6377 MB/sec
NFSv3 udp 32768 blocks, write : 104.8576 MB in 1.7512 secs, 59.8763 MB/sec

NFSv3 tcp  8192 blocks, read  : 104.8576 MB in 1.3087 secs, 80.1224 MB/sec
NFSv3 tcp 16384 blocks, read  : 104.8576 MB in 1.1917 secs, 87.9902 MB/sec
NFSv3 tcp 32768 blocks, read  : 104.8576 MB in 1.1585 secs, 90.5098 MB/sec
NFSv3 tcp  8192 blocks, write : 104.8576 MB in 1.9233 secs, 54.5192 MB/sec
NFSv3 tcp 16384 blocks, write : 104.8576 MB in 1.9262 secs, 54.4380 MB/sec
NFSv3 tcp 32768 blocks, write : 104.8576 MB in 1.6358 secs, 64.1016 MB/sec
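The MB/sec column is just size divided by elapsed time. As a quick sanity check of the arithmetic (a sketch, not part of the benchmark tool; values taken from the NFSv3 tcp 32768-block read line):

```shell
# 104.8576 MB transferred in 1.1585 secs -> should print roughly 90.51
# (it won't match the table to four places, since the printed elapsed
# time is itself rounded).
awk 'BEGIN { printf "%.4f\n", 104.8576 / 1.1585 }'
```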
YMMV.
I need to do some poking around, because the NFSv3 32K TCP read rate used to be 100 MB/sec... a recent kernel change may have broken something.
-skottie
Paul Heinlein wrote:
On Thu, 10 Jul 2003, Dan OBrien wrote:
What can I expect for Gigabit NFS performance between my F630 NetApps and a pretty hefty Linux box, both running Gigabit network devices? ....
The best I've been able to do on a Linux box (Red Hat kernel) with an e1000 interface is 16-17 MB/sec. Here's a test nearly identical to yours; the remote file lives on an F820 running 6.1.3:
[cfst]$ ls -l cse509.tar.gz
-rw-r--r--  1 root  wheel  299598195 Aug  9  2001 cse509.tar.gz
[cfst]$ time cp cse509.tar.gz /dev/null
real    0m17.126s
user    0m0.029s
sys     0m1.652s
[cfst]$ perl -e 'printf "%.2f\n", ((299598195/17.12)/(1024**2))'
16.69
That's about the same total bandwidth I saw *outbound* from the same Linux box (which is an nfs/smb server) when I had a bunch of nfs clients doing simultaneous reads.
--Paul Heinlein heinlein@cse.ogi.edu