I'm evaluating a NAS filer similar to the ones sold by NetApp and Auspex. This particular model is made by Nitech in Irvine, CA. These NFS vendors speak about benchmarks in ops/sec. It's more intuitive for me to think of benchmarks in terms of MB/sec throughputs for a large file write or read.
My quick-and-dirty tests show that I can write at 1.8 MByte/sec over a 100Mbit network using NFS v3. I can FTP at about 7 MByte/sec.
My question is: do these numbers seem reasonable? If not, what aspects of NFS should I tweak?
---matt
p.s. methods:
to make a 100MB file: dd if=/dev/zero of=/tmp/bigfile bs=1024 count=102400
time cp /tmp/bigfile /home/bigfile
(where /home is the NFS server).
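A quick way to turn that timing into MB/sec (not part of the original post, just a sketch of the arithmetic) is to divide the file size by the elapsed seconds that "time" reports; size_mb and secs below are placeholders to be filled in from the dd and time output above:

  size_mb=100     # file size in MB from the dd command above
  secs=...        # elapsed (real) seconds reported by "time cp"
  echo "scale=2; $size_mb / $secs" | bc -l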
On Wed, 4 Aug 1999, Matt Harrington wrote:
I'm evaluating a NAS filer similar to the ones sold by NetApp and Auspex. This particular model is made by Nitech in Irvine, CA. These NFS vendors speak about benchmarks in ops/sec. It's more intuitive for me to think of benchmarks in terms of MB/sec throughputs for a large file write or read.
This is because local storage will always be faster than NAS if the only thing you look at is raw sequential throughput. Throughput to an NFS server will come naturally through the evolution of faster networks, so these vendors are optimizing for transaction speed: if they can unlink 5000 files per second but your local filesystem can only sustain 1000 unlink/sec, NFS will seem faster despite a raw bandwidth disadvantage.
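(As a rough illustration of what an ops/sec-style benchmark measures, and not something from the original posts, a crude metadata test is just a timed loop of creates and unlinks against the NFS mount; /home and the count of 5000 here are arbitrary:)

  mkdir /home/opstest
  cd /home/opstest
  # 5000 create+unlink pairs = 10000 metadata ops; ops/sec is roughly 10000 / elapsed seconds
  time sh -c 'i=0; while [ $i -lt 5000 ]; do touch f$i; rm -f f$i; i=`expr $i + 1`; done'
  # the fork/exec cost of touch, rm and expr adds client-side overhead, so treat the result as a floor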
Having said that though...
# time dd if=/dev/zero of=/oracle_backup/bigfile bs=30k count=5000
5000+0 records in
5000+0 records out
0.07u 3.95s 0:15.83 25.3%
# bc -l
bc 1.04
Copyright (C) 1991, 1992, 1993, 1994, 1997 Free Software Foundation, Inc.
This is free software with ABSOLUTELY NO WARRANTY.
For details type `warranty'.
30*5000/15.83
9475.67909033480732785849
/oracle_backup is a filesystem on an F740 connected to a 1x300-MHz Sun E450 via full-duplex 100baseT. It should be able to hit wire speed with that setup, but for some reason it doesn't. Both boxes are idle, but I haven't done any special tuning beyond the basic mount settings (NFS v3 over UDP with 32K read/write sizes).
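(For reference, the mount settings in question would look something like this on a Solaris client; the filer hostname and export path here are placeholders, not the actual configuration from the post:)

  mount -F nfs -o vers=3,proto=udp,rsize=32768,wsize=32768 filer:/vol/oracle_backup /oracle_backup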
My quick-and-dirty tests show that I can write at 1.8 MByte/sec over a 100Mbit network using NFS v3. I can FTP at about 7 MByte/sec.
My question is: do these numbers seem reasonable? If not, what aspects of NFS should I tweak?
Well, my NFS numbers are already beating your FTP numbers. ;-)
On Aug 05, Brian Tao wrote:
On Wed, 4 Aug 1999, Matt Harrington wrote:
These NFS vendors speak about benchmarks in ops/sec. It's more intuitive for me to think of benchmarks in terms of MB/sec throughputs for a large file write or read.
This is because local storage will always be faster than NAS if
the only thing you look at is raw sequential throughput.
...
[But for file system operations, sometimes] NFS will seem faster despite a raw bandwidth disadvantage.
Always is a long time!
Historically, the wires to disk drives (e.g. SCSI) have always been faster than the wires to networks (e.g. Ethernet), but networking wires have been gaining ground. Gigabit Ethernet has the same raw bandwidth as Fibrechannel. And if you look at the next generation, networking folks are talking about 10 Gb Ethernet, while the storage folks are talking about 2 Gb and 4 Gb Fibrechannel. I admit that TCP/IP imposes an overhead on Gb Ethernet, even though Gb Ether and Fibrechannel have the same raw bandwidth, but when you look at the trend lines, you have to admit that network performance has made amazing gains.
Plus, the raw bandwidth of disk drives is not improving as quickly as either type of wire. So increasingly, the bottleneck is moving out of the wires, and into the disk drive itself. This is especially true if any seeking is involved, but I believe it's even true of raw disk head bandwidth.
I don't want to get in a fight about when exactly the cross-over will occur, or even if it definitely will occur, but given the different trend lines, I think it's worth considering the possibility that NAS really could match local storage, even just in terms of raw disk performance, as opposed to file system ops, and maybe even surpass it.
Amazing and counter-intuitive things can happen when different technologies improve at different rates. When you finally reach a cross-over point, as seems to be happening now with networking and storage wires, it can really turn the world upside down.
Dave
On Fri, 6 Aug 1999, Dave Hitz wrote:
Always is a long time!
True, although "always" in this industry usually only lasts 36 to 48 months. ;-)
Historically, the wires to disk drives (e.g. SCSI) have always been faster than the wires to networks (e.g. Ethernet), but networking wires have been gaining ground. Gigabit Ethernet has the same raw bandwidth as Fibrechannel.
I agree with you here, but I was considering the whole NAS vs. SAN issue, which is more than just the wires. You can max out the throughput of a 100MB/sec FibreChannel array with far less hardware than it takes to achieve the same throughput with NFS over Gigabit, regardless of the theoretical bandwidth of the underlying physical transport.
I admit that TCP/IP imposes an overhead on Gb Ethernet, even though Gb Ether and Fibrechannel have the same raw bandwidth, but when you look at the trend lines, you have to admit that network performance has made amazing gains.
It sure has, to the point where the raw bandwidth available over your copper or glass network isn't the bottleneck anymore. It's the protocols that run over that wire and the hosts at either end of the wire.
Plus, the raw bandwidth of disk drives is not improving as quickly as either type of wire. So increasingly, the bottleneck is moving out of the wires, and into the disk drive itself. This is especially true if any seeking is involved, but I believe it's even true of raw disk head bandwidth.
I don't think there is as great a need for increases in disk platter throughput since you can simply gang as many drives together as you need to achieve the desired bandwidth. Maxing out a SCSI chain's or FC loop's bandwidth has never been a problem as long as you can just add more drives. With individual drives capable of sustaining 15MB/sec or more these days, you'll run out of bus bandwidth or host CPU cycles first.
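(Putting a rough number on that, in the same bc style used earlier in the thread and using the 15MB/sec-per-drive and 100MB/sec-loop figures above:)

  # bc -l
  100/15
  6.66666666666666666666

In other words, on those figures something like seven drives streaming sequentially is already enough to fill the loop before platter speed becomes the limit.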
I don't want to get in a fight about when exactly the cross-over will occur, or even if it definitely will occur, but given the different trend lines, I think it's worth considering the possibility that NAS really could match local storage, even just in terms of raw disk performance, as opposed to file system ops, and maybe even surpass it.
I can't see that happening, unless the line between SAN and NAS blurs so much that there is no longer any functional difference (i.e., drives with their own IP addresses that speak NFS/CIFS/whatever natively instead of SCSI). The tricks a Netapp filer uses to speed up access can be applied just as effectively to local storage, which will always have the advantage of fewer layers (and thus less latency and protocol overhead) between the disk and the host. Speed of access to a filesystem over a network will always be limited by how fast the filer itself can access its own drives. That's why I say local storage will always be faster than networked storage.