On Fri, 6 Aug 1999, Dave Hitz wrote:
> Always is a long time!
True, although "always" in this industry usually only lasts 36 to 48 months. ;-)
> Historically, the wires to disk drives (e.g. SCSI) have always been faster than the wires to networks (e.g. Ethernet), but networking wires have been gaining ground. Gigabit Ethernet has the same raw bandwidth as Fibrechannel.
I agree with you here, but I was considering the whole NAS vs. SAN issue, which is more than just the wires. You can max out the throughput of a 100 MB/sec FibreChannel array with far less hardware than it takes to push the same throughput over NFS on Gigabit Ethernet, regardless of the theoretical bandwidth of the underlying physical transport.
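To put very rough numbers on that, here's a back-of-envelope sketch; the 30% protocol/host overhead figure is just an assumption for illustration, not a measurement of any particular stack:

    # Back-of-envelope only; the 30% overhead figure is an assumption
    # for illustration, not a measured number.
    GIGABIT_RAW_MB_S = 1000 / 8   # Gigabit Ethernet: ~125 MB/sec raw
    FC_LOOP_MB_S = 100            # 1 Gb/s FibreChannel: ~100 MB/sec to the host

    # Ethernet framing, IP/TCP headers, RPC/NFS encoding, acks, and host-side
    # copies all come out of the Gigabit number before the application sees it.
    assumed_overhead = 0.30
    nfs_usable_MB_s = GIGABIT_RAW_MB_S * (1 - assumed_overhead)

    print("FC block I/O:            ~%d MB/sec" % FC_LOOP_MB_S)
    print("NFS over Gigabit (est.): ~%d MB/sec" % nfs_usable_MB_s)

And even that estimate ignores the CPU the host burns on TCP checksums and data copies, which is where the "more hardware" comes from.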
> I admit that TCP/IP imposes an overhead on Gb Ethernet, even though Gb Ether and Fibrechannel have the same raw bandwidth, but when you look at the trend lines, you have to admit that network performance has made amazing gains.
It sure has, to the point where the raw bandwidth available over your copper or glass network isn't the bottleneck anymore. It's the protocols that run over that wire and the hosts at either end of the wire.
> Plus, the raw bandwidth of disk drives is not improving as quickly as either type of wire. So increasingly, the bottleneck is moving out of the wires, and into the disk drive itself. This is especially true if any seeking is involved, but I believe it's even true of raw disk head bandwidth.
I don't think increases in disk platter throughput matter as much, since you can simply gang together as many drives as you need to achieve the desired bandwidth. Maxing out a SCSI chain's or FC loop's bandwidth has never been a problem as long as you can keep adding drives. With individual drives capable of sustaining 15 MB/sec or more these days, you'll run out of bus bandwidth or host CPU cycles first.
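As a quick sketch of that arithmetic (using the usual nominal bus ratings and the 15 MB/sec per-drive figure above):

    import math

    # How many drives does it take to saturate the wire?
    drive_MB_s = 15  # sustained per-drive rate cited above
    buses = [("Ultra2 wide SCSI", 80),        # nominal MB/sec
             ("1Gb FibreChannel loop", 100)]

    for name, bus_MB_s in buses:
        n = math.ceil(float(bus_MB_s) / drive_MB_s)
        print("%s: ~%d drives streaming flat-out fill the bus" % (name, n))

So a half-dozen or so of today's drives streaming sequentially already fill the wire; per-platter bandwidth isn't the scarce resource.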
I don't want to get in a fight about when exactly the cross-over will occur, or even if it definitely will occur, but given the different trend lines, I think it's worth considering the possibility that NAS really could match local storage, even just in terms of raw disk performance, as opposed to file system ops, and maybe even surpass it.
I can't see that happening, unless the line between SAN and NAS blurs so much that there is no longer any functional difference (i.e., drives that have their own IP addresses and speak NFS/CIFS/whatever natively instead of SCSI). The tricks a NetApp filer uses to speed up access can be applied just as effectively to local storage, which will always have the advantage of fewer layers (and thus less latency and protocol overhead) between the disk and the host. Access to a filesystem over the network will always be limited by how fast the filer itself can get to its own drives. That's why I say local storage will always be faster than networked storage.
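For what it's worth, here is a crude way to see the layering argument; the labels below are illustrative, not an exhaustive or official description of either stack:

    # Illustrative layer counts for a single read; labels are rough.
    local_path = ["application", "VFS", "local filesystem", "buffer cache",
                  "SCSI/FC driver", "HBA", "disk"]

    nas_path = ["application", "VFS", "NFS client", "RPC/XDR", "TCP/IP stack",
                "NIC", "network", "filer NIC", "filer TCP/IP stack",
                "filer NFS server", "filer filesystem (WAFL)",
                "filer RAID/driver", "disk"]

    print("local access: %d layers" % len(local_path))
    print("NAS access:   %d layers" % len(nas_path))

Every extra layer on the NAS side adds latency and protocol overhead before the request ever reaches a platter.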