On Thu, 1 Apr 1999, Val Bercovici (NetApp) wrote:
> Actually, unless we're in degraded mode (meaning a disk has failed and we either have no spare disk to rebuild onto, or we're still in the window while the rebuild onto the hot spare is in progress), there should be no RAID overhead whatsoever on reads. I'm sure Guy or someone will correct me if I'm wrong here...
How does the NetApp know if there is bad data on reads, then? Does it rely on the drive to signal bit errors?
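For what it's worth, here's how I picture a single-parity (RAID 4 style) group behaving in general - a toy sketch with made-up one-byte "blocks", not NetApp's actual implementation. As I understand it, a healthy read never touches the parity disk at all; reconstruction only kicks in once a drive reports an unreadable sector (via its own ECC) or drops out entirely.

#!/bin/sh
# Toy single-parity group: five 1-byte data "blocks" plus parity.
# All values are made up purely for illustration.
d1=0x3a; d2=0x7f; d3=0x12; d4=0xc4; d5=0x09

# Parity is the XOR of the data blocks (computed at write time).
p=$(( d1 ^ d2 ^ d3 ^ d4 ^ d5 ))

# Healthy read of block 3: go straight to the data disk, parity untouched.
printf 'read d3    = 0x%02x\n' "$(( d3 ))"

# Degraded read (drive 3 reported a media error or failed outright):
# rebuild the missing block from parity plus the surviving data disks.
d3_rebuilt=$(( p ^ d1 ^ d2 ^ d4 ^ d5 ))
printf 'rebuilt d3 = 0x%02x\n' "$(( d3_rebuilt ))"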
> 5 data disks or 5 disks including parity (meaning 4 data disks)?
7x9GB drives total, 5 data, 1 parity, 1 hot spare.
> Either way, that's well below our sweet spot of 14 x 9GB disks per RAID group. If sequential performance is critical, I would obviously consider adding more drives. FYI - I have no idea what our sweet spot is for 18GB drives. Either way, I suspect this is now your bottleneck.
Eek, 14 drives? I find that I run out of CPU cycles or raw throughput before I hit a storage capacity limit. With the 9GB drives being discontinued, having to buy 14x18GB drives (if the sweet spot is the same) because of performance instead of storage seems like a waste to me.
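Back-of-envelope, the way I picture it (the per-disk streaming rate below is just an assumed round number for drives of this class, not a NetApp figure): with single parity, only the data disks contribute to a big sequential read, so the stripe ceiling scales with the data-disk count until the filer's CPU or the network path becomes the limit.

#!/bin/sh
# Rough sequential-read ceiling for a single-parity RAID group.
# per_disk_mb is an assumed streaming rate, not a measured or vendor number.
per_disk_mb=8

for data_disks in 5 13; do
    echo "$data_disks data disks: ~$(( data_disks * per_disk_mb )) MB/sec stripe ceiling"
done

# Whichever is smallest - this stripe ceiling, the filer's CPU, or the
# network path (100Mb/s Ethernet tops out around 12 MB/sec) - is the
# real sequential bottleneck.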
> I always suspect NFS client code <g>. Actually, what are your mount options? They may also provide some clues...
On Solaris, "mount -o proto=udp,vers=3". On FreeBSD, "mount -o udpmnt,nfsv3". I believe both OS's default to 32K r/w sizes for NFSv3. Running nfsiod on FreeBSD also makes a huge difference, I found (5.5MB/sec vs. 9MB/sec).
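Spelled out with explicit sizes, the mounts look roughly like this (filer:/vol/vol0 and /n/filer are placeholder names, the 32K values just make the defaults explicit, and 4 nfsiod daemons is an arbitrary count - option spellings vary between releases, so check mount_nfs(8) on your systems):

# Solaris client: NFSv3 over UDP with explicit 32K transfer sizes.
mount -o proto=udp,vers=3,rsize=32768,wsize=32768 filer:/vol/vol0 /n/filer

# FreeBSD client: same mount as above, after starting the async I/O
# daemons that made the 5.5 -> 9 MB/sec difference.
nfsiod -n 4
mount -o udpmnt,nfsv3 filer:/vol/vol0 /n/filer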