----- Original Message ----- From: "Chris Lamb" <skeezics@measurecast.com> To: toasters@mathworks.com Sent: Monday, September 11, 2000 9:08 PM Subject: Re: 2 volumes or 1
Search the NOW site for the NFS performance white paper. [0] The systems and numbers are dated (F330 with about 1,000 ops/s!) but Figure 3 "Performance Sensitivity to Number of Disk Drives" is pretty telling. A 14 disk RAID group (12d+1p+1hs) seems the way to go if you are concerned with performance.
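As a back-of-the-envelope illustration of what that 12d+1p+1hs layout costs in capacity (my own arithmetic, not from the white paper; the 36 GB disk size is just an assumption for the example):

```python
# Capacity arithmetic for a 14-disk RAID group: 12 data disks,
# 1 parity disk, 1 hot spare. Disk size is a hypothetical 36 GB.

def raid_group_usable_gb(data_disks, parity_disks, spares, disk_gb):
    """Usable capacity and overhead fraction for a RAID-4 style group."""
    total = (data_disks + parity_disks + spares) * disk_gb
    usable = data_disks * disk_gb
    overhead = 1 - usable / total
    return usable, overhead

usable, overhead = raid_group_usable_gb(12, 1, 1, disk_gb=36)
print(f"usable: {usable} GB, overhead: {overhead:.0%}")
# 12 of 14 disks hold data, so only ~14% goes to parity + spare --
# part of why bigger RAID groups look attractive on paper.
```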
Well, that is somewhat outdated now, given that SCSI has pretty much gone the way of the dodo... which is curious, since Ultra160 (and soon Ultra320) would seem to be very competitive with the current generation of FibreChannel drives...
But to turn this thread on a slight tangent, I was curious about the performance advantages of spreading drives within a RAID group across multiple controllers. (Actually, I should probably just return my local SE's phone call, since I was discussing this with him a few weeks back and he said he had some new info... but heck, in the previous thread, three separate Netapp employees jumped on the response, and maybe y'all are interested too. :-) Given that More Disks Is Bettah, the question becomes whether or not it's worth the trouble (on a filer) to try to optimize the physical placement of those drives.
Worth the trouble? No.
However, if you can do it during initial setup, it is Bettah to keep each RAID group entirely on one controller, not spread across controllers the way you did it -- which you later found out. (I'm not sure this is specific to FC land; it was probably true with SCSI as well.)
What I find surprising about the F760 is that I've been able to saturate the machine at around 35-38 MB/sec sustained writes. (Hey! This ties in with the recent "gigabit ethernet performance" thread :-) I did some informal tests with both a single Sun E4500 (Sbus GbE card) and two Sun 420Rs (PCI GbE cards), running multiple "bonnie" or "dd" or "cpio" sessions to the two separate filer volumes. I was guessing that 32MB of NVRAM would be the bottleneck, but the CPU was pegged at 100%...
Things are highly tuned on Netapp servers so that all resources are used to the fullest extent possible. If you saw that CPU was only 70% with full writes (limited by 32MB NVRAM), you'd want to know why they weren't using the other 30% to improve response times. :)
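Some rough arithmetic on why the NVRAM need not stall things at those rates (my reasoning, not from this thread; the log-one-half-while-flushing-the-other behavior is my understanding of how consistency points work, so treat it as an assumption):

```python
# Rough timing for 32 MB of NVRAM under ~38 MB/sec of writes,
# assuming the filer logs into one half of NVRAM while flushing
# the other half to disk.

nvram_mb = 32
half_mb = nvram_mb / 2           # 16 MB logging, 16 MB flushing
write_rate_mb_s = 38             # observed sustained write rate

fill_time_s = half_mb / write_rate_mb_s
print(f"one half fills in ~{fill_time_s:.2f}s")
# As long as the flush of the idle half completes within that
# window, NVRAM never blocks the client -- leaving the CPU as
# the resource that pegs first, as observed.
```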
[4] Imagine my surprise when in testing I found I was pushing nearly 80 MB/sec through my poor old Sbus GbE card, while the PCI cards - in 64-bit, 66MHz PCI slots - topped out at 52 MB/sec... but that's just out-of-the-box, without tuning. Still, I'm a grumpy old Sun guy, and I'm still not convinced this PCI stuff is all that spiffy. :-P
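A quick sanity check on those numbers against the wire itself (my arithmetic, using the raw line rate; real TCP/NFS payload throughput is somewhat lower than this ceiling):

```python
# Gigabit Ethernet is 1000 Mbit/s raw, i.e. 125 MB/s before
# framing and protocol overhead. Compare the observed figures.

line_rate_mb_s = 1000 / 8        # 125 MB/s raw line rate
sbus_observed = 80               # MB/sec, Sbus GbE card
pci_observed = 52                # MB/sec, PCI GbE cards

print(f"Sbus card: {sbus_observed / line_rate_mb_s:.0%} of line rate")
print(f"PCI card:  {pci_observed / line_rate_mb_s:.0%} of line rate")
# Both sit well under the wire, so the card/driver path -- not the
# link -- is the limiter in these untuned tests.
```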
It's not, but it's not *too* bad, and yet it's a standard everyone can agree on. Rambus ain't that spiffy either, but it looks like it might be in our future (I speak of computing in general, not filers specifically).
Bruce