Bruce is right... the quote as outlined in the e-mail below is incorrect. The exact quote from the Advanced Administration and Troubleshooting "202" Student Guide Dated April 2000 (Page 8 of the performance tuning section) is:
"There is an approximate 10% decrease in write performance when the filer attempts to write to a RAID group spanning two adapters. This is due to inherent limitations in the PCI bus."
Page 41 of the Performance Tuning Section does not specifically say WRITE performance, but page 47 does. I'm sure this issue will be addressed in the next version of the course.
Please note... the statement above may no longer apply to the F840 filer (or later filers) or later releases of Data ONTAP. That statement addresses a specific limitation of the PCI bus on pre-F800 series filers.
As always .... YMMV.
-----Original Message-----
From: Bruce Sterling Woodcock [mailto:sirbruce@ix.netcom.com]
Sent: Tuesday, September 12, 2000 4:14 PM
To: Todd C. Merrill; Chris Lamb
Cc: toasters@mathworks.com
Subject: Re: 2 volumes or 1
----- Original Message -----
From: "Todd C. Merrill" tmerrill@mathworks.com
To: "Chris Lamb" skeezics@measurecast.com
Cc: toasters@mathworks.com
Sent: Tuesday, September 12, 2000 10:09 AM
Subject: Re: 2 volumes or 1
On Mon, 11 Sep 2000, Bruce Sterling Woodcock wrote:
From: "Chris Lamb" skeezics@measurecast.com
But to turn this thread on a slight tangent, I was curious about the performance advantages of spreading drives within a RAID group across multiple controllers.
[...]
interested too. :-) Given that More Disks Is Bettah, the question becomes whether or not it's worth the trouble (on a filer) to try to optimize the physical placement of those drives.
Worth the trouble? No.
Worth the trouble? Yes IMHO. The NetApp 202 class notes state:
"A RAID group that spans two different controllers shows a 10% performance degradation."
We've been over this before. This note, as written, is grossly false. The penalty is only on writes, and only noticeable if you're already saturating your NVRAM such that you are writing all the time. There is no constant 10% performance penalty.
Bruce
Bruce is right... the quote as outlined in the e-mail below is incorrect. The exact quote from the Advanced Administration and Troubleshooting "202" Student Guide Dated April 2000 (Page 8 of the performance tuning section) is:
"There is an approximate 10% decrease in write performance when the filer attempts to write to a RAID group spanning two adapters. This is due to inherent limitations in the PCI bus."
Also, since writes are grouped anyway and acknowledged immediately, it is not as if your write from the client takes 10% longer. Only if the filer is so write-loaded that it's constantly writing, such that future CPs are waiting for the previous CP to complete (the CP taking 10% longer), would the 10% become a noticeable factor in performance (a toy sketch of this follows below). At least that is my understanding - there is probably a slight impact before then just from bus contention issues, but I don't think it would be meaningful in most environments.
Bruce
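To make that concrete, here is a toy back-of-envelope model of the NVRAM/consistency-point interaction Bruce describes. This is not NetApp code, and the timings plus the clean "fill one half, flush the other" split are simplifying assumptions for illustration only: writes are acknowledged as soon as they land in NVRAM, a CP flushes one half to disk while the other half fills, and a 10% slower CP only becomes visible to clients once even the slower CP can no longer finish before the other half fills.

# Toy model of NVRAM / consistency-point (CP) back-pressure.
# Illustrative only -- the numbers are invented, not measured on a filer.

def client_visible_slowdown(fill_seconds, cp_seconds, penalty=0.10):
    """Fractional client-visible slowdown when a CP takes
    cp_seconds * (1 + penalty) instead of cp_seconds.

    fill_seconds: time for the incoming write load to fill one half of NVRAM
    cp_seconds:   time for a CP to flush the other half to disk
    """
    slow_cp = cp_seconds * (1 + penalty)
    # If even the slower CP finishes before the other half of NVRAM fills,
    # clients keep getting immediate NVRAM acknowledgements: no visible hit.
    if slow_cp <= fill_seconds:
        return 0.0
    # Otherwise new writes stall behind the previous CP (back-to-back CPs),
    # and write throughput is limited by how fast CPs complete.
    fast_rate = 1.0 / max(cp_seconds, fill_seconds)  # CPs per second before the penalty
    slow_rate = 1.0 / slow_cp                        # CPs per second with the penalty
    return 1.0 - slow_rate / fast_rate

# Lightly loaded filer: half of NVRAM takes 2.0 s to fill, a CP takes 0.8 s.
print(client_visible_slowdown(2.0, 0.8))   # 0.0  -> no client-visible effect
# Write-saturated filer: NVRAM fills in 0.7 s, a CP already takes 0.8 s.
print(client_visible_slowdown(0.7, 0.8))   # ~0.09 -> close to the full 10%

Under these made-up numbers the penalty stays invisible until the filer is already CP-bound, which is exactly Bruce's point.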
On Tue, 12 Sep 2000, Mohler, Anissa wrote:
Bruce is right... the quote as outlined in the e-mail below is incorrect. The exact quote from the Advanced Administration and Troubleshooting "202" Student Guide Dated April 2000 (Page 8 of the performance tuning section) is:
"There is an approximate 10% decrease in write performance when the filer attempts to write to a RAID group spanning two adapters. This is due to inherent limitations in the PCI bus."
Page 41 of the Performance Tuning Section does not specifically say WRITE performance, but page 47 does. I'm sure this issue will be addressed in the next version of the course.
I picked my quote from the same version, Chapter "Health and Performance," page 9. That too will need an edit.
Thanks for the clarification, Anissa.
And, as Bruce indicated, I often run into back-to-back (CP-to-CP) consistency points, so I would be affected by this 10% degradation if I had a fractured RAID group (more on spotting that below). I probably incorrectly assumed the person asking the question was concerned about performance at the top-end, also, where knowing about this 10% may have been helpful. Mea culpa.
Until next time...
The Mathworks, Inc.                        508-647-7000 x7792
3 Apple Hill Drive, Natick, MA 01760-2098  508-647-7001 FAX
tmerrill@mathworks.com                     http://www.mathworks.com
---
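For anyone wondering whether their own filer is actually living in the back-to-back regime Todd mentions: on the Data ONTAP releases I have used, sysstat run with the -x flag prints a CP-type column, and a steady run of B (back-to-back) or b (deferred back-to-back) entries there means new writes are waiting on the previous CP -- exactly the case where the 10% would surface. I can't promise that column is present on every release in the field, so check the sysstat man page on your filer before leaning on it.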
[snip]
And, as Bruce indicated, I often run into back-to-back (CP-to-CP) consistency points, so I would be affected by this 10% degradation if I had a fractured RAID group. I probably incorrectly assumed the person asking the question was concerned about performance at the top-end, also, where knowing about this 10% may have been helpful. Mea culpa.
Not concerned, yet anyway. I only wish we were pushing on things that hard. "An unused cycle is a wasted cycle!" Ha ha, no, it's nice to have headroom. :-)
I was mostly curious. Traditional wisdom with RAID is that spreading things out over lots of disks on lots of controllers is better for performance, but since the filers are horses of a different color, that traditional wisdom might not apply.
So, as it stands, I currently have each volume/RAID group on a separate FC-AL adapter, and all is right with the world. Now, if/when it comes time to upgrade, then we'll have to see if re-balancing things makes sense based on what the new hardware looks like. But thanks for all the input, guys.
-- Chris
P.S. Y'know, it dawned on me that Sun has a Gigabit Ethernet+FC-AL combo PCI card. They use OpenBoot PROMs. Filers use OpenBoot. Hmmm. OEM possibility? (Of course, you have to put that pup in a 66 MHz/64-bit slot; is that a problem on the 700 series? I suppose I should glance at the specs...)
--
Chris Lamb, Unix Guy
MeasureCast, Inc. 503-241-1469 x247
skeezics@measurecast.com