On Mon, 3 Apr 2000, Bruce Sterling Woodcock wrote:
> Ahh, okay. So you are using three times as many drives.
Compared to what? Assuming 10GB usable space per disk, 10 drives in a RAID 0 configuration gives you 100GB. Both RAID 1+0 and 0+1 would need 20 drives to reach 100GB. RAID 4 and 5 would need 11 drives.
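Just to make the bookkeeping explicit, here's a throwaway Python sketch of that arithmetic (the 10GB disk size and 100GB target are the assumptions from above):

    import math

    DISK_GB = 10     # assumed usable space per disk
    TARGET_GB = 100  # capacity we're sizing for

    data_disks = math.ceil(TARGET_GB / DISK_GB)  # 10 disks of raw data

    print("RAID 0:  ", data_disks)      # no redundancy              -> 10
    print("RAID 1+0:", data_disks * 2)  # every disk mirrored        -> 20
    print("RAID 0+1:", data_disks * 2)  # mirrored stripes, same overhead -> 20
    print("RAID 4/5:", data_disks + 1)  # one parity disk per group  -> 11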
> So your chance of a single disk failure is 3n-1, but the chance of a double disk failure in a single stripe is... well, I'm not gonna sit here and calculate it out.
In RAID 1+0, assuming the array is still alive (i.e. no mirror pair has lost both of its disks yet), the chance that the next disk failure will result in the catastrophic failure of the entire RAID is F/(D-F), where F is the number of already-failed disks and D is the total number of disks in the RAID. Thus with 14 drives in a RAID 1+0 configuration:
1st failure = 0/(14-0) =   0.0% chance of broken RAID
2nd failure = 1/(14-1) =   7.7%   "    "    "    "
3rd failure = 2/(14-2) =  16.7%   "    "    "    "
4th failure = 3/(14-3) =  27.3%   "    "    "    "
5th failure = 4/(14-4) =  40.0%   "    "    "    "
6th failure = 5/(14-5) =  55.6%   "    "    "    "
7th failure = 6/(14-6) =  75.0%   "    "    "    "
8th failure = 7/(14-7) = 100.0%   "    "    "    "
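If you want to try other drive counts, this little Python loop reproduces the table; D is the only knob, and the formula is the same F/(D-F) as above:

    D = 14  # total disks in the RAID 1+0 set

    for k in range(1, D // 2 + 2):  # the (D/2 + 1)th failure is always fatal
        f = k - 1                   # disks already dead before this failure
        # Each prior failure must have hit a different mirror pair (or the
        # array would already be gone), so f of the D-f survivors have a
        # dead partner, and hitting any of them takes out a whole stripe.
        print("failure %d: %5.1f%% chance of broken RAID"
              % (k, 100.0 * f / (D - f)))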
As expected, losing the first drive never results in RAID failure, and if you're really lucky you can lose up to half the drives (as others have mentioned) and still keep humming along. Unlike RAID 4, RAID 1+0 withstands more simultaneous drive failures as you add drives. There's probably a point where the shrinking aggregate MTBF of a large pool of drives catches up with you, though I don't know where that point is.
> I think in the end you might end up with better data protection (although not as much as one would think since your chance of failure is much higher), but it's a heavy premium.
True, these numbers don't translate directly into availability, since you now have almost twice as many drives to worry about compared to a RAID 4 of similar capacity. Still, that beats the 100% chance of a RAID 4 dying once 2 or more disks have failed.
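For what it's worth, a quick Monte Carlo sketch backs up both halves of that (plain Python; it assumes failures hit distinct disks uniformly at random with no rebuilds in between, and the 7-pair layout and trial count are just illustrative):

    import random

    def raid10_survives(pairs, failures):
        # Kill `failures` random disks out of 2*pairs; the array dies as
        # soon as both halves of any mirror pair are gone.
        disks = [(p, side) for p in range(pairs) for side in (0, 1)]
        dead_per_pair = {}
        for p, _ in random.sample(disks, failures):
            dead_per_pair[p] = dead_per_pair.get(p, 0) + 1
        return all(n < 2 for n in dead_per_pair.values())

    TRIALS = 100000
    for failures in range(1, 9):
        alive = sum(raid10_survives(7, failures) for _ in range(TRIALS))
        # A RAID 4 with one parity disk survives exactly one failure, period.
        raid4 = "alive" if failures <= 1 else "dead"
        print("%d failures: RAID 1+0 alive %5.1f%% of runs, RAID 4: %s"
              % (failures, 100.0 * alive / TRIALS, raid4))

After 2 failures this lands around 92.3% (i.e. the 7.7% above), after 3 around 76.9%, and so on; the difference from the table is that these are cumulative odds of still being up, not the odds that the next failure is the fatal one.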
> Since we don't know how much all the EMC and related stuff costs, it's hard to accurately compare the two solutions. I bet it is much higher, and you said you would prefer the Netapps, which is good because you sounded fairly pro-Celerra earlier, but it turns out you were comparing apples and oranges.
Naw, I wasn't trying to compare the two, but rather to suggest a reason why someone might have said that an EMC Celerra is "more scalable" than a Netapp. For my needs, I sure as heck wouldn't trade any of my filers in for an EMC.