On Mon, 3 Apr 2000, Bruce Sterling Woodcock wrote:
Ahh, okay. So you are using three times as many drives.
Compared to what?
RAID 4.
Assuming 10GB usable space per disk, 10 drives in a RAID 0 configuration gives you 100GB. Both RAID 1+0 and 0+1 would need 20 drives to reach 100GB. RAID 4 and 5 would need 11 drives.
The diagram I saw was misleading. You're right; you use the space on both "sides" of the RAID 1 stripe, so really it's twice as much disk space, not three times.
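Just to make the drive-count arithmetic explicit, here's a rough Python sketch of it (mine, not anything NTAP publishes; the 10GB-per-drive and 100GB figures are the ones used above):

def drives_needed(raid_level, target_gb=100, drive_gb=10):
    data_drives = target_gb // drive_gb        # 10 data drives for 100GB
    if raid_level == "0":
        return data_drives                     # striping only, no redundancy
    if raid_level in ("1+0", "0+1"):
        return data_drives * 2                 # every data drive has a mirror
    if raid_level in ("4", "5"):
        return data_drives + 1                 # one drive's worth of parity
    raise ValueError(raid_level)

for level in ("0", "1+0", "0+1", "4", "5"):
    print(f"RAID {level}: {drives_needed(level)} drives")

That prints 10, 20, 20, 11 and 11 drives respectively, matching the numbers above.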
As expected, losing the first drive never results in RAID failure, and if you're really lucky, you can lose up to half the drives (as others have mentioned) and still keep humming along. Unlike RAID 4, RAID 1+0 can withstand more simultaneous drive failures as you add more drives.
No, it doesn't, because your initial chance of a first failure is twice as high. Your figures aren't very helpful; what you need is to use the MTBF to work out your real chance of failure for a given number of drives.
There's probably a point where the diminishing MTBF of a large pool of drives catches up with you, but I don't know when that happens.
Right. That's what I'm trying to determine.
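For what it's worth, here's the kind of back-of-the-envelope MTBF calculation I mean. The 500,000-hour MTBF is a made-up figure, and it assumes independent drives with exponentially distributed failures:

import math

MTBF_HOURS = 500000        # hypothetical per-drive MTBF, purely illustrative
WINDOW_HOURS = 24 * 365    # one year of operation

def chance_of_any_failure(n_drives):
    # n independent drives give a combined failure rate of n/MTBF, so
    # P(at least one failure in the window) = 1 - exp(-n * window / MTBF)
    return 1 - math.exp(-n_drives * WINDOW_HOURS / MTBF_HOURS)

for n in (11, 20):         # RAID 4 vs RAID 1+0 drive counts from above
    print(f"{n} drives: {chance_of_any_failure(n):.1%} chance of a failure per year")

With those (invented) numbers it's roughly 18% per year for 11 drives and 30% for 20, i.e. close to twice the chance of a first failure, as you'd expect while the rates are small.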
I think you might end up with better data protection in the end (although not as much as one would think, since your chance of a failure is much higher), but it comes at a heavy premium.
True, these numbers don't translate directly into availability, since you now have almost twice as many drives to worry about compared to a RAID 4 of similar capacity. Still, this is better than the 100% chance of a RAID 4 dying once 2 or more disks are broken.
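To put some numbers on that (my own counting, so treat it as a sketch): a RAID 1+0 of m mirrored pairs survives k simultaneous failures as long as no pair loses both of its drives, while RAID 4 always dies once 2 drives are gone.

from math import comb

def raid10_survival(pairs, failed):
    # Ways to lose `failed` drives without wiping out any mirror pair:
    # choose which pairs lose a drive, then which side of each pair fails.
    if failed > pairs:
        return 0.0
    return comb(pairs, failed) * 2**failed / comb(2 * pairs, failed)

for k in (2, 3, 4):
    p = raid10_survival(pairs=10, failed=k)   # the 20-drive RAID 1+0 from above
    print(f"{k} failed drives: RAID 1+0 survives {p:.0%} of the time, RAID 4 0%")

For the 20-drive case that works out to about 95% survival with 2 failed drives, 84% with 3, and 69% with 4.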
I don't think that's necessarily true. But it is possible to create a semi-RAID-4+1 configuration by using SnapMirror to regularly mirror one filer onto another. Given what you say, I'd expect NTAP to offer a RAID 4+1 configuration in the future; it would seem relatively easy to implement on a single filer.
Bruce