"Steve" == Steve Gremban gremban@msp.sc.ti.com writes:
Steve> On an EMC if a disk goes out on one side of a mirror (we'll
Steve> call it side A), side A will be offline (all of the disks in
Steve> side A, not just the bad drive). Therefore, any new writes
Steve> will be seen on side B but not on the drives of side A and
Steve> you can't use side A drives to recover from subsequent side B
Steve> drive failures. My EMC SE said that only if both sides of the
Steve> mirror have simultaneous drive failures taking them both down
Steve> at the same time is there a chance of recovering and that it
Steve> would take a lot of work. (this assumes that the bad drives
Steve> weren't mirrors of each other)
There are two ways of combining RAID 0 and RAID 1. I call these RAID 0+1 and RAID 1+0, though I've seen people use the name RAID 1+0 for what I'd call RAID 0+1. Technically, though, they are different things; the difference is the order in which the two operations are applied.
If you mirror all your disk pairs first, and then you stripe across the pairs, you have RAID 1+0. In this case, you can lose up to 50% of your disks, as long as you don't lose both members of a pair.
If you split your disks into two sets, stripe each set, and then mirror the two stripe sets, you have RAID 0+1. In this case, losing one disk in a stripe set brings that whole set of disks offline, and a subsequent disk failure in the other stripe set takes out the entire array.
So,
    RAID 0+1 = mirror -> stripe -> disk
    RAID 1+0 = stripe -> mirror -> disk
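To make the difference concrete, here's a toy Python sketch (my own illustration; the function names and disk-numbering scheme are just assumptions for the example) that checks whether each layout survives a given set of failed disks, using the naive behavior described above, i.e. in 0+1 a single failure takes its whole stripe set offline:

    def raid10_survives(failed, n_disks):
        # Disks are paired (0,1), (2,3), ...; the array survives as
        # long as no mirrored pair has lost both members.
        for pair in range(n_disks // 2):
            if 2 * pair in failed and 2 * pair + 1 in failed:
                return False
        return True

    def raid01_survives(failed, n_disks):
        # Disks 0..n/2-1 form stripe set A, the rest form set B; one
        # failure takes its whole set offline, so the array survives
        # only if at least one set is completely intact.
        half = n_disks // 2
        side_a_ok = all(d not in failed for d in range(half))
        side_b_ok = all(d not in failed for d in range(half, n_disks))
        return side_a_ok or side_b_ok

    # With 8 disks: in 1+0 you can lose one disk from every mirrored
    # pair (here 0, 2, 4, 6) and keep running.
    print(raid10_survives({0, 2, 4, 6}, 8))   # True

    # In 0+1 (sides {0..3} and {4..7}), losing just one disk on each
    # side is fatal, even though they hold different data.
    print(raid01_survives({0, 5}, 8))         # False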
Your odds of a double-disk failure taking out the array are much higher with RAID 0+1 than with RAID 1+0 once you have more than 4 disks. The only advantage I know of to using 0+1 is that it lets you combine disks of different sizes into equal-sized stripe sets, which can then be mirrored.
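A quick back-of-the-envelope check of that claim (my own arithmetic, assuming the same naive failure model as above): once one disk has failed, 1+0 only dies if the second failure hits that disk's mirror partner, while 0+1 dies if the second failure hits any disk on the surviving side.

    # Probability that a second random disk failure kills the array,
    # given that one disk has already failed (n = total disk count).
    for n in (4, 8, 16, 32):
        p_10 = 1 / (n - 1)          # must hit the one mirror partner
        p_01 = (n // 2) / (n - 1)   # any disk on the other stripe set
        print("n=%2d  1+0: %.3f  0+1: %.3f" % (n, p_10, p_01))

The 1+0 number falls toward zero as you add disks, while the 0+1 number stays pinned near one half, which is why the gap widens so quickly once you go past a handful of disks.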
AFAIK, Solstice DiskSuite 4.2 (maybe earlier) provides 1+0 functionality in this sense: you can yank half the disks out of your array, as long as you don't yank both disks of any mirrored pair. This has been demonstrated empirically; I haven't found SDS documentation that actually says SDS supports this behavior.
EMC's RAID-S is a specialized RAID-5 implementation that offloads the XOR parity calculations to the drives (among other things, see: http://esdis-it.gsfc.nasa.gov/MSST/conf1996/C4_3Quinn.html).
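For what it's worth, the parity math RAID-S offloads is plain XOR, as in ordinary RAID-5; here's a toy sketch (my own illustration, nothing EMC-specific) of how a lost block gets rebuilt from the parity:

    def xor_blocks(blocks):
        # XOR a list of equal-length blocks together, byte by byte.
        result = bytearray(len(blocks[0]))
        for block in blocks:
            for i, byte in enumerate(block):
                result[i] ^= byte
        return bytes(result)

    data = [b"AAAA", b"BBBB", b"CCCC"]
    parity = xor_blocks(data)           # written to the parity drive
    # Pretend the second data block is lost; rebuild it by XORing the
    # parity with the surviving data blocks.
    rebuilt = xor_blocks([data[0], data[2], parity])
    print(rebuilt == data[1])           # True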
What does this have to do with Netapp? :)
j.