True, these numbers don't translate directly into availability, since you now have almost twice as many drives to worry about compared to a RAID 4 set of similar capacity. Still, this beats the 100% chance of a RAID 4 set dying once two or more of its disks have failed.
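To make the comparison concrete, here's a quick back-of-the-envelope sketch (mine, not from any NetApp doc) of the odds that a second random disk failure is fatal. In RAID 1+0 only the failed disk's mirror partner is fatal; in a single RAID 4 group, any second failure is.

```python
def raid10_second_failure_fatal(pairs: int) -> float:
    """RAID 1+0 with `pairs` mirrored pairs (2*pairs disks total).
    After one disk fails, data is lost only if the second failure
    hits that disk's mirror partner: 1 of the 2*pairs - 1 survivors."""
    return 1 / (2 * pairs - 1)

def raid4_second_failure_fatal() -> float:
    """A single RAID 4 group always loses data on a second failure."""
    return 1.0

# e.g. a 14-disk RAID 1+0 (7 pairs): second failure is fatal ~7.7% of the time
print(raid10_second_failure_fatal(7))
print(raid4_second_failure_fatal())
```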
It's a somewhat minor nit, but I feel obliged to point out that while NetApp uses RAID 4, and a RAID 4 set will croak if it suffers a double disk failure, it's not necessarily true that a double disk failure will take out a filer. A sufficiently large volume on a filer will be RAID 4+0 (I think I got the notation right -- it's a concatenation of several RAID 4 groups), and you can have multiple volumes. Therefore, a filer can survive multiple disk failures without losing data, so long as no single RAID 4 group suffers more than one failure.

You could even configure your volumes with a single data disk per RAID group and get the same data integrity as RAID 1+0, though we don't optimize for that case, so performance may suffer.
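The "no more than one failure per group" condition can be put in numbers. This is my own illustrative combinatorics, not anything from NetApp: assuming f simultaneous random failures spread over g RAID 4 groups of s disks each, the filer survives exactly when every failure lands in a distinct group.

```python
from math import comb

def survive_prob(groups: int, disks_per_group: int, failures: int) -> float:
    """Probability that `failures` simultaneous random disk failures
    all land in distinct RAID 4 groups, i.e. no group loses two disks.
    Counts (ways to pick the hit groups) * (one disk within each)
    over all ways to choose the failed disks."""
    total = groups * disks_per_group
    if failures > groups:
        return 0.0  # pigeonhole: some group must take two hits
    return comb(groups, failures) * disks_per_group**failures / comb(total, failures)

# e.g. 2 groups of 2 disks, 2 failures: survives 4 of the 6 possible pairs
print(survive_prob(2, 2, 2))
```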
Anyone who is interested enough to still be reading this thread might want to read an old tech report of mine, especially section 3:
http://www.netapp.com/tech_library/3027.html
Some of the numbers are dated (they're based on an F630 and our first generation of 9 GB FC-AL disks) and a few pieces of section 3.4 are obsolete, but the basic ideas are still good.
--
Karl Swartz              Network Appliance Engineering
Work: kls@netapp.com     http://www.netapp.com/
Home: kls@chicago.com    http://www.chicago.com/~kls/