Parity overhead in terms of space, maybe, but computation goes up at least linearly and might go up faster than that (reconstructing blocks from parity when a disk blows). It's a space-versus-time tradeoff.
Computation time might be negligible compared to the I/O time, so restoring parity on 10 drives may not be much slower than on 2. Also, with just 2 drives you may not be saturating the pipe, which means the bus sits idle between operations. Either way, restoring parity on 10 drives might not be much slower than on 2.
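To make the compute side concrete, here's a minimal sketch (Python, purely illustrative, not anybody's product code) of XOR-based reconstruction: the missing block in each stripe is just the XOR of the surviving blocks, so the arithmetic per stripe grows linearly with the group size, but it's cheap byte-wise XOR that a CPU can usually push far faster than the disks can deliver data.

# Illustrative XOR reconstruction (RAID-4/5 style); block size,
# group size, and data are made up for the example.
import os
from functools import reduce

BLOCK = 4096  # bytes per block (assumed)

def xor_blocks(a, b):
    """Byte-wise XOR of two equal-sized blocks."""
    return bytes(x ^ y for x, y in zip(a, b))

def parity(blocks):
    """Parity block = XOR of all data blocks in the stripe."""
    return reduce(xor_blocks, blocks)

def reconstruct(survivors):
    """Missing block = XOR of everything that survived (data + parity).
    Work per stripe is N-1 block XORs, i.e. linear in group size."""
    return reduce(xor_blocks, survivors)

# 9 data disks + 1 parity disk, one stripe:
data = [os.urandom(BLOCK) for _ in range(9)]
p = parity(data)
# "lose" disk 3 and rebuild it from the other 8 data blocks plus parity
rebuilt = reconstruct(data[:3] + data[4:] + [p])
assert rebuilt == data[3]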
Check out TR-3027 (http://www.netapp.com/technology/level3/3027.html), Equation 3 in particular. For the equipment described, reconstruction of the 10-disk RAID group (9 data + 1 parity) would take about 82% longer than the 2-disk RAID group (1+1). Obviously a different CPU, different disks, a different I/O subsystem, and many other variables will influence this, but at least you've got one data point.
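As a sanity check on why the penalty is well under the naive 9x you might expect from the extra reads, here's a hedged back-of-envelope model (not Equation 3, and all of the bandwidth and capacity numbers below are invented): reconstruction time is bounded by whichever resource saturates first, the individual disks, the shared bus, or the CPU doing the XORs.

# Back-of-envelope reconstruction-time model.  Parameters are
# assumptions for illustration only -- this is NOT Equation 3.
def rebuild_time(n_surviving, disk_gb=4.0,
                 disk_mb_s=10.0,    # streaming rate of one disk
                 bus_mb_s=40.0,     # shared bus / adapter bandwidth
                 xor_mb_s=200.0):   # CPU XOR throughput
    size_mb = disk_gb * 1024
    per_disk = size_mb / disk_mb_s                 # reads run in parallel
    bus = (n_surviving + 1) * size_mb / bus_mb_s   # reads plus the rebuild write
    cpu = n_surviving * size_mb / xor_mb_s         # bytes the CPU must XOR
    return max(per_disk, bus, cpu)                 # slowest resource wins

t2, t10 = rebuild_time(1), rebuild_time(9)         # 1+1 vs. 9+1 groups
print("%.0f s vs. %.0f s (%.0f%% longer)" % (t2, t10, 100 * (t10 / t2 - 1)))

With these made-up numbers the 2-disk group is limited by a single disk's streaming rate while the 10-disk group becomes bus-limited, so the gap is real but nowhere near proportional to the number of disks; plug in your own hardware's numbers and the ratio moves accordingly.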
Custom design again. I don't think you realize how much a board shop will charge you for a custom motherboard. Add to that the cost of short-run assembly.
The fixed cost of a custom board is higher, but the goal of doing one is to lower the marginal cost. Low-end units are generally high volume (otherwise they don't make sense), and at some point you amortize the higher fixed cost over enough units for the custom design to pay off.
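To spell out the amortization arithmetic (with invented dollar figures, since nobody here has quoted real costs), the custom design wins once the per-unit savings repay the extra up-front spend:

# Hypothetical break-even sketch; every number below is made up.
fixed_custom = 250_000.0   # NRE: custom board design, tooling, short-run setup
fixed_standard = 20_000.0  # qualifying an off-the-shelf board
unit_custom = 90.0         # marginal (per-unit) cost with the custom board
unit_standard = 140.0      # marginal cost with the standard board

break_even = (fixed_custom - fixed_standard) / (unit_standard - unit_custom)
print("custom board pays off after about %.0f units" % break_even)  # ~4600

Below that volume the board shop's NRE eats you alive; above it, every additional unit widens the advantage.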
I'm not saying it can't be done, but why break one's back to suit the low-end market with its marginal margins (pun intended)?
Would you rather have 50% of $1 or 1% of $1,000? If you can get the volumes high enough, you can make a lot of money even with thin margins and a custom design. The problem is getting the volumes, and that's why NetApp started in the mid-range rather than the low end, as many others have tried. Effectively serving the low end and surviving the effort requires a big investment, not just in design and tooling but also in marketing, sales, distribution, and support.
-- 
Karl Swartz - Technical Marketing Engineer, Network Appliance
Work: kls@netapp.com   http://www.netapp.com/
Home: kls@chicago.com  http://www.chicago.com/~kls/