Almost. Each Promise Ultra/33 card has two channels, each of which can support two drives, with the same constraints and speed as the PIIX4 chipset on modern motherboards.
That's what I meant, even if it may not have been what I said :-) Each card has capacity for four drives, so put only two drives on each card, one master per channel, to maximise performance.
Linux software RAID0, raidtools-0.50beta10-2.
With a recent MMX-enabled CPU, RAID5 parity calculation really flies, so RAID5 would be the way to go, hence my minimum of three drives. OTOH Linux NFS performance is lousy in the extreme, although far better than it was [1].
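A three-drive RAID5 set like that would be described in /etc/raidtab (raidtools-era syntax) roughly as follows. The device names are assumptions: with two Promise cards probed after the onboard controller, the channel masters typically show up as hde, hdg, hdi and hdk, but check your own boot messages.

```
# /etc/raidtab sketch: 3-drive RAID5, one master per Promise channel.
# Device names (hde1/hdg1/hdi1) assume two add-in cards probed after
# the onboard IDE controller -- adjust to your own probe order.
raiddev /dev/md0
        raid-level              5
        nr-raid-disks           3
        persistent-superblock   1
        parity-algorithm        left-symmetric
        chunk-size              32
        device                  /dev/hde1
        raid-disk               0
        device                  /dev/hdg1
        raid-disk               1
        device                  /dev/hdi1
        raid-disk               2
```

You'd then initialise the array with mkraid /dev/md0 and put a filesystem on it as usual.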
For the sort of volumes you'd be aiming at with this sort of device, you could justify a custom motherboard. Commodity CPU/RAM etc., but four separate PCI buses, each with four Ultra/33 controllers attached. PCI controller chips are a couple of bucks apiece, as are the IDE controller chips. With the whole lot soldered onto the motherboard, you could use the same board from the bottom end right up to the mid range. In volume the board would cost $150; add $100 for a CPU and another $200 for RAM and you should be well away. Total chassis cost of $500, to which you just need to add between 3 and 16 drives. System cost of $1000-3000. Not at all bad really.
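The arithmetic behind those totals can be sketched as below. The board/CPU/RAM prices are the ones from the post; the $50 for case and PSU and the ~$150 per-drive figure are my assumptions, chosen so the results land in the quoted $1000-3000 range.

```python
# Rough bill-of-materials sketch for the proposed custom board.
BOARD = 150   # motherboard: 4 PCI buses, 16 Ultra/33 channels (quoted)
CPU   = 100   # commodity CPU (quoted)
RAM   = 200   # RAM (quoted)
MISC  = 50    # case, PSU etc. -- assumed, to round the chassis to $500
DRIVE = 150   # assumed street price per IDE drive

def chassis_cost():
    """Cost of the driveless chassis: board + CPU + RAM + sundries."""
    return BOARD + CPU + RAM + MISC

def system_cost(drives):
    """Total cost for a box populated with `drives` IDE drives (3..16)."""
    return chassis_cost() + drives * DRIVE

# 3 drives -> $950 and 16 drives -> $2900, i.e. roughly the
# $1000-3000 range quoted above.
```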
[1] Before anyone flames me: we are a committed Linux house. Unfortunately Linux lacks decent NFS caching from a client point of view (1.5Mb/s off an unloaded 540 across switched 100bT sucks bigtime; kernel v2.2.7), and from a server point of view both nfsd and knfsd suck as well. Oh, and the lack of decent performance measurement tools is a pain too: nfsstat -m, iostat etc. aren't great, but they're sadly missed. Some nice VFS profiling tools would be very handy as well. [sigh] Give it a year, I suppose.