On Wed, 5 May 1999, tkaczma@gryf.net wrote:
>> The price differences between SCSI and IDE are not that significant anymore, but I was wondering why someone didn't come up with an IDE
Wrongo. SCSI is still way more expensive than EIDE. It's just that EIDE disks are meant for cheap-and-cheerful, low-requirements users, while SCSI occupies the mid-to-high disk market. But we're talking about a VLE filer here, so we can make different assumptions, maybe.
>> appliance.
> The argument that I would come up with is that you can only have 2 devices on one bus, much too few for a RAID 5 set. This also
Bzzt. Irrelevant, add more busses because EIDE busses are also quite cheap.
> applies to your request for a 3 disk appliance. The power of striping comes from the number of drives: the more drives you add, the smaller the parity overhead. With 3 drives, 33.3% is "wasted" on parity. With 14 drives, the "waste" goes down to about 7%.
Parity overhead in terms of space, maybe, but computation goes up at least linearly and might go up faster (reconstructing blocks from parity when a disk blows). Tradeoff time.
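A quick back-of-envelope sketch of the space side (Python; the only assumption is plain RAID 5, where one drive's worth of capacity per group goes to parity, so the overhead fraction is 1/N):

    # RAID 5 space overhead: one drive's worth of parity per N-drive group,
    # so the "wasted" fraction is 1/N.
    for n in (3, 5, 8, 14):
        print(f"{n:2d} drives: {100.0 / n:.1f}% of raw capacity goes to parity")

    #  3 drives: 33.3% of raw capacity goes to parity
    #  5 drives: 20.0% ...
    #  8 drives: 12.5% ...
    # 14 drives:  7.1% ...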
>> (And no, I'm not interested in the philosophical war of IDE vs SCSI. Fact: IDE is currently cheaper than SCSI. Fact: I want this box to be cheap. Check and mate. :))
I agree. But I'd do things slightly differently.
> No mate, and probably not even check. SCSI drives are not THAT much more expensive anymore. You must remember that you can have only 2 IDE drives
Oh yeah they are. Here in the UK I spent 360 quid on a UW 9GB Seagate (end-user price), where a 10GB UDMA EIDE drive cost 190 quid (end-user price).
> on the bus, among other limitations. The money you save on drives will probably be made up in the complexity of the controllers and software. Everything costs money; the drives are only a fraction of the cost.
I've seen PCI-EIDE controllers advertised for about the same as SCSI host adapters, but I'd put them together on the motherboard with an integral network interface (10/100Base-TX).
Performance on SCSI is a bit better, but not *that* much better. I'd bet a good board designer could easily put 3 EIDE controllers on a PCI bus, giving you (at 4 devices per EIDE controller, last time I counted) 12 disk connections.
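For what it's worth, the bus arithmetic roughly checks out. A minimal sanity sketch (Python; ~133 MB/s is the 32-bit/33MHz PCI peak, and the per-drive sustained rate is an assumed late-90s figure):

    # Sanity check on "3 EIDE controllers on one PCI bus".
    controllers = 3
    drives_per_controller = 4            # 2 channels x 2 devices each
    drives = controllers * drives_per_controller   # 12 disk connections

    pci_ceiling_mbs = 133.0              # 32-bit/33MHz PCI peak
    per_drive_mbs = 10.0                 # ASSUMED sustained rate per drive
    aggregate = drives * per_drive_mbs
    print(f"{drives} drives, ~{aggregate:.0f} MB/s aggregate "
          f"vs ~{pci_ceiling_mbs:.0f} MB/s PCI ceiling")
    # -> 12 drives, ~120 MB/s vs ~133 MB/s: tight, but it fits on one bus.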
The motherboard could have the EIDE connections, one cheap integral SCSI host adapter for tapes, one network interface and one set of battery-backed memory (just back up all memory - don't have separate NVRAM, we're talking small and compact here).
Given a K6-3 or Celeron and DOT, this baby would cost quite a bit less in parts than the latest and greatest filers, but wouldn't be an embarrassment to NetApp (of that I'd be quietly confident).
You could make the motherboard quite small and make the whole thing into a single rack-mount shelf / deskside tower style thing, with the disks presented at the front and the power supply, motherboard access, serial line, floppy drive, network connector, and SCSI connector round the back.
-- End of excerpt from tkaczma@gryf.net
On a side note, I have seen two vendors who do offer IDE RAID solutions. I can't remember who offhand, but they're out there...
-----------
Fujitsu - Nexion, St. Louis, MO
Jay Orr
(314) 579-6517
On Wed, 5 May 1999, mark wrote:
> Wrongo. SCSI is still way more expensive than EIDE.
Just an example of RETAIL market prices for drives:
9.1GB EIDE Ultra DMA/66 (AC29100), 7200 RPM, 9.5 ms seek, 2MB buffer, 3-year warranty: $188
9.1GB SCSI Wide Differential (3391WD): $219
That's about a 15% difference in price. I wouldn't call that much. I think you're a bit out of touch with the market.
> Bzzt. Irrelevant, add more busses because EIDE busses are also quite cheap.
Custom-designed boards cost a lot of money in small quantities. Increased code complexity also costs more money to develop and maintain. Bzzt. Why don't you start a company and do it, if it's so great and you have such a great niche for the product?
> Parity overhead in terms of space, maybe, but computation goes up at least linearly and might go up faster (reconstructing blocks from parity when a disk blows). Tradeoff time.
Computation time might be negligible compared to the I/O time, so restoring parity on 10 drives may not be much slower than on 2. Also, with just 2 drives you may not be saturating the pipe, meaning you're wasting time between operations on the bus. Again, restoring parity on 10 drives might not be much slower than on 2.
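To put rough (and entirely assumed) numbers on that, here's a minimal rebuild-time model in Python. It treats a rebuild as max(I/O time, XOR time), assuming the compute overlaps the I/O; none of the rates below are measurements:

    # Crude rebuild-time model: read every surviving drive in full, XOR the
    # blocks together, assume compute overlaps I/O. All rates are ASSUMED
    # illustrative numbers, not measurements.
    disk_mb = 9.1 * 1024          # one 9.1GB member drive
    per_drive_mbs = 10.0          # assumed sustained read rate per drive
    bus_ceiling_mbs = 33.0        # assumed shared-bus limit on the rebuild path
    xor_mbs = 100.0               # assumed CPU XOR throughput

    def rebuild_minutes(n_drives):
        survivors = n_drives - 1
        data_mb = survivors * disk_mb                 # all survivors read in full
        io_s = data_mb / min(survivors * per_drive_mbs, bus_ceiling_mbs)
        xor_s = data_mb / xor_mbs                     # XOR survivors into the spare
        return max(io_s, xor_s) / 60                  # whichever dominates

    for n in (2, 10):
        print(f"{n:2d}-drive group: ~{rebuild_minutes(n):.0f} min rebuild")
    # -> ~16 min for a 2-drive group, ~42 min for a 10-drive group. With
    #    these numbers the XOR never dominates: the rebuild is bus/disk
    #    bound either way, which is the "compute is negligible" point.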
> Oh yeah they are. Here in the UK I spent 360 quid on a UW 9GB Seagate (end-user price), where a 10GB UDMA EIDE drive cost 190 quid (end-user price).
See prices above. Just because you chose to pay that much for a drive doesn't prove that it's a fair market price.
> I've seen PCI-EIDE controllers advertised for about the same as SCSI host adapters, but I'd put them together on the motherboard with an integral network interface (10/100Base-TX).
Exactly, but with one IDE controller you can have only 4 devices on 2 buses; with narrow SCSI you can have 7, and with wide, 15. The speed differences might not be great. I dunno, I haven't looked at the latest speed specs of SCSI vs. ATA.
> Performance on SCSI is a bit better, but not *that* much better. I'd bet a good board designer could easily put 3 EIDE controllers on a PCI bus, giving you (at 4 devices per EIDE controller, last time I counted) 12 disk connections.
Custom designs cost much more money.
> The motherboard could have the EIDE connections, one cheap integral SCSI host adapter for tapes, one network interface and one set of battery-backed memory.
Why SCSI? There are IDE tapes out there.
> You could make the motherboard quite small and make the whole thing into a single rack-mount shelf / deskside tower style thing, with the disks presented at the front and the power supply, motherboard access, serial line, floppy drive, network connector, and SCSI connector round the back.
Custom design again. I don't think you realize how much a board shop will charge you for a custom motherboard. Add to that the costs of short-run assembly.
I'm not saying it can't be done, but why break one's back to suit the low-end market with marginal margins (pun intended)? Use SCSI and semi-off-the-shelf components, and sell your product at a reasonable markup to people who can afford it. For others there are the Cobalts of this world. Look into Cobalt; you may find what you want: a cheap low-end server.
Tom
>> Parity overhead in terms of space, maybe, but computation goes up at least linearly and might go up faster (reconstructing blocks from parity when a disk blows). Tradeoff time.
> Computation time might be negligible compared to the I/O time, so restoring parity on 10 drives may not be much slower than on 2. Also, with just 2 drives you may not be saturating the pipe, meaning you're wasting time between operations on the bus. Again, restoring parity on 10 drives might not be much slower than on 2.
Check out TR-3027 (http://www.netapp.com/technology/level3/3027.html), Equation 3 in particular. For the equipment described, reconstruction of the 10-disk RAID group (9 data + 1 parity) would take about 82% longer than the 2-disk RAID group (1+1). Obviously a different CPU, different disks, a different I/O subsystem, and many other variables will influence this, but at least you've got one data point.
> Custom design again. I don't think you realize how much a board shop will charge you for a custom motherboard. Add to that the costs of short-run assembly.
The fixed cost of a custom board is higher, but the goal of doing one would be to lower the marginal cost. Low-end units are generally high volume (otherwise they don't make sense), and at some point you amortize the higher fixed cost over enough units for the custom design to pay off.
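A minimal sketch of that amortization argument (Python; the dollar figures are made-up placeholders, not real quotes):

    # Custom vs. off-the-shelf: higher fixed cost (NRE) traded for lower
    # marginal cost. All figures are MADE-UP placeholders.
    custom_fixed, custom_unit = 250_000.0, 180.0   # big NRE, cheap per unit
    shelf_fixed, shelf_unit = 10_000.0, 320.0      # small NRE, pricier per unit

    # Volume at which total cost curves cross.
    break_even = (custom_fixed - shelf_fixed) / (shelf_unit - custom_unit)
    print(f"Custom design pays off beyond ~{break_even:,.0f} units")
    # -> ~1,714 units here; below that volume, off-the-shelf wins.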
> I'm not saying it can't be done, but why break one's back to suit the low-end market with marginal margins (pun intended)?
Would you rather have 50% of $1 or 1% of $1,000? If you can get the volumes high enough, then you can make a lot of money even with thin margins and a custom design. The problem is getting the volumes, and that's why NetApp started in the mid-range rather than low-end as many others have tried. Effectively serving the low-end and surviving the effort requires a big investment, in design and tooling and also in marketing, sales, distribution, and support.
--
Karl Swartz - Technical Marketing Engineer, Network Appliance
Work: kls@netapp.com    http://www.netapp.com/
Home: kls@chicago.com   http://www.chicago.com/~kls/