On 06/12/99 17:04:02 you wrote:
DFS is a pretty small market. AFS isn't much better, assuming you mean Andrew. If you're talking Apple, now there's a potential market.
Apple toooooo. In fact we have over 400 of those puppies in our dept alone, and when we bought the very first FAServer450 (No, not the F540, the 450 with 2G disks!)
Actually, before the 450 was the 400, which could only hold half as many disks (7). If you wanted 14 you had to buy a second cabinet and run an external SCSI cable to it. I thought the cabinets looked pretty cool, but the 450 was certainly impressive. I've never seen a bigger PC tower case in my life!
Also, for a while 1G disks were supported as well. (Actually, I suspect they still are, if you had one lying around that you could fit!) But by the time Netcom got its first filer in late 1993 2G disks were the norm.
in early '94 that was the very first request I had with their then V.P. On the other hand, I have mixed feelings when requesting new features and additional protocol support. One of the key strengths of an appliance, in my book, is its simplicity of usage and administration. When the first FAServers rolled out they had fewer than 50 commands, a 300-page admin book, and a hell of a lot fewer lines of code compared to traditional UNIX systems.
However, the admin book, while thin, was seriously lacking in a lot of important details!
The appliance is still really simple *except* in the edge cases, and then you have to know exactly what you're doing. This is the trade-off you get for faster response when there's a failure -- you can fix the problem yourself.
As we request all these neat features I wonder how much we are shooting ourselves in the foot.
I think so long as the filer sticks to *file service* and the related protocols it will be fine. For the most part, having the other features "around" doesn't hurt performance for the other functions. In other words, having the trunking code or the PC-NFS support doesn't hurt your NFS access performance when you aren't using them (or even when you are). If the filer got into other areas -- for example, trying to be a DNS server and an SMTP server on the same box -- then I'd be annoyed. (I must reluctantly accept the idea that the filer should be a Domain Controller for NT, simply because that's a necessary component of its file service.)
Once I heard that just adding ATM support on the NIC increased the number of lines of code by an order of magnitude!
I doubt ATM alone was that bad. 10 times, no. 2 times, maybe. :) I don't know exactly how small the 1.0 version code was; I suppose it could have taken up as little as 10% of the original floppy. So today's code might be 10 times bigger now, but I don't think that's all due to ATM. I suspect CIFS support must be a big chunk. But you're still talking a very small part of the filer's overall memory reserve, so bigger executables aren't really a factor in performance.
Most of the code-size concerns are a maintenance issue. The code is all supposedly highly segmented and well integrated, though, so all those extra lines don't appreciably slow down the data path from the network interface down to WAFL compared to the original code. There is more work to be done in a multiprotocol environment, but it is well worth the tradeoff.
Bruce