I've been moving around some of our applications to make better use of our netapps and storage, and as part of this shuffle, some things just gotta be said...
(I will preface this by saying that I really like my filers; both of them work very well in terms of reliability. It's just the nature of the beast to criticize when something doesn't work like you want it to, and to be silent when it's doing its job. :)
o The lack of *good* automated, self-sufficient backup and management tools on the filer. NDMP is fine, but the *filer* needs to truly be an appliance, and needs to be able to do this on its own. No rsh monkeying around, just nice, consistent, in-the-background, web-managed backup. Follow a schedule, run some "scripted" jobs, mail or SNMP-trap the results. Not rocket science.
o The lack of widespread tape changer support, or at least tweakable changer support. For various reasons, one of my filers will have 2 ADIC DLT tape changer systems on it, which doesn't do me a speck of good from the filer side, as I still have to issue the commands to swap tapes around. Gag.
o CIFS support has got some big holes.
- The inability to belong to multiple domains. Not all my domains trust each other. Oops, now the filer is essentially unusable for some of them.
- For me, at least, CIFS management with no PDC is not intuitive at all. I would pay real money for somebody to explain to me how to use autoexnt, with a filer that has no PDC and no WINS access, to permanently affix a share that is usable by more than one person.
I can mount the share in autoexnt, but it's not accessible to anybody. (I use autoexnt with the interactive option, and the connection is successful, yet I always get "access denied" when trying to access it. However, if I attach it with the same login account as the autoexnt service, with the same username/password pair, it's fine.)
The CIFS tracer thingie on the Netapp tells me that both accesses are being mapped to the same Unix user account, but one works, the other doesn't...
I have mailed support this one, but nary a response...
- Even the ability to have a different network interface belong to a different domain would be useful. Not a panacea, but at least useful.
o The general suckiness of NFS now being insufficient to back up the filer with CIFS. Probably not a solvable problem, but it still sucks.
o The apparent lack of CIDR support in routing tables. I realize that routing requirements for a filer are probably not super significant, but something better than /24 would be nice.
o NOW is better, but quicker navigation tools would help. I also do not understand why documentation requires a NOW login.
o I would give a body part for a small filer with three 18GB or 36GB drives, self-contained, no shelves. Medium performance, medium capacity, hot-swappable, one 100Mbit Ethernet port, just plug and go. 8MB NVRAM, 256MB RAM. I hate adding file servers for that niche where even a small 720 is physically too much hassle, and the price point is out the roof. Make the drives user-supplied, off a NetApp-recommended list, and kaching, give me an even dozen. Maybe leave in a slot for a 5.25" tape drive.
Heck, I saw some very scary numbers from Consensys for their IDE RAID product; use IDE drives instead of SCSI in this bad boy and cut them costs down. WD makes 7200RPM drives now, at 18GB a pop, and five of 'em fit in a 3-high bay.
(And no, I'm not interested in the philosophical war of IDE vs SCSI. Fact: IDE is currently cheaper than SCSI. Fact: I want this box to be cheap. Check and mate. :))
o It would be cool if config information, perhaps even the OS, was on an optional PCMCIA card formatted in a fashion readable on a notebook. Nothing like not having the right serial cable wired up and having to type all those stupid commands by hand when you make a booboo, plus the grief of having to boot off two floppies.
o Or add an option to read rc from disk 3 on a DOS formatted floppy.
o Fine, you don't want to use something like PCMCIA cards; then, since NetApp only allows its own network cards, allow TFTP of the rc file over one of the interfaces.
I guess the CIFS stuff is what's torquing me off right now the most, I don't want to have to slap a bunch of storage on some NT boxes when I have this filer sitting there perfectly suited (in theory) to do the job..., and I refuse to purchase 1 filer per domain.
I agree with you on many points, but ...
On Tue, 4 May 1999, Jaye Mathisen wrote:
o CIFS support has got some big holes.
- The inability to belong to multiple domains. Not all my domains trust each other. Oops, now the filer is essentially unusable for some of them.
Can NT boxes do this?
- Even the ability to have a different network interface belong to a
different domain would be useful. Not a panacea, but at least useful.
Can NT boxes do this as well?
o The general suckiness of NFS now being insufficient to back up the filer with CIFS. Probably not a solvable problem, but it still sucks.
If NFS sucks so much, use CIFS only. Apparently they both suck in ways that are different from one another, because you ARE using them to complement each other.
Heck, I saw some very scary numbers from Consensys for their IDE RAID product; use IDE drives instead of SCSI in this bad boy and cut them costs down. WD makes 7200RPM drives now, at 18GB a pop, and five of 'em fit in a 3-high bay.
The price differences between SCSI and IDE are not that significant anymore, but I was wondering why someone didn't come up with an IDE appliance. The argument that I would come up with is that you can only have 2 devices on one bus, much too few for a RAID 5 set. This also applies to your request for a 3-disk appliance. The power of striping comes from the number of drives: the more drives you add, the smaller the parity overhead. With 3 drives, 33.3% is "wasted" on parity. With 14 drives the "waste" goes down to about 7%.
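To put rough numbers on that, the parity arithmetic works out like this (a trivial Python sketch, nothing filer-specific about it):

    # Single-parity RAID (RAID 4/5 style): with n drives in the set, one
    # drive's worth of capacity goes to parity, so the "wasted" fraction
    # is simply 1/n.
    def parity_overhead(n_drives):
        return 1.0 / n_drives

    for n in (3, 5, 8, 14):
        print("%2d drives: %4.1f%% parity overhead" % (n, 100 * parity_overhead(n)))
    # 3 drives -> 33.3%, 14 drives -> ~7.1%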
(And no, I'm not interested in the philosophical war of IDE vs SCSI. Fact: IDE is currently cheaper than SCSI. Fact: I want this box to be cheap. Check and mate. :))
No mate, and probably not even check. SCSI drives are not THAT much more expensive anymore. You must remember that you can have only 2 IDE drives on the bus, among other limitations. The money you save in drives will probably be made up in the complexity of the controllers and software. Everything costs money, the drives are only a fraction of the cost.
Nothing like not having the right serial cable wired up and having to type all those stupid commands by hand when you make a booboo, plus the grief of having to boot off two floppies.
Well, instead of Flash cards carry a serial cable and write macros in your terminal program so you don't have to type commands by hand. Try a terminal program like Telemate or Telix. I assume you're a PC user.
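If you'd rather script it than record terminal macros, the same idea works in Python with pyserial; the port name and the commands below are only placeholders, so adjust them to whatever you actually need to replay:

    # Replay a canned list of console commands over the serial port.
    # Requires pyserial; /dev/ttyS0 and the command strings are placeholders.
    import serial

    COMMANDS = [b"version\r", b"ifconfig -a\r"]   # whatever you'd otherwise retype

    console = serial.Serial("/dev/ttyS0", 9600, timeout=2)
    for cmd in COMMANDS:
        console.write(cmd)
        print(console.read(4096).decode("ascii", "replace"))
    console.close()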
o Or add an option to read rc from disk 3 on a DOS formatted floppy.
But it reads it from the drive array, so what's the point?
o Fine, you don't want to use something like PCMCIA cards, then since Netapp only allows its network cards, allow tftp of the rc file from one of the interfaces.
Why? You should have it on your drives. Besides, you can upload it using the console connection.
I guess the CIFS stuff is what's torquing me off right now the most, I don't want to have to slap a bunch of storage on some NT boxes when I have this filer sitting there perfectly suited (in theory) to do the job..., and I refuse to purchase 1 filer per domain.
Can your NT boxes each be in two different domains at the same time? Talk to B. G. III about using such a decrepit protocol as LanManager.
Tom
tkaczma@gryf.net wrote:
On Tue, 4 May 1999, Jaye Mathisen wrote:
Heck, I saw some very scary numbers from Consensys for their IDE RAID product; use IDE drives instead of SCSI in this bad boy and cut them costs down. WD makes 7200RPM drives now, at 18GB a pop, and five of 'em fit in a 3-high bay.
The price differences between SCSI and IDE are not that significant anymore, but I was wondering why someone didn't come up with an IDE appliance. The argument that I would come up with is that you can only have 2 devices on one bus, much too few for a RAID 5 set. This also applies to your request for a 3-disk appliance. The power of striping comes from the number of drives: the more drives you add, the smaller the parity overhead. With 3 drives, 33.3% is "wasted" on parity. With 14 drives the "waste" goes down to about 7%.
(And no, I'm not interested in the philosophical war of IDE vs SCSI. Fact: IDE is currently cheaper than SCSI. Fact: I want this box to be cheap. Check and mate. :))
No mate, and probably not even check. SCSI drives are not THAT much more expensive anymore. You must remember that you can have only 2 IDE drives on the bus, among other limitations. The money you save in drives will probably be made up in the complexity of the controllers and software. Everything costs money, the drives are only a fraction of the cost.
Tom
The real issue with SCSI vs IDE is bus utilization.
The SCSI protocols were designed so that the bus could be freed for other users between the time a request for data was sent to a drive and the time the data was returned. Thus commands could be sent to several drives before the first drive could reply with the requested data.
The IDE protocols were designed for simplicity, and the result is that the bus stays allocated to a single request for that request's entire duration. It trades time for simplicity.
Due to the sporadic disk access behavior of workstations, IDE and SCSI drives perform similarly.
However, due to the much more consistent but varied disk access of file servers, the SCSI bus out-performs the IDE bus because of its time efficiency. The only way to combat the IDE bus inefficiency would be to have one IDE bus for each drive, and having one IDE bus per drive would eliminate the minor cost advantage that IDE drives have over SCSI drives.
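A toy model makes the effect visible; the service times below are made up, purely to illustrate why disconnect/reconnect matters:

    # Toy model of one shared bus with n drives.  Each request needs
    # 'seek' ms of drive time and 'xfer' ms of bus time.
    # IDE-style: the bus is held for the whole request (seek + transfer).
    # SCSI-style: the drive disconnects during the seek, so seeks overlap
    # and only the transfers serialize on the bus.
    def requests_per_sec(n_drives, seek_ms=8.0, xfer_ms=2.0, disconnect=False):
        if disconnect:
            per_ms = min(n_drives / (seek_ms + xfer_ms), 1.0 / xfer_ms)
        else:
            per_ms = 1.0 / (seek_ms + xfer_ms)
        return 1000.0 * per_ms

    for n in (1, 2, 4):
        print(n, "drive(s):",
              int(requests_per_sec(n)), "req/s IDE-style vs",
              int(requests_per_sec(n, disconnect=True)), "req/s SCSI-style")
    # One drive: a dead heat.  Four drives: the disconnecting bus wins ~4x,
    # until the bus itself becomes the bottleneck.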
Note: Another issue that has not been mentioned, is system uptime. I have not seen an IDE drive that has been adapted to support hotplugging. This is critical for online maintenance.
--
Matthew Lee Stier                     Fujitsu Network Communications
Unix Systems Administrator            Two Blue Hill Plaza
Ph: 914-731-2097  Fx: 914-731-2011    Sixth Floor
Matthew.Stier@fnc.fujitsu.com         Pearl River, NY 10965
In toasters@mathworks.com, you wrote:
Having one IDE bus per drive would eliminate the minor cost advantage that IDE drives have over SCSI drives.
Probably not. Opening my trade mag at the first ad I come to, the prices are:
8.5GB Seagate UDMA IDE drive                 £84
9GB Quantum UW SCSI drive                    £235
Promise Ultra33 2-channel IDE controller     £20
So say we went for a minimum of 3 devices with expansion to 4: we would need 2 Promise cards (each card has 2 channels, on which we put only 1 drive each) and 3 IDE drives. Total cost <£300. With a SCSI system you would be looking at £700 before you added in a SCSI controller.
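For the record, the sums behind those figures (prices as quoted above):

    # Prices straight from the ad above (GBP).
    ide_drive, scsi_drive, promise_card = 84, 235, 20

    ide_build  = 3 * ide_drive + 2 * promise_card   # 3 drives, one per channel,
                                                    # spare channels to grow to 4
    scsi_build = 3 * scsi_drive                     # drives only, no HBA included

    print("IDE build:  GBP", ide_build)             # 292
    print("SCSI build: GBP", scsi_build)            # 705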
With regard to performance, the lower overhead of IDE actually makes it faster in many cases, especially if, as here, you only use one device per bus. At the high end you have to go SCSI/fibre etc.; the biggest boards I've seen have had 20 PCI slots, split over a number of separate busses. That effectively limits you to 40 drives. Driving x PCI busses and y cards, each with z drives on, will also get pretty icky.
Chris
So say we went for a minimum of 3 devices with expansion to 4: we would need 2 Promise cards (each card has 2 channels, on which we put only 1 drive each) and 3 IDE drives. Total cost <£300. With a SCSI system you would be looking at £700 before you added in a SCSI controller.
Almost. Each promise U/33 has two controllers which can support two drives each with the same constraints and speed as modern motherboard PIIX4 chipsets. Interleaved stripe and/or mirror sets yield 2x individual drive r/w performance:
    ------- U/33 -------      ------- U/33 -------
     ide2        ide3          ide4        ide5
     M    S      M    S        M    S      M    S
     S1a  S2a    S1b  S2b      S3a  S4a    S3b  S4b
I am getting sustained 20MB/s write, 22MB/s read on the 2 x 7200RPM, 14GB IBM drive set on S1a+S1b, and 12.5MB/s write, 15MB/s read on the 2 x 5400RPM, 6.4GB IBM drive set on S2a+S2b.
That's Linux RAID0 with raidtools-0.50beta10-2. If you add a second Promise U/33 and make all the drives 14GB IBMs, you get 56GB of RAID0+1 for about $2500 US retail, or $0.043/MB.
Note that if the stripes are S1a+S2a, S1b+S2b then speed is 1/2 individual drive.
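For what it's worth, the $/MB figure falls out of the layout above; assuming eight 14GB drives and the quoted $2500:

    # RAID0+1 over 8 x 14GB drives: mirroring halves the raw capacity.
    drives, drive_gb, total_cost = 8, 14, 2500
    usable_mb = (drives * drive_gb // 2) * 1024
    print("$%.4f/MB" % (total_cost / float(usable_mb)))   # ~$0.0436/MB, close to the $0.043 quoted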
Rgds, Tim.
With regard to performance, the lower overhead of IDE actually makes it faster in many cases, especially if, as here, you only use one device per bus. At the high end you have to go SCSI/fibre etc.; the biggest boards I've seen have had 20 PCI slots, split over a number of separate busses. That effectively limits you to 40 drives. Driving x PCI busses and y cards, each with z drives on, will also get pretty icky.
Chris
Chris Good - Muscat Ltd.
The Westbrook Centre, Milton Rd, Cambridge UK
Phone: 01223 715006   Mobile: 07801 788997
http://www.muscat.com
Almost. Each promise U/33 has two controllers which can support two drives each with the same constraints and speed as modern motherboard PIIX4 chipsets.
That's what I meant, even if it may not have been what I said :-) Each card has capacity for 4 drives, so put only 2 masters on each card to maximise performance.
linux RAID0 raidtools-0.50beta10-2.
With a recent MMX-enabled CPU, RAID5 really flies, so RAID5 would be the way to go, hence my minimum of 3 drives. OTOH Linux NFS performance is lousy in the extreme, although far better than it was [1].
For the sort of volumes you'd be aiming at with this sort of device, you could justify a custom motherboard. Commodity CPU/RAM etc., but 4 separate PCI busses, each with 4 Ultra/33 controllers attached. PCI controller chips are a couple of bucks apiece, as are the IDE controller chips. With the whole lot soldered onto the motherboard you could use the same board from the bottom end right up to the mid range. In volume the board would cost $150; add $100 for CPU and another $200 for RAM and you should be well away. Total chassis cost of $500, to which you just need to add between 3 and 16 drives. System cost of $1000-3000. Not at all bad really.
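Roughly, the bill of materials looks like this; the case/PSU slop and the ~$140 street price per IDE drive are my own guesses to bracket the range, the rest are the figures above:

    # Back-of-envelope bill of materials for the hypothetical IDE appliance.
    board, cpu, ram, case_psu = 150, 100, 200, 50   # case/PSU figure is a guess
    chassis = board + cpu + ram + case_psu          # ~$500, as above
    ide_drive = 140                                 # assumed street price per drive

    for n_drives in (3, 16):
        print("%2d drives: ~$%d" % (n_drives, chassis + n_drives * ide_drive))
    # 3 drives -> ~$920, 16 drives -> ~$2740: roughly the $1000-3000 quoted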
[1] Before anyone flames me: we are a committed Linux house. Unfortunately Linux lacks decent NFS caching from a client point of view (1.5Mb/s off an unloaded 540 across switched 100bT sucks bigtime, kernel v2.2.7), and from a server point of view both nfsd and knfsd suck as well. Oh, and the lack of decent performance measurement tools is a pain as well; nfsstat -m, iostat etc. aren't great, but they're sadly missed. Some nice VFS profiling tools would be very handy as well. [sigh] Give it a year, I suppose.
On Wed, 5 May 1999 tkaczma@gryf.net wrote:
On Tue, 4 May 1999, Jaye Mathisen wrote:
- The inability to belong to multiple domains. Not all my domains trust each other. Oops, now the filer is essentially unusable for some of them.
Can NT boxes do this?
I am not sure that the yardstick by which everything is measured should be whether NT can or cannot do something.
Who cares if NT can or cannot? The beauty of the filer is that it does one thing (serve files), fast. But if I'm going to drop $50k-$100k on a box, it would be nice if it could be flexible enough to do what I listed.
o The apparent lack of CIDR support in routing tables. I realize that routing requirements for a filer are probably not super significant, but something better than /24 would be nice.
Fixed in 5.3. (You can put "/<bits>" or "&<mask>" after the destination in the "route" command.) If it doesn't work, it's a bug; let us know about it.