Any thoughts on what the Intel/Digital settlement means to NetApp and its customers? It sure looks (to me anyway) as though the Alpha is dead.
In our lifetime, "Gregory M. Paris" <paris@bose.com> wrote:
| Any thoughts on what the Intel/Digital settlement means to NetApp and
| its customers? It sure looks (to me anyway) as though the Alpha is dead.
After seeing the announcement yesterday, I have been wondering this as well.
I don't think Intel will discontinue the Alpha line, as that would not be in its best interests, especially given the current microprocessor market. Who else is out there with a good chip?
Intel (and the other x86 clones)
Alpha (DEC)
PowerPC (well, who knows what will happen)
MIPS (again, who knows what will happen with this)
SPARC (the Ultra line is progressing nicely)
There are others out there, but they don't seem to be at the forefront of technology. There are UltraSPARC machines with the PCI bus now... perhaps they can run stuff other than Linux and Solaris :)
As for how this all affects future filers, who knows. I am sure the folks at NetApp are quite in tune with all of this and with how it will affect them (if at all).
I remember when NetApp was talking about the f540, their first venture outside the Intel x86 based filers. We were all concerned about the switch of technology. But look where it has taken us.
I seriously doubt this poses a problem to NetApp's future.
Just my $0.02.
Alexei
| Any thoughts on what the Intel/Digital settlement means to NetApp and
| its customers? It sure looks (to me anyway) as though the Alpha is dead.
...
| I remember when NetApp was talking about the f540, their first venture
| outside the Intel x86 based filers. We were all concerned about the
| switch of technology. But look where it has taken us.
One of the big advantages of the appliance approach is that the CPU makes no difference to the user, so we are free to use the best chip available at the time. (Pop quiz: What chip is in your Cisco router?)
We do our internal development on SPARC chips, because we have a filer simulator that runs as a UNIX process under SunOS, and of course we've shipped both x86 and Alpha products.
As a result, we've already debugged the portability issues associated with 32-bits vs 64-bits, CISC vs RISC, and big-endian vs little-endian. (Actually, CISC vs RISC doesn't really have any portability issues, but it makes the list longer. :-)
If I were developing a general purpose system, I would be very afraid of the Alpha, but for an appliance the choice was easy because the performance, especially for data-moving functions like file service, just can't be beat.
I don't mean to imply that there's no overhead associated with changing chips, because there are compiler issues and test issues and boot PROM issues and performance issues. But in the grand scheme of things, switching chips isn't that big of a deal.
Dave
| We do our internal development on SPARC chips, because we have a filer
| simulator that runs as a UNIX process under SunOS,
...and under Linux on x86, under BSD/OS on x86, and under Digital UNIX on Alpha. Running the simulator on little-endian and/or 64-bit platforms is useful, because otherwise you don't run into some byte-order or 64-bit-cleanliness issues until you try running the code on a real filer.
| I don't mean to imply that there's no overhead associated with changing
| chips, because there are compiler issues and test issues and boot PROM
| issues and performance issues.
And byte-order issues if you plan to allow your disks to move from one system to another. Fortunately for us, Alpha was little-endian, but moving to a processor run in big-endian mode would add some complications. (I don't know how easy it'd be to run an UltraSPARC-based system in little-endian mode, for example; the CPU is bi-endian, but there may be other things we'd have to worry about.)
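To make the byte-order hazard concrete: the same 32-bit value has a different byte layout on big-endian and little-endian machines, so raw on-disk structures written by one can be misread by the other. A tiny illustration (nothing filer-specific; the value is made up and Python is used only for the demonstration):

```python
import struct

value = 0x0000AB00  # a hypothetical 32-bit on-disk field

# Big-endian (e.g. SPARC) vs. little-endian (e.g. Alpha, x86) layouts:
big = struct.pack(">I", value)     # bytes 00 00 AB 00
little = struct.pack("<I", value)  # bytes 00 AB 00 00

# Reading little-endian bytes as if they were big-endian gives the
# wrong number entirely -- the kind of bug that only shows up when a
# disk (or a network buffer) crosses an endianness boundary.
misread = struct.unpack(">I", little)[0]
assert big != little
assert misread != value
```

This is why running the simulator on both byte orders catches bugs that a single-platform build never would.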
Thanks for the insight, Dave. You are correct that there is often little reason for concern of what's "under the hood", as long as it has the horsepower to handle the task at hand.
It's much like shopping for a toaster - you just get one that will make 2 slices, 4 slices, or whatever.
However, what if I'm Mr Holiday Inn and I want to put toasters in 300 rooms? I want them all the same, for reasonable repair and maintenance budgeting. Differences under the hood show up at the worst possible time: when things are broken. And then there's always that "toaster envy" if one is nicer than another.
Perhaps our shop has finally outgrown the stage where appliance-oriented filers are a practical solution. On the front end the price is very attractive, but it looks like I'm about to have to support 4 different architectures under the hood, with only 5 filers in house! The maintenance costs on these 5 come out to about $65,000 per year for 4-hour response on about 500G of available storage. That looks a bit like an A*sp*x price quote...
Dave Hitz wrote:
| One of the big advantages of the appliance approach is that the CPU
| makes no difference to the user, so we are free to use the best chip
| available at the time. (Pop quiz: What chip is in your Cisco router?)
|
| We do our internal development on SPARC chips, because we have a filer
| simulator that runs as a UNIX process under SunOS, and of course we've
| shipped both x86 and Alpha products.
|
| As a result, we've already debugged the portability issues associated
| with 32-bits vs 64-bits, CISC vs RISC, and big-endian vs little-endian.
| (Actually, CISC vs RISC doesn't really have any portability issues, but
| it makes the list longer. :-)
|
| If I were developing a general purpose system, I would be very afraid
| of the Alpha, but for an appliance the choice was easy because the
| performance, especially for data-moving functions like file service,
| just can't be beat.
|
| I don't mean to imply that there's no overhead associated with changing
| chips, because there are compiler issues and test issues and boot PROM
| issues and performance issues. But in the grand scheme of things,
| switching chips isn't that big of a deal.
|
| Dave
Fritz Feltner:
| Thanks for the insight, Dave.
...
| However, what if I'm Mr Holiday Inn and I want to put toasters in 300 rooms?
...
| Perhaps our shop has finally outgrown the stage where appliance
| oriented filers are a practical solution.
And thanks for your insight! The issues you raise are very much on my mind these days. You won't be surprised to hear that I feel differently about appliance oriented filers. :-)
Because there are such strong similarities between filers and routers, both in terms of the technology itself and, even more importantly, in terms of the underlying philosophy, I often look to Cisco for clues about what should be important to us at Network Appliance.
I believe that Cisco's focus evolved through three stages:
- Appliance (fast, simple, reliable)
- Multiprotocol (supporting protocols for multiple OSes)
- Network Infrastructure (managing lots of networks and lots of routers)
This analogy helped convince us to support the Windows/NT file service protocol (CIFS). It also makes me believe that our biggest challenge beyond multiprotocol filers is:
- Data Infrastructure (managing lots of data and lots of filers)
I actually believe that the appliance approach offers great advantages in this area. Nobody I've talked with thinks it would simplify their network infrastructure to replace routers with general purpose UNIX or NT servers. (Would Mr. Holiday Inn rather install 300 toasters, or 300 ovens?)
But we've obviously got work to do in this area. We've got many customers with dozens of filers, and quite a few of them have written scripts to assist with filer management. One has a script that connects to each filer every few minutes to gather performance statistics. Several have written scripts to install new software releases on lots of filers at once.
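One of those stats-gathering scripts might look something like the sketch below. Everything here is hypothetical and site-specific: the filer hostnames, the use of rsh as the transport, and the exact stats command your ONTAP release supports.

```python
#!/usr/bin/env python
# Sketch of a periodic filer stats collector, in the spirit of the
# customer scripts described above. Hostnames, the rsh transport, and
# the "sysstat 1" invocation are assumptions, not a NetApp-blessed API.
import subprocess
import time

FILERS = ["filer1", "filer2"]  # hypothetical hostnames


def format_sample(when, filer, line):
    """Produce one timestamped log line per filer per poll."""
    return "%s %s %s" % (when, filer, line.strip())


def poll_once(filers=FILERS):
    """Run one stats command on each filer; return timestamped lines."""
    samples = []
    for filer in filers:
        out = subprocess.run(["rsh", filer, "sysstat", "1"],
                             capture_output=True, text=True).stdout
        samples.append(format_sample(time.strftime("%Y-%m-%d %H:%M:%S"),
                                     filer, out))
    return samples

# Run from cron every few minutes, appending to a log file, e.g.:
#   */5 * * * * poll_filers.py >> /var/log/filer-stats.log
```

The cron-driven approach matches the "connects to each filer every few minutes" behavior described above without keeping a daemon running.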
Improved filer monitoring and improved software upgrades are obvious areas for work, but I'm curious to hear other people's thoughts.
I'm especially interested in hearing about scripts people have written. When people make the effort to develop a tool themselves, it is obviously important to them, and it probably indicates an area that we should address.
What scripts or tools have other people developed?
Dave
| What scripts or tools have other people developed?
I have some SNMP tools that send out warning emails/pages when a quota tree hits certain watermarks in terms of inodes and disk usage.
I also have some web-based tools that a customer can use to check the remaining disk space on his quota tree and buy more space if he's running low. It gets the stats via SNMP and will automatically send a quota resize if the customer increases his tree.
Both of these are simple things that I really don't think NetApp can provide. There are too many site-specific things in something like this.
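The core of a watermark check like that is just a threshold comparison. A rough sketch follows; the watermark fractions and names are made up, and in the real tool the usage and limit numbers come from the filer's quota MIB via SNMP rather than being passed in directly.

```python
# Hypothetical watermark levels: warn at 80%, 90%, and 95% of a limit.
WATERMARKS = (0.80, 0.90, 0.95)


def crossed(used, limit, watermarks=WATERMARKS):
    """Return the highest watermark fraction that `used` has crossed,
    or None if no watermark is crossed (or the limit is unset)."""
    if limit <= 0:
        return None
    frac = used / float(limit)
    hit = [w for w in watermarks if frac >= w]
    return max(hit) if hit else None

# The same check works for both quota dimensions mentioned above:
#   crossed(disk_used_kb, disk_limit_kb)
#   crossed(inodes_used, inode_limit)
# A cron job would email/page whenever the returned level rises.
```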
I'd rather see you guys implement tape storage management so I can use an intelligent hierarchical storage management solution. For example, imagine setting the snapshot percentage to 500% and keeping snapshots around for a month? The 500% would obviously all be on tape and intelligently handled by ADSM support right in the NetApp OS kernel.
Or, use ADSM directly to extend the capacity of a NetApp box to 10's of Terabytes cheaply.
Oh, and while you are at it, come up with a workaround for that Solaris 2.6 multiple IP bug that is biting me right now. ;)
-Rasmus
On Wed 5 Nov, 1997, rasmus@lerdorf.on.ca (Rasmus Lerdorf) wrote:
| Oh, and while you are at it, come up with a workaround for that Solaris
| 2.6 multiple IP bug that is biting me right now. ;)
My suggestion to counter this (untried, unfortunately, as my 2.6 boxes are off the network right now):
ifconfig lo0:1 <alias address>        [yes, the loopback]
arp -s <alias hostname> <your le0/whatever ethernet address>
Then you'll answer ARP requests for your alias address (if that's important to you), but outgoing connections from the interface will always pick the native address.
-- jrg.
| ifconfig lo0:1 <alias address>        [yes, the loopback]
| arp -s <alias hostname> <your le0/whatever ethernet address>
Aha! Thanks James. This does indeed do the trick.
Can anybody think of any problems aliasing the loopback interface might incur?
-Rasmus
| What scripts or tools have other people developed?
|
| I have some SNMP tools that send out warning emails/pages when a quota
| tree hits certain watermarks in terms of inodes and disk usage.
Heh... I just worked on something similar for users here. I took/borrowed the SNMP stuff that MRTG uses and grab quota information with that.
This is an engineering department, so it's catered more toward capacity planning and the like.
Basic monitoring tools and "planning" tools are what we've been writing for ourselves. I'd be interested in a software repository where we could dump off useful stuff that others might find interesting (Rasmus, for example, I'd be interested in seeing your stuff).
| Oh, and while you are at it, come up with a workaround for that Solaris
| 2.6 multiple IP bug that is biting me right now. ;)
There is an ndd parameter you can set so that 2.6 reverts to the 2.5.1 behaviour.
- mz
-- matthew zeier -- mrz@3com.com -- 3Com EWD Engineering -- 408/764-8420 ...................................................................... "Y el mundo se mueve, mas rapido y mejor." - Fey
| There is an ndd parameter you can set so that 2.6 reverts to the 2.5.1
| behaviour.
That didn't do the trick for me. I had to alias the loopback interface instead of the hme/le interface. That was the only way I could get a Solaris 2.6 box with multiple IPs to work with a NetApp mount.
-Rasmus