I'm interested to hear what others think the shortcomings of the current crop of NetApps are. From my experience:
* Single logical drive
* Weedy SCSI performance
* Lack of other RAID implementations
* Not quite enough redundancy for truly mission critical applications
Don't get me wrong, I love 'em :-)
-marc
---
Marc Nicholas - Hippocampus OSD, Inc. - Eastern Office
416 979 9000 - fax: 416 979 8223 - http://www.hippocampus.net
125 John St. - Suite #100 - Toronto - Ontario - M5V 2E2 - CANADA
"Inter/Intra/Extra[net] consulting, corporate access, hardware and software sales"
On Wed, 14 Jan 1998 12:40:47 EST, Marc Nicholas wrote:
I'm interested to hear what others think the shortcomings of the current crop of NetApps are. From my experience:
[snip]
- Not quite enough redundancy for truly mission critical applications
What other types of redundancy would you like to see?
Brett
---
Brett Rabe                         Email : brett@uswest.net
Systems Administrator - U S West   Phone : 612.664.3078
600 Stinson Blvd.                  Pager : 612.613.2549
Minneapolis, MN USA 55413          Fax   : 612.664.4770

If you aren't the lead dog, the view is always the same.
On Wed, 14 Jan 1998, Brett Rabe wrote:
On Wed, 14 Jan 1998 12:40:47 EST, Marc Nicholas wrote:
I'm interested to hear what others think the shortcomings of the current crop of NetApps are. From my experience:
[snip]
- Not quite enough redundancy for truly mission critical applications
What other types of redundancy would you like to see?
A better redundant power solution...unless that's changed recently.
Multi-chassis mirroring would also be nice.
Yes, I know I'm asking for the world ;-) But if you don't ask, you don't get...
-marc
sendmail told me that Marc Nicholas said:
On Wed, 14 Jan 1998, Brett Rabe wrote:
What other types of redundancy would you like to see?
A better redundant power solution...unless that's changed recently.

Multi-chassis mirroring would also be nice.

Yes, I know I'm asking for the world ;-) But if you don't ask, you don't get...
As I understand it, you're not asking for the world. Multi-chassis failover is rumored to be on the way...
I'm interested to hear what others think the shortcomings of the current crop of NetApps are. From my experience:
My biggest beef is that the upgradability of these things is low. It seems that when you buy a unit you are buying a fixed capacity, and improvements in disk technology don't raise it. For example, our unit uses 4 gig drives. Its max capacity is fourteen 4 gig drives. We'd love to put 9 or 18 gig drives in it, but the software doesn't support it. I realize 9 or 18 gig drives would raise rebuild time, but that risk should be up to us.
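[To put rough numbers on that rebuild-time tradeoff, here is a back-of-the-envelope sketch. The sustained rebuild rate is an assumed illustrative figure, not a measured NetApp number:

    # Back-of-the-envelope RAID reconstruction time estimate.
    # ASSUMPTION: ~4 MB/s sustained rebuild rate, plausible for
    # late-90s SCSI drives under production load; real rates vary.
    REBUILD_MB_PER_SEC = 4.0

    def rebuild_hours(drive_gb):
        """Hours to rewrite one failed drive's worth of data."""
        return drive_gb * 1024 / REBUILD_MB_PER_SEC / 3600

    for size_gb in (4, 9, 18):
        print("%2d GB drive: ~%.1f hours of rebuild"
              % (size_gb, rebuild_hours(size_gb)))

The window during which a second failure would lose the RAID group grows linearly with drive size, which is presumably why the limit exists; the point above is that the customer should get to weigh that risk.]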
I also consider the high price per meg to be a shortcoming and because of it we are researching and developing alternatives.
Chris
On Wed, 14 Jan 1998, Chris Caputo wrote:
My biggest beef is that the upgradability of these things is low. It seems that when you buy a unit you are buying a fixed capacity, and improvements in disk technology don't raise it. For example, our unit uses 4 gig drives. Its max capacity is fourteen 4 gig drives. We'd love to put 9 or 18 gig drives in it, but the software doesn't support it. I realize 9 or 18 gig drives would raise rebuild time, but that risk should be up to us.
before i deliver myself of my opinions on upgradability, would anyone from netapp care to comment on whether that's a hardware or "supported configuration" limitation?
Tom Yates - Unix Chap - The Mathworks, Inc. - +1 (508) 647 7561
MAG#65061  DoD#0135  AMA#461546
1024/CFDFDE39  0C E7 46 60 BB 96 87 05  04 BD FB F8 BB 20 C1 8C
Tom Yates says:
before i deliver myself of my opinions on upgradability, would anyone from netapp care to comment on whether that's a hardware or "supported configuration" limitation?
Ah -- a request for input! I assure you that all of us at NetApp watch discussions like this with great interest. :-)
I'm going to answer the question in general, since Chris didn't say what model he's got, and because I can't remember the details for all our models anyway.
First some "philosophy", and then the hardware issues.
Philosophy:
When we very first started, the "appliance" analogy drove us to strongly restrict configurations. Sun old-timers have a saying: "We sell rope." The implication being: "Don't come whining to us if you accidentally hang yourself."
We wanted to avoid accidental hangings. We argued that NetApp should simply disallow configurations that might burn people -- say, because a RAID reconstruct would take forever and the probability of a second drive failure would get too high. I suspect that this position is very reasonable for some markets -- maybe branch offices or low-end environments with little technical sophistication. But in sophisticated, technical, high-end environments things are different.
We have received the message loud-and-clear from our customers that they want the hard-coded capacity limits removed. That will definitely change, along with some new features that allow higher capacities. (Please bear with me as I delicately avoid any pre-announcements here.)
Hardware Issues:
There are a variety of nasty hardware issues that one can encounter in large configurations, and since we've avoided -- up until now -- allowing "dangerous" configs, we haven't done as much testing in those areas as we might.
One issue is that EISA and PCI can both run into trouble with too many different cards operating at high bandwidths. A configuration that causes no problem at low load might get weird at very high loads. This is something that we'll need to invest more in as we remove hard-coded limits. The cabling length limits on SCSI are also annoying and can lead to trouble.
The way that some companies handle this is to specify a maximum "supported" configuration that they have actually tested, but let customers know that there's nothing to stop them from doing their own experiments beyond this point. Other companies announce maximum "supported" configurations that are larger than anything they've actually tested, so the first order becomes an instant beta site. I prefer the former approach to the latter.
I'd rather not get real specific right at the moment, but I hope that I've said enough to give you a sense of our thinking moving forward.
Keep the feedback coming. We really do appreciate the input, even if it sometimes seems that it takes a long time to respond.
Dave
On Wed, 14 Jan 1998, Chris Caputo wrote:
|>I'm interested to hear what others think the shortcomings of the current
|>crop of NetApps are. From my experience:
|
|My biggest beef is that the upgradability of these things is low. It
|seems that when you buy a unit you are buying a fixed capacity, and
|improvements in disk technology don't raise it. For example, our unit
|uses 4 gig drives. Its max capacity is fourteen 4 gig drives. We'd love
|to put 9 or 18 gig drives in it, but the software doesn't support it. I
|realize 9 or 18 gig drives would raise rebuild time, but that risk
|should be up to us.
Amen to that.
A paste from a "sysconfig -r" on an F630:
1: SEAGATE ST15150W 9107 Size=3.9GB (8388315 blocks)
The ST15150W is an *OLD* drive that Seagate doesn't even list as a current product.....
Some newer drives please? 18 gig drives please? It *can't* be that hard to allow 18 gig drives... if it is, your programmers didn't do their job correctly.
|I also consider the high price per meg to be a shortcoming and because of |it we are researching and developing alternatives.
Just negotiate hard with NetApp. I still feel screwed every time I want an upgrade from NetApp. The above drive model is $355 through distribution... NetApp quoted me $1700/drive.
Their NVRAM pricing is LUDICROUS considering it's third-party NVRAM bought from Dallas Semiconductor. Ditto for DRAM.
Netapp:
Make your money on your wonderful software, and stop trying to price gouge your customers on *off the shelf* items like Seagate hard drives, Samsung DRAM, and Dallas Semiconductor NVRAM.
We really aren't that stupid I promise.
Jonah
Jonah Barron Yokubaitis  |  Austin|San Antonio|Houston
President                |  Dallas|Fort Worth|Boerne
Texas.Net                |  Georgetown|Dripping Springs
http://www.texas.net     |  Making 56k affordable
Some newer drives please? 18 gig drives please? It *can't* be that hard to allow 18 gig drives... if it is, your programmers didn't do their job correctly.
It's not a matter of programming -- going from 4GB to 9GB was painless in that regard, and 18GB drives would be no different. The problem is in qualifying drives to our standards of reliability (and possibly getting them in sufficient quantity).
Before I joined NetApp, I was a customer. We wanted 9GB drives because we had some very large physics datasets and didn't want an enormous number of drives. NetApp would only sell us 4GB drives, saying that they had tried the then-current 9GB drives (5.25" full-height) and found them woefully lacking in reliability.
NetApp lost that business. We bought IBM RS/6000s and third-party boxes loaded with 9GB drives. We weren't that worried about data loss because it was all either staging space for tape drives, or work space for analysis work that could be restarted without undue pain.
We quickly found that we were spending a lot of our time replacing 9GB drives, and restarting physics apps whose output was lost. Next time we bought additional storage, we bought filers -- still with 4GB drives, but they worked and we were a lot happier.
--
Karl Swartz - Technical Marketing Engineer
Network Appliance
kls@netapp.com (W)   kls@chicago.com (H)
On Wed, Jan 14, 1998 at 12:40:47PM -0500, Marc Nicholas had written:
I'm interested to hear what others think the shortcomings of the current crop of NetApps are. From my experience:
- Single logical drive
Hassle but can be overcome with the quota tree implementation.
You can even dump separate trees.
- Weedy SCSI performance
How so?
- Lack of other RAID implementations
Uhm, what else do you want to do?
- Not quite enough redundancy for truly mission critical applications
Hmm... You can have a full box on the shelf easily enough, and multiple hot spares, etc.
Not like they fail often anyway.
On Wed, 14 Jan 1998, Mike Horwath wrote:
- Single logical drive
Hassle but can be overcome with the quota tree implementation.
You can even dump separate trees.
True. But distinct separate RAID volumes with their own hot spares would be nice.
- Weedy SCSI performance
How so?
We're living in the age of Wide Ultra SCSI. Maybe a NetApp engineer can comment on whether this would, or would not, make much of a real-world performance difference. I was under the impression that you'd get more TPS...
- Lack of other RAID implementations
Uhm, what else do you want to do?
Straight mirrors... straight concatenation and striping...
As I said earlier: I still love NetApps and I'm asking the world ;-)
-marc
On Wed, 14 Jan 1998 15:16:36 EST, Marc Nicholas wrote:
On Wed, 14 Jan 1998, Mike Horwath wrote:
- Single logical drive
Hassle but can be overcome with the quota tree implementation.
You can even dump separate trees.
True. But distinct separate RAID volumes with their own hot spares would be nice.
If only to reduce the reconstruction time for lost volumes....
- Weedy SCSI performance
How so?
We're living in the age of Wide Ultra SCSI. Maybe a NetApp engineer can comment on whether this would, or would not, make much of a real-world performance difference. I was under the impression that you'd get more TPS...
- Lack of other RAID implementations
Uhm, what else do you want to do?
Straight mirrors... straight concatenation and striping...
So you'd want, essentially, two separate RAID implementations? You'd want the existing RAID 4 protection as well as a pseudo-RAID 1 mirroring implementation to another separate NetApp chassis?
Huh. Two thoughts. One -- overkill. Two -- you've got deeper pockets than I do. :-)
As I said earlier: I still love NetApps and I'm asking the world ;-)
Nothing wrong with that. Yer a consumer, they're a provider.
Brett
On Wed, 14 Jan 1998, Brett Rabe wrote:
True. But distinct separate RAID volumes with their own hot spares would be nice.
If only to reduce the reconstruction time for lost volumes....
Yes, that's especially of interest to those who have large 630 applications...
Straight mirrors... straight concatenation and striping...
So you'd want, essentially, two separate RAID implementations?
Yup.
You'd want the existing RAID 4 protection as well as a pseudo-RAID 1 mirroring implementation to another separate NetApp chassis?
That would be nice.
Huh. Two thoughts. One -- overkill. Two -- you've got deeper pockets than I do. :-)
Why overkill? Seriously, some data is that important and some downtime scenarios are that costly. Believe me.
We're working on a project right now, in fact, where losing data or data access for *minutes* would be a disaster.
As I said earlier: I still love NetApps and I'm asking the world ;-)
Nothing wrong with that. Yer a consumer, they're a provider.
:-)
-marc
On Wed, 14 Jan 1998, Marc Nicholas wrote:
True. But distinct separate RAID volumes with their own hot spares would be nice.
You say you ask for the world, but rumour has it you may get what you ask for in 1998. Some of the rumblings I hear coming out for the high-end Netapps sound *very* sexy, and should address almost all high-availability and failover circumstances.
We're living in the age of Wide Ultra SCSI. Maybe a NetApp engineer can comment on whether this would, or would not, make much of a real world performance. I was under the impression that you'd get more TPS...
Wouldn't much of that advantage be masked by the caches?
Straight mirrors... straight concatenation and striping...
Mirroring within a RAID set and cross-chassis mirroring have been two items near the top of our wish list, having migrated from Ultra servers with SPARCstorage Arrays. On paper, two Ultras cross-connected via Fibre Channel to two SSAs configured to mirror disk sets across two chassis (chasses?) offer much better survivability than a single Netapp. Now if only Netapp could take that idea and implement it in a bulletproof, no-brainer fashion...
+----- On Wed, 14 Jan 1998 15:16:36 EST, Marc Nicholas writes:
| On Wed, 14 Jan 1998, Mike Horwath wrote:
|
| > > * Single logical drive
| >
| > Hassle but can be overcome with the quota tree implementation.
| >
| > You can even dump separate trees.
|
| True. But distinct separate RAID volumes with their own hot spares would
| be nice.
Take a look at the latest SPECsfs97 pages for the 630, under special config notes "two RAID groups (13 disks each)". OS 5.0 beta has what you are asking for.
/Michael
| > > * Single logical drive
|
| True. But distinct separate RAID volumes with their own hot spares would
| be nice.

Take a look at the latest SPECsfs97 pages for the 630, under special config
notes "two RAID groups (13 disks each)". OS 5.0 beta has what you are
asking for.
Hey! That's cheating. You are not allowed to have noticed that.
Dave
| True. But distinct separate RAID volumes with their own hot spares would
| be nice.
Take a look at the latest SPECsfs97 pages for the 630, under special config notes "two RAID groups (13 disks each)". OS 5.0 beta has what you are asking for.
Well, again, I don't want to pre-announce anything, but note that the special config notes say nothing about hot spares; there could, for example, be one single pool of hot spares.
Were we to provide distinct separate RAID volumes, made up of one or more RAID groups, would people want:
1) a single pool of hot spares, so that if any disk dies and a hot spare is available, reconstruction can start immediately?
2) a pool of hot spares per RAID group?
3) a pool of hot spares per volume?
4) some other assignment of hot spares?
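[For concreteness, here is a hypothetical sketch of how the first three policies differ in where a replacement disk is drawn from. The names and structure are illustrative only, not any actual Data ONTAP interface:

    # Hypothetical model of the hot-spare assignment policies above.
    # None of this reflects a real Data ONTAP interface.

    def find_spare(policy, spares, raid_group, volume):
        """Pick a spare for a failed disk, or return None.

        spares maps a pool key ("global", a group name, or a volume
        name) to a list of available spare disks.
        """
        if policy == "global":        # option 1: one shared pool
            key = "global"
        elif policy == "per_group":   # option 2: a pool per RAID group
            key = raid_group
        elif policy == "per_volume":  # option 3: a pool per volume
            key = volume
        else:
            raise ValueError("unknown policy: %s" % policy)
        pool = spares.get(key, [])
        return pool.pop() if pool else None

    # A global pool lets any failure start reconstruction while any
    # spare remains; per-group/per-volume pools guarantee a spare for
    # that group/volume but can strand idle spares elsewhere.
    spares = {"global": ["disk14", "disk27"]}
    print(find_spare("global", spares, raid_group="rg0", volume="vol0"))]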
+--- In our lifetime, guy@netapp.com (Guy Harris) wrote:
|
| Were we to provide distinct separate RAID volumes, made up of one or
| more RAID groups, would people want:
|
| 1) a single pool of hot spares, so that if any disk dies and a hot
|    spare is available, reconstruction can start immediately?
I prefer this. Given the odds of disks failing, I would rather have a single pool with 2 or 3 drives and simply swap failed disks when I get a chance (since I would still have spare hot-spares).
| 2) a pool of hot spares per RAID group?
| 3) a pool of hot spares per volume?
Too much overhead (in my mind). If you want to cover your butt, you would probably want 2 spares per group/volume at least. That can work out to be a whole lot of disks.
Different people have different needs.
How hard would it be to do all of the above?
Under Solaris Disk Suite, you can have multiple pools of hot spares. You simply associate a volume with the pool you want it to use.
I think this would at least cover most bases :)
Alexei
| Were we to provide distinct separate RAID volumes, made up of one or
| more RAID groups, would people want:
...
How hard would it be to do all of the above?
How hard to implement? Not very.
However, offering multiple choices carries other costs. How much complexity would we add to the user interface? How many new pages to the documentation set? How many new fields for the FilerView GUI? How many phone calls from customers whose filesystems weren't protected because they chose the wrong option?
So our philosophy is to stick with the simplest and most general solution until legitimate customer requirements drive us to more complexity. "When in doubt, leave it out." And even in the case of legitimate requirements, we must balance the benefit to the customers who need it, against the complexity to those who don't. If few people have the need, it may not be worth the complexity for the rest.
This is, of course, a very difficult line to walk. Many of our most spirited internal debates revolve around exactly these issues. And since it's SO hard to remove complexity, once added, I fight for the side of simplicity whenever we're not sure.
Dave
On Thu, 15 Jan 1998, Guy Harris wrote:
Were we to provide distinct separate RAID volumes, made up of one or more RAID groups, would people want:
1) a single pool of hot spares, so that if any disk dies and a hot spare is available, reconstruction can start immediately?
2) a pool of hot spares per RAID group?
3) a pool of hot spares per volume?
4) some other assignment of hot spares?
for preference, (1) - easiest to configure, most appliance-like! if i were seeking anything on top of that, it would be the ability to reserve a hot-spare to a particular group and/or volume, so that if i had one especially-important group or volume, i could be sure it would always have a hot spare.
Tom Yates - Unix Chap - The Mathworks, Inc. - +1 (508) 647 7561
MAG#65061  DoD#0135  AMA#461546
1024/CFDFDE39  0C E7 46 60 BB 96 87 05  04 BD FB F8 BB 20 C1 8C
- Weedy SCSI performance
How so?
We're living in the age of Wide Ultra SCSI. Maybe a NetApp engineer can comment on whether this would, or would not, make much of a real-world performance difference. I was under the impression that you'd get more TPS...
Faster disks only improve performance if disks are the bottleneck. Otherwise, they add expense and potentially reduce reliability (due to greater heat as well as being closer to the bleeding edge). We design our filers so that everything runs out of headroom at about the same time. That means you're not paying for hardware which gives you no benefit. Put another way, it means upgrading just one component probably won't accomplish much.
One case I'm familiar with is the F330. Customers asked why we were selling them with Hawks when other vendors were shipping Barracudas. We tried some Barracudas in one. As expected, the performance change was a good approximation of zero, so putting in Barracudas would be pointless. Customers also asked why we only put in a 90 MHz Pentium. We tried a clock-doubled one and it, too, had virtually no effect. It turns out that for most uses, the PCI bus seems to be the bottleneck in the F330. This is mostly due to the Neptune (?) chipset used on the motherboard -- it was decent for our goals at the time, but newer designs do much better.
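[A toy model of that balancing argument; the component numbers below are invented for illustration and are not F330 measurements:

    # Toy bottleneck model: sustained throughput is capped by the
    # slowest stage in the pipeline. All numbers are invented.
    components = {
        "disks":   18.0,  # MB/s, aggregate
        "cpu":     25.0,
        "pci_bus": 15.0,  # the limiting stage in this example
        "network": 22.0,
    }

    bottleneck = min(components, key=components.get)
    print("throughput ~%.1f MB/s, limited by %s"
          % (components[bottleneck], bottleneck))

    # Raising "disks" (Hawks -> Barracudas) or "cpu" (a faster
    # Pentium) leaves the minimum unchanged while the PCI bus is
    # the limiting stage -- which is why those swaps showed
    # essentially zero gain on the F330.]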
- Lack of other RAID implementations
Uhm, what else do you want to do?
Straight mirrors... straight concatenation and striping...
You can in fact do both on a filer, albeit with a few restrictions, and neither is something we support.
For straight mirroring, just use one data drive per RAID-4 group. The parity drive is in effect a mirror, and our xor code is clever enough to recognize this degenerate case. That does of course put a severe limit on how big your file system can be, though as someone else noted, careful examination of our SFS 2 results may be enlightening in this regard.
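[To see why the single-data-drive case degenerates to a mirror: RAID-4 parity is the XOR of all data blocks at the same stripe position, and the XOR of a single block is the block itself. A quick sketch -- my own illustration, not NetApp's code:

    # RAID-4 parity is the byte-wise XOR of the data blocks at the
    # same stripe position across the data drives.
    def parity(*data_blocks):
        out = bytearray(len(data_blocks[0]))
        for block in data_blocks:
            for i, b in enumerate(block):
                out[i] ^= b
        return bytes(out)

    block = b"hello, filer"
    # One data drive: the parity block equals the data block, so the
    # parity drive is literally a mirror of the data drive.
    assert parity(block) == block
    # Two data drives: parity is their XOR, no longer a plain copy.
    assert parity(block, block) == bytes(len(block))]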
Straight concatenation and striping is a bit hokier. For this, build up your system and then yank the parity drive. The system will run in degraded mode, but since it's the parity drive that's missing, it will simply note when writing that it doesn't need to bother doing all the xor calculations. Thus, it should be somewhat faster for writes, and no worse for reads. *I* wouldn't want to run that way, but you can if you like.
(At one point, we considered running a benchmark on a machine with just a parity disk -- you know, we can beat vendor X even with one hand tied behind our back. Never got around to doing it, though.)
--
Karl Swartz - Technical Marketing Engineer
Network Appliance
kls@netapp.com (W)   kls@chicago.com (H)
+--- In our lifetime, Marc Nicholas <marc@hippocampus.net> wrote:
|
| I'm interested to hear what others think the shortcomings of the current
| crop of NetApps are. From my experience:
|
| * Single logical drive
Hmm. We see this as a huge benefit. I like not having to juggle file system sizes.
| * Weedy SCSI performance
Not sure I understand what you mean. Compared to what, and in what way?
| * Lack of other RAID implementations
You mean like RAID 0, 0+1, etc.? I think you need to read some of the white papers :)
If you are trying to make a comparison to something like ODS or an Auspex (striping, concatenated stripes, RAID 5 (not on an Auspex you don't), mirroring), then that is not a good comparison.
The "limitations" of the NetApp's are (IMHO) what make it so good. Not having to tweak and monitor 500 different parameters is most nice.
| * Not quite enough redundancy for truly mission critical applications
Such as HA, fail-over, etc? This is a valid concern but one that begs to be analyzed.
At first HA sounds great: the ability to have one machine completely die and have another pick up where the first left off, without any perceived interruption of service. The problem shows itself when the cost associated with this level of availability is determined. If you have 99.5% availability, what is the $$$ associated with the additional 0.5%? Then the bean counters take over... :)
I am looking forward to seeing the NetApp product continue to grow and address these concerns.
| Don't get me wrong, I love 'em :-)
As do we. I cannot wait to replace my f540's with f630's.
Alexei
On Wed, 14 Jan 1998, Alexei Rodriguez wrote:
Hmm. We see this as a huge benefit. I like not having to juggle file system sizes.
Multiple RAID-4 sets mean multiple parity drives (and thus better survivability for multi-disk failures on one Netapp), faster reconstruction times, fault isolation (shelf failures, shelf module failure, etc. affect only their RAID set), and the ability to rip out a bunch of disks without affecting other data.
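[A rough way to quantify that survivability point -- purely illustrative combinatorics, not NetApp data:

    # Chance that two simultaneous disk failures land in the same
    # RAID-4 group (fatal) vs. different groups (survivable).
    from math import comb

    disks, groups = 26, 2
    per_group = disks // groups  # 13, as in the SPECsfs97 config

    fatal = groups * comb(per_group, 2) / comb(disks, 2)
    print("26 disks, 1 group : every double failure is fatal")
    print("26 disks, 2 groups: %.0f%% of double failures are fatal"
          % (100 * fatal))
    # ~48%: roughly half of double failures become survivable, on
    # top of faster rebuilds because each group is half the size.]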
You mean like 0,0+1,etc? I think you need to read some of the white papers :)
Mirroring would be nice. A double-drive failure is my biggest fear on a large RAID. Actually, that takes second place behind NVRAM failure. =8-{ ;-)
The "limitations" of the NetApp's are (IMHO) what make it so good. Not having to tweak and monitor 500 different parameters is most nice.
You might have a couple more parameters to support, but I think the added flexibility and redundancy (for those who need it and can afford it) are worth it.
At first HA sounds great. The ability to have 1 machine completely die and have another pick up where the first left off (without any percieved interuption of service). The problem shows itself when the cost associated with this level of availability is determined. If you have 99.5% availability, what is the $$$ associated with the additional 0.5%? Then the bean counters take over... :)
99.5% uptime means a 3.6-hour outage every month. That's not so hot. ;-) There are some applications where absolute 100% uptime is the goal. A Netapp still has a number of single points of failure that can cause a service outage: read cache RAM, CPU, NVRAM, shelf, motherboard, network interface, disk controller, etc. Granted, most of these faults cause only very short outages, but some companies want protection against every conceivable failure (or as close to it as technically feasible).
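[The arithmetic behind that 3.6-hour figure, generalized to other availability targets -- a quick sketch:

    # Downtime implied by an availability percentage, per 30-day month.
    HOURS_PER_MONTH = 30 * 24  # 720

    for pct in (99.5, 99.9, 99.99, 100.0):
        down = (100.0 - pct) / 100.0 * HOURS_PER_MONTH
        print("%6.2f%% uptime -> %5.2f hours down per month" % (pct, down))

    # 99.5% -> 3.60 h, the figure above. Each extra "nine" cuts the
    # allowance tenfold, which is where the cost curve explodes.]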
+--- In our lifetime, Brian Tao <taob@nbc.netcom.ca> wrote:
|
| Multiple RAID-4 sets mean multiple parity drives (and thus better
| survivability for multi-disk failures on one Netapp), faster
| reconstruction times, fault isolation (shelf failures, shelf module
| failure, etc. affect only their RAID set), and the ability to rip out
| a bunch of disks without affecting other data.
I understand the benefits. I just don't care for having five 4 GB slices which you have to manage (a la Auspex).
Yes, being able to run snapshots for selected volume sets would be cool. No need to snapshot my logs :)
| Mirroring would be nice. A double-drive failure is my biggest
| fear on a large RAID. Actually, that takes second place behind NVRAM
| failure. =8-{ ;-)
Not having experienced either of these in the 2+ years I have used NetApps, I have lost that fear. Perhaps it is naive.
| 99.5% uptime means a 3.6-hour outage every month. That's not so
| hot. ;-) There are some applications where absolute 100% uptime is
| the goal. A Netapp still has a number of single points of failure
| that can cause a service outage: read cache RAM, CPU, NVRAM, shelf,
| motherboard, network interface, disk controller, etc. Granted, most
| of these faults cause only very short outages, but some companies want
| protection against every conceivable failure (or as close to it as
| technically feasible).
Absolutely. Financial institutions are among the first that come to mind as needing 100% uptime and fault-tolerance.
(knock on wood) my outages have been quite limited. And they usually last no more than 10 minutes (usually due to silly mistakes).
Alexei
On Thu, 15 Jan 1998, Alexei Rodriguez wrote:
I understand the benefits. I just don't care for having five 4 GB slices which you have to manage (a la Auspex).
Ick... I'm thinking more along the lines of just two or three sets on an F630, each of which could span a few disk shelves. It certainly would be a nice option for those who could benefit from it.
Not having experienced either of these in the 2+ years I have used NetApps, I have lost that fear. Perhaps it is naive.
No double-drive failures here yet, but I've had the NVRAM board die in one unit. Luckily it was during our in-house burn-in tests, so no production data was lost. Gave the floppy drive and the "wack" command a good workout though. ;-)
(knock on wood) my outages have been quite limited. And they usually last no more than 10 minutes (usually due to silly mistakes).
The worst we've had was a power supply failure on a shelf. Luckily it was only on the news spool, and no data was actually lost. Replacing the module only took a few minutes and the Netapp came right back up afterwards.
+--- In our lifetime, Brian Tao <taob@nbc.netcom.ca> wrote:
|
| Ick... I'm thinking more along the lines of just two or three sets
| on an F630, each of which could span a few disk shelves. It certainly
| would be a nice option for those who could benefit from it.
Indeed. My question for NetApp now is, are they looking at the 47GB drives? This is more out of curiosity than a demand for them. It seems that drives this large are too big a liability. Imagine how long a rebuild would take!
| > (knock on wood) my outages have been quite limited. And they usually
| > last no more than 10 minutes (usually due to silly mistakes).
|
| The worst we've had was a power supply failure on a shelf.
| Luckily it was only on the news spool, and no data was actually lost.
| Replacing the module only took a few minutes and the Netapp came right
| back up afterwards.
We went with the additional power brick (in the old style shelves). While it is good that we can have dual PS, giving up a slot in each shelf has been quite painful.
Having seen what Sun's "vision" of the storage architecture will be, I am curious as to what NetApp's reaction will be to some of these upcoming threats. The A5000 is not a bad unit. Sun is making it clear that they will not shoot themselves in the foot with it; you cannot get the Fibre Channel adapter for anything smaller than an E3000 (these E450's are amazing boxes; a bit too heavy).
Alexei
On Thu, 15 Jan 1998, Alexei Rodriguez wrote:
Indeed. My question for NetApp now is, are they looking at the 47GB drives? This is more out of curiosity than a demand for them. It seems that drives this large are too big a liability. Imagine how long a rebuild would take!
I don't know if I'd trust having that many drives of that size crammed into a DEC or Eurologics enclosure. Anyone know what the heat output is on one of those things? Coming from Seagate, I shudder to think how much air flow will be needed to dissipate the heat from constantly running drives...
We went with the additional power brick (in the old style shelves). While it is good that we can have dual PS, giving up a slot in each shelf has been quite painful.
Unfortunately, I can't do that with our news spools (filled right up with drives, as you can imagine). We're planning on doubling up the power supplies on our mail spool shelves, where we have lots of room for expansion.
Having seen what Sun's "vision" of the storage architecture will be, I am curious as to what NetApp's reaction will be to some of these upcoming threats. The A5000 is not a bad unit. Sun is making it clear that they will not shoot themselves in the foot with it; you cannot get the Fibre Channel adapter for anything smaller than an E3000 (these E450's are amazing boxes; a bit too heavy).
I can't imagine Netapp *not* offering a fiber-channel solution this year. More devices, more speed, more fault tolerance, longer loops, etc. BTW, Invincible's Lifeline NFS servers have some very nice HA features I'd like to see in a Netapp.
On Thu 15 Jan, 1998, Alexei Rodriguez <alexei@cimedia.com> wrote:
I understand the benefits. I just don't care for having five 4 GB slices which you have to manage (a la Auspex).
I thought the idea of multiple RAID sets is that you layer one filesystem atop them all, if you want to. (Of course, others want to have multiple filesystems.)
(Which is something you can do, and people do, with multiple RAID controllers and Unix systems, etc.)
-- jrg.