On 08/30/99 23:39:25 you wrote:
On Mon, 30 Aug 1999 sirbruce@ix.netcom.com wrote:
Netapp does additional testing and catches some memory failures before selling the memory to you, the consumer. Statistically, the memory you get from Netapp has to have a lower failure rate.
As far as I remember, memory failure follows a Poisson distribution, implying that a failure is just as likely to occur at any time. Since memory failure rates are low and the obsolescence period is quite short, memory companies can guarantee memory for life. Virtually every large memory company carries a product with such a warranty.
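For what it's worth, the "just as likely at any time" claim is the memoryless property of a constant-hazard (exponential) lifetime model, the per-module view of a Poisson failure process. A minimal sketch, using an invented 2%/year failure rate (not a real vendor figure):

```python
import math

# Memoryless property: with a constant hazard rate, the probability of
# failing within the next year is the same whether the module is brand
# new or five years old. The rate below is assumed for illustration.
rate = 0.02  # hypothetical failures per module-year

def p_fail_next_year(age_years):
    """P(fail within 1 more year | survived to age_years)."""
    survive = lambda t: math.exp(-rate * t)
    return 1 - survive(age_years + 1) / survive(age_years)

print(p_fail_next_year(0))  # new module
print(p_fail_next_year(5))  # five-year-old module: identical value
```

Both calls return 1 - e^(-0.02), which is why a lifetime warranty on a low-rate exponential part costs the vendor little.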
That doesn't change the fact that some memory produced is not up to specifications. The memory may only "fail" under certain loading and timing constraints. For many manufacturers, this simply means it gets qualified and sold at a lower speed (higher ns), much like CPUs. The point is that the testing they do in-house is often not thorough enough to duplicate the loading conditions of Netapp's requirements. I don't think Netapp's requirements are "out of spec" either; they are simply strict.
I would be interested in the failure rate of "NetApp" memory in contrast to "normal" Kingston memory. If the difference is so great, NetApp should be proud to publish their statistics without an NDA. I feel that NDAs, aside from pre-release announcements, are generally a way for companies to hide their shortcomings.
If that were the case, every company should publish all of their internal practices. The fact is they don't. So long as that continues to be the business climate, Netapp would be foolish to put themselves at a disadvantage. The NDA is also there to prevent competitors from finding out how Netapp does things so well and then copying it.
General science is done by publishing and peer review not through secrets.
Netapp is not in the business of doing general science, they are in the business of making money. I, as an investor, am quite happy they limit their "science" to only those things that will help revenue generation. If they feel doing a report on their memory testing will do so, great. If they do not, great. I have confidence in their management.
If the testing didn't do anything, why would Netapp bother?
Claims of memory testing value and memory superiority over other brands are greatly exaggerated by EVERY vendor. I've used inexpensive memory in systems that have been up for ages and expensive memory that failed miserably after several months in service. In the last year alone I replaced *GIGABYTES* (Really, I am NOT exaggerating) of what was supposed to be top quality memory certified by a large system vendor. "Our memory is much better than someone else's memory" - mostly bunk!
So you have had some bad experiences... I never claimed otherwise. But your particular bad luck (or perhaps, poor choice of vendors) does not disqualify my experience. So long as Netapp catches at least one bad memory chip before it gets to the customer, my statement is strictly true, no matter any data to the contrary. (Unless, I suppose, you can claim the testing somehow damages memory that otherwise wouldn't fail.)
The primary purpose of purchasing memory from equipment vendors is the upkeep of warranties and service. It is also the cost of covering your behind. Chances of getting axed because you bought failing memory from the equipment vendor are next to nil.
Perhaps you live in some magical candyland of perfect management. Or perhaps you are simply a beneficiary of this new age of low unemployment. However, people have indeed been "axed" for buying failing memory (and other parts) from the equipment vendor.
Chances of being axed if the memory happens to be third party, even with the same or lower failure rate than the OEM's, are astronomically higher.
I disagree.
The same goes for the disk drives.
Hard drives follow a different curve. They are very likely to die at the very beginning and after a certain time in service. Burning in drives at the beginning of their life greatly increases the odds that surviving drives will have a low failure rate during the service period. The last thing you want to do is send a customer a drive that will fail in the first weeks of service.
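The curve described above is the classic bathtub curve: hazard is high early (infant mortality), flat in mid-life, and rises again with wear-out. A toy simulation of the burn-in argument, with all parameters invented for illustration (a Weibull lifetime with shape < 1 models the decreasing early hazard, not real drive statistics):

```python
import random

# Burn-in sketch: lifetimes drawn from a Weibull distribution with
# shape < 1 have a decreasing hazard rate (infant mortality). Culling
# units that die during a burn-in period leaves a surviving population
# with a lower failure rate over the service window.
random.seed(1)
SHAPE, SCALE = 0.5, 50.0       # assumed Weibull shape/scale (months)
BURN_IN, SERVICE = 1.0, 12.0   # assumed burn-in and service periods

lifetimes = [random.weibullvariate(SCALE, SHAPE) for _ in range(100_000)]

def service_failure_rate(population, already_survived):
    """Fraction of survivors that fail within the service window."""
    survivors = [t for t in population if t > already_survived]
    failed = sum(1 for t in survivors if t <= already_survived + SERVICE)
    return failed / len(survivors)

no_burn_in = service_failure_rate(lifetimes, 0.0)
with_burn_in = service_failure_rate(lifetimes, BURN_IN)
print(f"in-service failure rate, no burn-in:   {no_burn_in:.3f}")
print(f"in-service failure rate, with burn-in: {with_burn_in:.3f}")
```

Under these assumptions the burned-in population shows a noticeably lower in-service failure rate, which is exactly why you burn drives in before shipping them.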
Different curve but the same principle. While most of the failures do follow the curve you describe, there are still the "out of spec" failures like the memory ones that happen strictly because of loading, not burn-in time. Many drives that fail in a Netapp can be used in your SCSI PC for years without any problems, because they don't talk to the drive in the same way.
I've had those too, even from NetApp, but one can always blame transportation, even though the drives should withstand several G's of shock and they're usually tucked into globs of foam.
Ahh, I see. You have a personal axe to grind against Netapp, so you just want to toss in a snipe at every opportunity. Sorry, I thought you were interested in a serious discussion.
In addition, drives in arrays must behave in a way as not to disturb other drives.
No duh. Guess what - memory in groups must also behave in a way as not to disturb the other memory (and other stuff going on on the motherboard).
Jeff Sloan has said they've now certified certain "direct from vendor" parts to be as good as Netapp supplied memory. My guess is either they have stepped down their internal testing, or the difference in failure rate has become too minimal to matter.
Exactly!
Regardless, I stand by my statement as having been true until whichever of the events above occurred, which had to have been within the past year or two.
BTW, how many statisticians does NetApp employ to collect and thoroughly analyze their data? Is there an audit of the results similar to the scrutiny financial reports are given?
Again, you could probably find out such information with an NDA, if they are willing to give it. There are certainly people in customer support responsible for tracking reliability data and breaking it down by components, filer, OS, etc.
Bruce
On Tue, 31 Aug 1999 sirbruce@ix.netcom.com wrote:
If the difference is so great NetApp should be proud to publish their statistics without an NDA.
If that were the case, every company should publish all of their internal practices.
We are definitely not on the same channel. I said nothing about their practices. How they obtain the results was not a concern of mine. As long as they publish the results, they can make claims based on them. If they don't publish such results, any claims are just spin.
So long as Netapp catches at least one bad memory chip before it gets to the customer, my statement is strictly true, no matter any data to the contrary.
No, any claim that NetApp memory is significantly superior to others is not true in this case. One module out of thousands is hardly meaningful statistical data.
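The statistical point can be made concrete: a failure rate estimated from a single caught module has an enormous confidence interval. A rough sketch using a crude normal-approximation (Wald) interval, with invented counts; the Wald interval is itself optimistic at such small counts, which only strengthens the point:

```python
import math

# Illustrative only: 1 bad module caught out of 5000 screened is a
# hypothetical count, not real NetApp data.
def rate_ci(failures, n, z=1.96):
    """Approximate 95% CI for a binomial proportion (Wald interval)."""
    p = failures / n
    half = z * math.sqrt(p * (1 - p) / n)
    return max(0.0, p - half), p + half

lo, hi = rate_ci(failures=1, n=5000)
print(f"point estimate: {1/5000:.4%}, 95% CI: {lo:.4%} .. {hi:.4%}")
```

The interval runs from zero to roughly three times the point estimate, so a single caught module cannot distinguish "meaningfully better memory" from noise.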
(Unless, I suppose, you can claim the testing somehow damages memory that otherwise wouldn't fail.)
Ahh, you read my mind. It is unlikely that they would damage a lot of memory during testing, but damaging one is certainly plausible. Such a mishap would nullify the benefit of the one module described above.
Perhaps you live in some magical candyland of perfect management. Or perhaps you are simply a beneficiary of this new age of low unemployment. However, people have indeed been "axed" for buying failing memory (and other parts) from the equipment vendor.
That may be true, but I bet there were far fewer (percentage-wise) of those cases than cases where someone purchased third-party components, all other factors being the same.
Different curve but the same principle. While most of the failures do follow the curve you describe, there are still the "out of spec" failures like the memory ones that happen strictly because of loading, not burn-in time. Many drives that fail in a Netapp can be used in your SCSI PC for years without any problems, because they don't talk to the drive in the same way.
I don't disagree with you on this point in reference to drives. I never did. I bet that drives are still the most likely component to fail in a computer system.
Ahh, I see. You have a personal axe to grind against Netapp, so you just want to toss in a snipe at every opportunity. Sorry, I thought you were interested in a serious discussion.
No, I simply stated that one of the drives failed. No big deal, this happens quite often with other manufacturers.
No duh. Guess what - memory in groups must also behave in a way as not to disturb the other memory (and other stuff going on on the motherboard).
I don't think memory is as likely to influence other modules, especially since today's modules are of relatively high density, which means you'll only have a couple of pieces of memory per system, a number significantly smaller than the number of drives.
Now, since this is leading nowhere, it's time to end the polemics.
Tom