On Mon, 30 Aug 1999 sirbruce@ix.netcom.com wrote:
> Netapp does additional testing and gets some memory to fail before selling it to you, the consumer. Statistically the memory you get from Netapp has to have a lower failure rate.
As far as I remember, memory failure follows a Poisson distribution, implying that a failure is just as likely to occur at any time. Since memory failure rates are low and the obsolescence period is quite short, memory companies can guarantee memory for life; virtually every large memory company carries a product with such a warranty. I would be interested in the failure rate of "NetApp" memory in contrast to "normal" Kingston memory. If the difference is so great, NetApp should be proud to publish their statistics without an NDA. I feel that NDAs, aside from pre-release announcements, are generally a way for companies to hide their shortcomings. Science in general is done by publishing and peer review, not through secrets.
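To make the Poisson point concrete: exponential lifetimes are memoryless, so a module that has already survived a long stress test fails in the next interval at the same rate as a brand-new one, and burn-in can't select "stronger" modules under that model. Here's a minimal simulation sketch; the failure rate and sample sizes are hypothetical illustration values, not real memory statistics:

```python
import random

random.seed(1)
RATE = 1.0 / 100_000           # assumed failures per hour (hypothetical)
N = 200_000                    # number of simulated modules

# Exponential lifetimes, i.e. failures arriving as a Poisson process.
lifetimes = [random.expovariate(RATE) for _ in range(N)]

# Fraction of brand-new modules failing in their first 1,000 hours...
fresh = sum(1 for t in lifetimes if t < 1_000) / N

# ...versus the same fraction among modules that already survived a
# 10,000-hour "burn-in" (remaining life measured from end of burn-in).
survivors = [t - 10_000 for t in lifetimes if t > 10_000]
aged = sum(1 for t in survivors if t < 1_000) / len(survivors)

print(f"fresh: {fresh:.4f}  burned-in: {aged:.4f}")  # roughly equal
```

If memory really behaved this way, the two printed fractions would match to within sampling noise, which is exactly why the "our testing weeds out the weak parts" claim needs scrutiny.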
> If the testing didn't do anything, why would Netapp bother?
Claims of memory testing value and memory superiority over other brands are greatly exaggerated by EVERY vendor. I've used inexpensive memory in systems that have been up for ages and expensive memory that failed miserably after several months in service. In the last year alone I replaced *GIGABYTES* (Really, I am NOT exaggerating) of what was supposed to be top quality memory certified by a large system vendor. "Our memory is much better than someone else's memory" - mostly bunk!
The primary purpose of purchasing memory from equipment vendors is keeping warranties and service contracts intact. It is also the cost of covering your behind: the chances of getting axed because you bought failing memory from the equipment vendor are next to nil, while the chances of being axed if the memory happens to be third party, even with the same or lower failure rate than the OEM's, are astronomically higher.
The same goes for the disk drives.
Hard drives follow a different curve, the classic "bathtub": they are very likely to die at the very beginning of their life and again after a certain time in service. Burning in drives at the beginning greatly increases the odds that the surviving drives will have a low failure rate during the service period. The last thing you want to do is ship a customer a drive that will fail in the first weeks of service. I've had those too, even from NetApp, but one can always blame transportation, even though the drives should withstand several G's of shock and they're usually tucked into globs of foam.
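The reason burn-in pays off for drives but not for memory is that early drive life violates the constant-rate assumption. A sketch of that, assuming a Weibull lifetime with shape < 1 (a decreasing hazard, i.e. infant mortality); all parameters here are made up for illustration:

```python
import random

random.seed(2)
SHAPE = 0.5                    # shape < 1: early failures dominate
SCALE = 500_000                # characteristic life in hours (hypothetical)
N = 100_000                    # number of simulated drives
BURN_IN = 500                  # hours of pre-shipment burn-in
SERVICE = 8_760                # first year of service, in hours

# Weibull lifetimes: random.weibullvariate(scale, shape).
lifetimes = [random.weibullvariate(SCALE, SHAPE) for _ in range(N)]

# First-year failure fraction if every drive ships with no burn-in.
no_burn = sum(1 for t in lifetimes if t < SERVICE) / N

# Ship only drives that survived burn-in; count their first-year
# failures, measuring service time from the end of burn-in.
shipped = [t - BURN_IN for t in lifetimes if t > BURN_IN]
burned = sum(1 for t in shipped if t < SERVICE) / len(shipped)

print(f"no burn-in: {no_burn:.4f}  with burn-in: {burned:.4f}")
```

With a decreasing hazard, the burned-in population shows a visibly lower first-year failure fraction, which is the whole point of the exercise; run the same code with SHAPE = 1.0 and the advantage disappears, matching the memory case above.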
In addition, drives in arrays must behave in a way that does not disturb the other drives.
> Jeff Sloan has said they've now certified certain "direct from vendor" parts to be as good as Netapp supplied memory. My guess is either they have stepped down their internal testing, or the difference in failure rate has become too minimal to matter.
Exactly!
BTW, how many statisticians does NetApp employ to collect and thoroughly analyze their data? Is there an audit of the results similar to the scrutiny financial reports are given?
Tom