On 08/30/99 16:41:59 you wrote:
On Mon, 30 Aug 1999 sirbruce@ix.netcom.com wrote:
A cautionary note - much of the supposedly "in-spec" memory Kingston or other vendors sell is not capable of handling the strict timing and loading requirements of a NetApp filer.
This sounds like FUD. In fact it sounds like something Kingston would say about other memory vendors. Do you have data to substantiate this?
I'm sure NetApp might provide the data under NDA if you asked. I was speaking personally, however. I have *personally* seen tested Kingston memory come in and fail during internal testing on a filer. I have *personally* seen NetApp test filers they build for customers, have the memory in them fail, and have to replace and re-test it before shipping to the customer. I have *personally* seen filers experience crashes and spontaneous reboots due to not having the right memory in the right slots. I have *personally* seen the poor timing traces on oscilloscopes as EEs examined the problem.
I'm not saying don't buy memory from NetApp. We buy it there and will continue to buy it there because we want to maintain our service agreements. However, since you're making claims that memory sold by NetApp is superior, please show me the data from an independent source.
How could an independent source exist? You ask the impossible. But if you could get the data from all customers and somehow verify the source of the memory in them, I think you'd see a difference. It just stands to reason. Kingston sells memory to NetApp with a certain known probability of failure. Kingston sells that same memory to you, the consumer. NetApp does additional testing and gets some memory to fail before selling it to you, the consumer. Statistically, the memory you get from NetApp has to have a lower failure rate. If the testing didn't do anything, why would NetApp bother? The same goes for the disk drives.
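The argument is just screening: if some fraction of modules carry latent defects and burn-in catches most of them, the shipped population is cleaner. A toy simulation makes the point; every number in it is invented for illustration, and none are actual NetApp or Kingston figures:

import random

# Toy model of the screening argument. All numbers are invented;
# none are actual NetApp or Kingston figures.
POPULATION = 100000  # modules produced
DEFECT_RATE = 0.02   # assumed fraction with a latent defect
CATCH_RATE = 0.80    # assumed chance burn-in exposes a defect

random.seed(42)
modules = [random.random() < DEFECT_RATE for _ in range(POPULATION)]

# "Direct from vendor": everything ships, defects and all.
direct = sum(modules) / POPULATION

# "Vendor-tested": each defective module is caught with CATCH_RATE odds.
shipped = [d for d in modules if not (d and random.random() < CATCH_RATE)]
tested = sum(shipped) / len(shipped)

print(f"latent-defect rate, untested: {direct:.4f}")
print(f"latent-defect rate, screened: {tested:.4f}")

Screening can only help, though, if failures are concentrated in identifiably bad modules; that caveat is exactly where the Poisson objection below bites.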
Jeff Sloan has said they've now certified certain "direct from vendor" parts to be as good as NetApp-supplied memory. My guess is that either they have scaled back their internal testing, or the difference in failure rate has become too small to matter.
Bruce
On Mon, 30 Aug 1999 sirbruce@ix.netcom.com wrote:
NetApp does additional testing and gets some memory to fail before selling it to you, the consumer. Statistically, the memory you get from NetApp has to have a lower failure rate.
As far as I remember, memory failures follow a Poisson distribution, implying that a failure is just as likely to occur at any point in a module's life. Since memory failure rates are low and the obsolescence period is quite short, memory companies can guarantee memory for a lifetime. Virtually every large memory company carries a product with such a warranty. I would be interested in the failure rate of "NetApp" memory in contrast to "normal" Kingston memory. If the difference is so great, NetApp should be proud to publish their statistics without an NDA. I feel that NDAs, aside from pre-release announcements, are generally a way for companies to hide their shortcomings. General science is done through publishing and peer review, not through secrets.
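To make the objection concrete: an exponential (Poisson-process) lifetime is memoryless, so under that model burn-in buys nothing; survivors of the bench fail at the same rate as fresh parts. A sketch, with made-up MTBF and durations:

import random

# Memorylessness under the exponential model: burn-in survivors fail
# at the same rate as fresh parts. All parameters are made up.
random.seed(1)
MTBF = 50000.0   # assumed mean time to failure, hours
BURN_IN = 168.0  # one week of burn-in, hours
WINDOW = 8760.0  # one year of service, hours
N = 200000

lifetimes = [random.expovariate(1.0 / MTBF) for _ in range(N)]

fresh = sum(t < WINDOW for t in lifetimes) / N
survivors = [t - BURN_IN for t in lifetimes if t > BURN_IN]
burned = sum(t < WINDOW for t in survivors) / len(survivors)

print(f"first-year failure odds, no burn-in:    {fresh:.4f}")
print(f"first-year failure odds, after burn-in: {burned:.4f}")
# The two figures agree to within sampling noise: under this model,
# testing removes only the units that would have failed on the bench.

So whether vendor testing matters hinges entirely on whether real memory has an infant-mortality hump that the exponential model ignores.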
If the testing didn't do anything, why would NetApp bother?
Claims of memory-testing value and memory superiority over other brands are greatly exaggerated by EVERY vendor. I've used inexpensive memory in systems that have been up for ages, and expensive memory that failed miserably after several months in service. In the last year alone I replaced *GIGABYTES* (really, I am NOT exaggerating) of what was supposed to be top-quality memory certified by a large system vendor. "Our memory is much better than someone else's memory" - mostly bunk!
The primary purpose of purchasing memory from equipment vendors is the upkeep of warranties and service. It is also the cost of covering your behind. The chances of getting axed because you bought failing memory from the equipment vendor are next to nil. The chances of being axed if the memory happens to be third-party, even with the same or a lower failure rate than the OEM's, are astronomically higher.
The same goes for the disk drives.
Hard drives follow a different curve, the classic bathtub: they are most likely to die at the very beginning of their lives and again after a certain time in service. Burning in drives at the beginning of their lives greatly increases the odds that the surviving drives will have a low failure rate during the service period. The last thing you want to do is send a customer a drive that will fail in the first weeks of service. I've had those too, even from NetApp, but one can always blame transportation, even though the drives should withstand several G's of shock and are usually tucked into globs of foam.
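The bathtub shape is often modeled with a Weibull distribution whose shape parameter is below 1 early in life. Under that assumption (all parameters invented for illustration), the same burn-in experiment as above now shows a real benefit:

import random

# Infant mortality via a Weibull with shape < 1. Unlike the
# exponential case, burn-in survivors now fail less often.
# Shape, scale, and durations are invented for illustration.
random.seed(2)
SHAPE = 0.7       # assumed Weibull shape; < 1 means infant mortality
SCALE = 300000.0  # assumed characteristic life, hours
BURN_IN = 336.0   # two weeks of burn-in, hours
WINDOW = 8760.0   # one year of service, hours
N = 200000

lifetimes = [random.weibullvariate(SCALE, SHAPE) for _ in range(N)]

fresh = sum(t < WINDOW for t in lifetimes) / N
survivors = [t - BURN_IN for t in lifetimes if t > BURN_IN]
burned = sum(t < WINDOW for t in survivors) / len(survivors)

print(f"first-year failure odds, no burn-in:    {fresh:.4f}")
print(f"first-year failure odds, after burn-in: {burned:.4f}")
# Survivors do measurably better, because the steep early part of
# the hazard curve has already been burned off on the bench.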
In addition, drives in arrays must behave so as not to disturb the other drives.
Jeff Sloan has said they've now certified certain "direct from vendor" parts to be as good as NetApp-supplied memory. My guess is that either they have scaled back their internal testing, or the difference in failure rate has become too small to matter.
Exactly!
BTW, how many statisticians does NetApp employ to collect and thoroughly analyze their data? Is there an audit of the results similar to the scrutiny financial reports are given?
Tom
For your independent survey: with NetApp *supplied* memory in our F330 filers over a three-year lease, we had one reboot where the engineering team asked us to clean and reseat the memory. We had eleven F330 filers in production at one time and currently have four running, which are due for lease return this year. I will leave the statistics to you.
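Taking up that invitation, a rough back-of-the-envelope (assuming roughly eleven filers over roughly three years; the fleet actually shrank to four, so this overstates the exposure, and the true rate is somewhat higher):

filers = 11
years = 3
events = 1  # one memory-related reboot reported

# 33 filer-years is an upper bound on exposure, since the fleet shrank.
filer_years = filers * years
print(f"~{events / filer_years:.3f} memory incidents per filer-year "
      f"({events} event in at most {filer_years} filer-years)")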
-gdg
sirbruce@ix.netcom.com wrote:
But if you could get the data from all customers and somehow verify the source of the memory in them, I think you'd see a difference. It just stands to reason. Kingston sells memory to NetApp with a certain known probability of failure. Kingston sells that same memory to you, the consumer. NetApp does additional testing and gets some memory to fail before selling it to you, the consumer. Statistically, the memory you get from NetApp has to have a lower failure rate. If the testing didn't do anything, why would NetApp bother? The same goes for the disk drives.
Bruce