On 08/31/99 14:10:34 you wrote:
>On Tue, 31 Aug 1999 sirbruce(a)ix.netcom.com wrote:
>
>> >If the difference is so great NetApp should be proud to publish
>> >their statistics without an NDA.
>>
>> If that were the case, every company should publish all of their internal
>> practices.
>
>We are definitely not on the same channel. I said nothing about their
>practices. How they obtain the results was not a concern of mine. As
>long as they publish the results they can make claims based on them. If
>they don't publish such results any claims are just spin.
No, publishing results *without* saying how those results were obtained
is spin. By your reasoning, you would have no problem if Auspex claimed
to get 10 million NFS ops @ 1.2ms. All they have to do is publish the
data; they don't have to say how they got it.
But I suppose that's too general. We were talking specifically about
publishing certain reliability statistics. To my knowledge, Netapp has
done so, and claimed 99.99x% reliability. No, they didn't break it
down by each component. Given how other vendors treat the same issue,
I don't feel Netapp should have to produce additional results showing
that their memory in particular is more reliable. I also feel that doing
so would only invite the question of how the results were obtained, so
that someone else could try to reproduce them, and that's when you get
into publishing internal practices.
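Just to put rough numbers on what a figure like 99.99x% means in
practice (back-of-envelope arithmetic only; I'm reading it as an
availability/uptime figure, which is my assumption, not a claim about
how Netapp measures or defines it):

    # Back-of-envelope: downtime per year implied by a given availability
    # figure.  Illustrative only; says nothing about how such a figure is
    # actually measured or defined.
    MINUTES_PER_YEAR = 365.25 * 24 * 60

    for availability in (0.999, 0.9999, 0.99999):
        downtime = MINUTES_PER_YEAR * (1.0 - availability)
        print("%.3f%% available -> ~%.0f minutes of downtime per year"
              % (availability * 100, downtime))

The point is that the last digit matters a great deal, which is exactly
why people want to know how the number was measured.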
>> So long as Netapp catches at least one bad
>> memory chip before it gets to the customer, my statement is strictly
>> true, no matter any data to the contrary.
>
>No, any claim that NetApp memory is significantly superior to others is
>not true in this case. One module out of thousands is hardly meaningful
>statistical data.
I don't think I ever claimed 'significant'. I feel it is significant
enough to mention, but you called it FUD and questioned whether it ever
happened at all. I disproved that. As to exactly how many modules per
thousand is enough to be 'significant', that's for you to decide. If
you want the exact data, again, I suggest you ask Netapp, not me. I
can only tell you what I know to be true.
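If you want to see why one caught module can't settle the question of
'significance' either way, here's a toy calculation. The batch size and
defect rates are invented for illustration; I don't have Netapp's real
numbers:

    # Toy calculation: probability of catching at least one bad module in
    # a screened batch, for a few hypothetical defect rates.  The batch
    # size and rates are made up for illustration only.
    batch_size = 5000

    for defect_rate in (0.0001, 0.001, 0.01):
        p_at_least_one = 1.0 - (1.0 - defect_rate) ** batch_size
        print("defect rate %.2f%%: P(catch >= 1 in %d modules) = %.3f"
              % (defect_rate * 100, batch_size, p_at_least_one))

Catching one module is consistent with underlying rates that differ by a
couple of orders of magnitude, which is exactly why the raw numbers are
Netapp's to publish, not mine.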
>> (Unless, I suppose, you can
>> claim the testing somehow damages memory that otherwise wouldn't fail.)
>
>Ahh, you read my mind. It is unlikely that they would damage a lot of
>memory during testing, but damaging one is certainly plausible. Such a
>mishap would nullify the benefit of the one module described above.
I don't think it's very plausible that the module could be damaged in
testing in such a way that it doesn't fail at Netapp, but will fail
at the customer site.
There is another side to this... a customer who does not load their
filers the 'right' way may be able to run fine with directly supplied
memory that normally would not survive Netapp testing. However, I do
not think this should be allowed to skew the data; the customer would
still see failures under the right circumstances. I don't think Netapp
should be faulted because they test their memory under conditions the
customer's filer may never experience, to ensure its reliability under
those conditions.
>> Perhaps you live in some magical candyland of perfect management. Or
>> perhaps you are simply a beneficiary of this new age of low unemployment.
>> However, people have indeed been "axed" for buying failing memory (and
>> other parts) for the equipment vendor.
>
>That may be true, but I bet you there were a lot fewer (percentage-wise)
>of those cases than cases where someone purchased third-party components,
>all other factors held the same.
I bet you there weren't, because those "other factors" *weren't* the same,
and the third-party component really was less reliable.
>> Different curve but the same principle. While most of the failures do
>> follow the curve you describe, there are still the "out of spec" failures
>> like the memory ones that happen strictly because of loading, not burn-in
>> time. Many drives that fail in a Netapp can be used in your SCSI PC for
>> years without any problems, because they don't talk to the drive in the
>> same way.
>
>I don't disagree with you on this point in reference to drives. I never
>did. I bet that drives are still the most likely component to fail in a
>computer system.
That I don't know. My point is simply that, distribution curve aside,
memory has "out of spec" failures just like drives do, in addition to
"burn-in" failures, and Netapp's testing catches the former in memory
where Kingston, et al. do not. Or at least, it *did* at one time.
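To be clear about the distinction I keep drawing, here is a sketch of
the two failure classes. The models and parameters are entirely made up;
they only separate age-driven "burn-in" failures from load-driven
"out of spec" failures:

    from math import exp

    def burn_in_hazard(t_hours, shape=0.5, scale=1000.0):
        # Weibull hazard with shape < 1: the failure rate falls as the
        # part ages, the classic "infant mortality" pattern that a
        # burn-in period screens for.
        return (shape / scale) * (t_hours / scale) ** (shape - 1)

    def load_induced_rate(load_fraction, base=0.0001, k=5.0):
        # Made-up model: a failure rate driven by how hard the system
        # loads the part, independent of its age.
        return base * exp(k * load_fraction)

    for hours in (10, 100, 1000):
        print("burn-in hazard at %4d hours: %.6f"
              % (hours, burn_in_hazard(hours)))
    for load in (0.2, 0.6, 1.0):
        print("load-induced rate at %.0f%% load: %.6f"
              % (load * 100, load_induced_rate(load)))

A burn-in period addresses the first kind; it takes testing under load
to catch the second, which is what I'm saying Netapp's testing does.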
>> Ahh, I see. You have a personal axe to grind against Netapp, so you
>> just want to toss in a snipe at every opportunity. Sorry, I thought
>> you were interested in a serious discussion.
>
>No, I simply stated that one of the drives failed. No big deal, this
>happens quite often with other manufacturers.
And why didn't you just as simply mention all the drives that didn't fail?
Don't play innocent; you weren't just making a casual remark.
>> No duh. Guess what - memory in groups must also behave in such a way as
>> not to disturb the other memory (and other stuff going on on the motherboard).
>
>I don't think memory is as likely to influence other modules, especially
>since today's modules are of relatively high density, which means you'll
>only have a couple of pieces of memory per system, a number that is
>significantly smaller than the number of drives.
I'm not sure; one could argue that although reduced in number, memory
is more tightly coupled to other memory than a drive is to another
drive. It all depends on exactly what problem you're talking about.
But in either case it's irrelevant; more likely or less, the point is
still true that these are issues for memory as well as disk, and if you
think Netapp's testing catches those issues for disk, you should accept
that it catches them for memory.
>Now, since this is leading nowhere, it's time to end the polemics.
I agree.
Bruce