On 08/31/99 14:10:34 you wrote:
No, publishing results *without* saying how those results were obtained is spin. By your suggestion, you would have no problem if Auspex claimed to get 10 million NFS ops @ 1.2ms. All they would have to do is publish the numbers; they wouldn't have to say how they got them.
But I suppose that's too general. We were talking specifically about publishing certain reliability statistics. To my knowledge, Netapp has done so, and claimed 99.99x% reliability. No, they didn't break it down by each component. Given how other vendors treat the same issue, I don't feel Netapp should have to produce additional results showing that their memory in particular is more reliable. I also feel that doing so would only invite the question of how the results were obtained, so that someone else could try to reproduce them, and that's when you get into publishing internal practices.
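For anyone unsure what a figure like "99.99x%" actually buys you, here is a minimal sketch of the standard arithmetic converting an availability percentage into expected downtime per year. The exact figure Netapp claimed isn't stated above, so the percentages below are just sample values for illustration.

```python
# Convert an availability percentage into expected downtime per year.
# Sample values only; the thread does not give Netapp's exact figure.

MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600 (ignoring leap years)

def downtime_minutes_per_year(availability_percent):
    """Expected unavailable minutes per year at the given availability."""
    return MINUTES_PER_YEAR * (1 - availability_percent / 100)

for pct in (99.9, 99.99, 99.999):
    print(f"{pct}% available -> {downtime_minutes_per_year(pct):.1f} min/year down")
```

Each extra "nine" cuts expected downtime by a factor of ten, which is why the difference between 99.99% and 99.999% matters far more than it looks.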
I don't think I ever claimed 'significant'. I feel it is significant enough to mention, but you suggested it was FUD and questioned whether it ever happened at all. I disproved that. As to exactly how many modules per thousand is enough to be 'significant', that's for you to decide. If you want the exact data, again, I suggest you ask Netapp, not me. I can only tell you what I know to be true.
I don't think it's very plausible that the module could be damaged in testing in such a way that it doesn't fail at Netapp, but will fail at the customer site.
There is another side to this... a customer who does not load their filers the 'right' way may be able to run fine with directly supplied memory that normally would not survive Netapp testing. However, I do not think this should be allowed to skew the data; the customer would still see failures under the right circumstances. I don't think Netapp should be faulted for testing their memory under conditions the customer's filer may never experience, to ensure its reliability under those conditions.
I bet you there aren't, because those "other factors" *aren't* the same, and the third party component really was less reliable.
That I don't know. My point is simply that, distribution curves aside, memory has "out of spec" failures just like drives do, apart from "burn-in" failures, and Netapp's testing catches those out-of-spec failures in memory that are not caught by Kingston, et al. Or at least, it *did* at one time.
And why didn't you just simply state all the drives that didn't fail? Don't play innocent; you weren't just making a casual remark.
I'm not sure; one could argue that although reduced in number, memory is more tightly coupled to other memory than a drive is to another drive. It all depends on exactly what problem you're talking about. But in either case it's irrelevant; more likely or less, the point still stands that these are issues for memory as well as disk, and if you think Netapp testing catches those issues for disk, you should accept that Netapp testing catches those issues for memory.
Now, since this is leading nowhere, it's time to end the polemics.
I agree.
Bruce
On Tue, 31 Aug 1999 sirbruce@ix.netcom.com wrote:
And why didn't you just simply state all the drives that didn't fail? Don't play innocent; you weren't just making a casual remark.
I WAS making just a casual remark. I am more impressed with what NetApp did than with any other vendor so far, workstation, server, or otherwise. I am only professionally involved with NetApp, i.e. I'm not a stockholder. I am very critical, of my own work and of someone else's. I think that bears more value than patting someone on the back. For a company, pats on the back come in the form of orders and growth of their stock value.
Tom