----- Original Message -----
From: tkaczma@gryf.net
Cc: toasters@mathworks.com
Sent: Thursday, February 24, 2000 1:14 PM
Subject: Re: NVRAM memory
> Unless you are doing cp to cp, when you're waiting for disk. This is exactly the scenario I painted. Think about it: at 100% performance, i.e. cp to cp, the NVRAM will not be empty for long. If the NVRAM is large, it will take more time to flush the cached data to disk before there is any more space in the NVRAM to put new stuff in. If the NVRAM is smaller, the waits will be shorter.
And the amount of data written is also smaller, so it evens out. Except, of course, the overhead for each wait state, the not-quite-context-switch for the Netapp, the backing off of the client, and the final client write (which does not have to wait on disk.)
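The "it evens out" point can be put in a rough back-of-envelope model. The sketch below simulates a double-buffered NVRAM (the cp-to-cp case): the client fills one half while the other half flushes to disk. All the rates and sizes are illustrative assumptions, not measured filer figures.

```python
# Toy model of a double-buffered NVRAM at saturation: the client fills
# one half while the other half flushes to disk. Rates are assumptions.

def total_write_time(total_mb, half_mb, client_mbps, disk_mbps):
    """Return total seconds to get total_mb onto disk through an
    NVRAM with two halves of half_mb each."""
    t = 0.0                  # client's clock
    disk_free = 0.0          # when the disk finishes its current flush
    free = [0.0, 0.0]        # when each NVRAM half becomes reusable
    i = 0
    remaining = total_mb
    while remaining > 0:
        chunk = min(half_mb, remaining)
        t = max(t, free[i])          # stall until this half has flushed
        t += chunk / client_mbps     # fill it at client speed
        disk_free = max(disk_free, t) + chunk / disk_mbps
        free[i] = disk_free          # half is reusable once flushed
        i = 1 - i
        remaining -= chunk
    return disk_free

# At saturation the disk is the bottleneck either way: halving the
# NVRAM changes the size of each stall, not the total elapsed time.
small = total_write_time(500, 8, 100, 50)   # 16MB NVRAM, two 8MB halves
large = total_write_time(500, 16, 100, 50)  # 32MB NVRAM, two 16MB halves
```

With the example rates (client 100MB/s, disk 50MB/s, 500MB written), both configurations finish in essentially the disk-bound time of about 10 seconds; the smaller NVRAM just stalls the client more often and for shorter periods.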
> And with a larger NVRAM you'll have to wait longer for it to become available. The performance will be choppier. At 100% utilization a smaller NVRAM may actually smooth out the performance.
I could believe this, but the final time will still be less, even if it is choppier. And since you're talking about one huge write, users won't notice the choppiness.
> With adequate pre-NVRAM caching no requests have to be lost.
Like I said before, it's not a matter of requests being lost. If the filer stops responding, the client will back off sending requests. This is neither here nor there; I'm just pointing out you won't get that continuous throughput.
>> The NVRAM itself is not always directly utilized, but the size of the NVRAM dictates the size of the DRAM write cache, so the result is the same.
> I mentioned this someplace, perhaps in a later message.
Then you should have realized not to mention it here, because it's irrelevant. (Claiming the size of NVRAM is irrelevant since it's not directly utilized is missing the point.)
Look, if you don't believe me, feel free to take out half the NVRAM in your filer, write a 100MB file, and see if it takes more or less time.
> Perhaps 100MB is not large enough for newer filers, which have substantially larger write caches/NVRAM.
With 32MB NVRAM in the current generation (not the next), it should be large enough. If you prefer, make it a 500MB file.
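The experiment is easy to script. Here is a minimal sketch; the mount path is a placeholder for wherever the filer is mounted, and the fsync is there so client-side caching doesn't hide the result.

```python
# Minimal timing harness for the proposed experiment: write a large
# file and time it before and after changing the NVRAM. The path used
# at the bottom is a placeholder NFS mount, not a real one.
import os
import time

def time_write(path, size_mb, chunk_mb=1):
    """Write size_mb of zeros to path and return the elapsed seconds."""
    buf = b"\0" * (chunk_mb * 1024 * 1024)
    start = time.monotonic()
    with open(path, "wb") as f:
        for _ in range(size_mb // chunk_mb):
            f.write(buf)
        f.flush()
        os.fsync(f.fileno())   # wait until the data is on stable storage
    return time.monotonic() - start

# e.g.: print(time_write("/mnt/filer/testfile", 500))
```

Run it once with the full NVRAM and once with half of it, same file size, and compare the two elapsed times.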
>> Your conclusion is spurious; rewriting the same block isn't the issue.
> It is certainly an issue,
But it's not THE issue, which is whether or not general write performance is going to be faster with more NVRAM. It is, and not just in the "rewrite the same block" case as you were suggesting.
Bruce