On Thu, 24 Feb 2000, Bruce Sterling Woodcock wrote:
> Not necessarily true. The amount of time you have to wait depends on how much you can cache; the disk may be slower, but the more NVRAM I have, the longer it takes to fill up and the less time I have to wait for the disk write to complete (if I have to wait at all).
Unless you are doing cp to cp while you're waiting for disk. This is exactly the scenario I painted. Think about it: at 100% utilization, i.e. going cp to cp, the NVRAM will not be empty for long. If the NVRAM is large, it will take more time to flush the cached data to disk before you have any more space in the NVRAM to put new stuff in. If the NVRAM is smaller, the waits will be shorter.
> Also, once you start filling up, your writes won't be "continuous" because the client will start backing off when the filer stops responding.
And with a larger NVRAM you'll have to wait longer for it to become available, so the performance will be choppier. At 100% utilization a smaller NVRAM may actually smooth out the performance. With adequate pre-NVRAM caching, no requests have to be lost.
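To put rough numbers on it (Python, a toy fill-then-flush model with made-up rates; real filers overlap the flush with incoming writes and split the NVRAM in halves, so this only illustrates the shape of the argument, not actual behavior):

    def saturation_profile(nvram_mb, client_mb_s, disk_mb_s):
        """Return (stall per cycle, cycles per minute, sustained MB/s)."""
        fill_time = nvram_mb / client_mb_s   # time for the client to fill the NVRAM
        stall_time = nvram_mb / disk_mb_s    # time the client waits for the flush
        cycle = fill_time + stall_time
        return stall_time, 60.0 / cycle, nvram_mb / cycle

    for nvram in (8, 32):   # hypothetical NVRAM sizes in MB
        stall, cps, mb_s = saturation_profile(nvram, client_mb_s=20, disk_mb_s=10)
        print(f"{nvram:2d} MB NVRAM: {stall:.1f}s stall per cp, "
              f"{cps:.1f} cps/min, {mb_s:.1f} MB/s sustained")

The sustained rate comes out identical either way; the only difference is that the bigger NVRAM takes its waits in longer, less frequent chunks.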
> The NVRAM itself is not always directly utilized, but the size of the NVRAM dictates the size of the DRAM write cache, so the result is the same.
I mentioned this somewhere, perhaps in a later message.
> Look, if you don't believe me, feel free to take out half the NVRAM in your filer, write a 100MB file, and see if it takes more or less time.
Perhaps 100MB is not large enough for newer filers, which have substantially larger write caches/NVRAM.
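Something along these lines would do for the timing test, with the file sized well past the write cache and the write forced all the way out; the mount point and sizes here are just placeholders:

    import os, time

    PATH = "/mnt/filer/nvram_test.dat"   # wherever the filer volume is mounted
    SIZE_MB = 1024                       # well beyond 100MB for a big-cache filer
    CHUNK = b"\0" * (1 << 20)            # 1MB per write

    start = time.time()
    with open(PATH, "wb") as f:
        for _ in range(SIZE_MB):
            f.write(CHUNK)
        f.flush()
        os.fsync(f.fileno())             # don't let the client-side cache hide the result
    elapsed = time.time() - start
    print(f"wrote {SIZE_MB} MB in {elapsed:.1f}s ({SIZE_MB / elapsed:.1f} MB/s)")

Run it once with full NVRAM and once with half, and compare the sustained MB/s rather than the first few seconds, since those mostly measure the cache.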
> Your conclusion is spurious; rewriting the same block isn't the issue.
It is certainly an issue. If you rewrite the same block over and over, you'll be overwriting a small area of the write cache, which leads to small disk writes, but at the same time you'll be going cp to cp constantly because, as I understand it, the NVRAM records the transaction, not the outcome.
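If I have that right, the effect is easy to picture. Here's a toy model (made-up capacity, not WAFL's actual log format) of rewriting one block over and over: the log fills and forces cps even though each flush only has a single dirty block in it.

    LOG_CAPACITY = 1000        # pretend the NVRAM log holds 1000 entries

    nvram_log = []             # one entry per incoming write request
    dirty_blocks = {}          # the outcome: what must reach disk at the cp
    cps_triggered = 0

    for i in range(5000):      # rewrite the same block 5000 times
        nvram_log.append(("write", "block 42", f"payload {i}"))
        dirty_blocks["block 42"] = f"payload {i}"   # still just one dirty block
        if len(nvram_log) >= LOG_CAPACITY:          # log full -> consistency point
            cps_triggered += 1
            nvram_log.clear()                       # flush the (tiny) dirty set, free the log
            dirty_blocks.clear()

    print(f"cps forced by the log filling up: {cps_triggered}")  # 5
    print("dirty blocks written per cp: 1")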
Tom