On Thu, 24 Feb 2000, Bruce Sterling Woodcock wrote:
Yes, but with the 8MB NVRAM, you wait once. The key is that the client doesn't have to wait for the final disk write, just the final write to NVRAM. So for a 10MB file:
8MB NVRAM:
  Send 4MB (4s)
  Send 4MB (4s)
  Wait for 4MB write (4w)
  Send 2MB (2s)
  Total time = 10s + 4w
4MB NVRAM:
  Send 2MB (2s)
  Send 2MB (2s)
  Wait for 2MB write (2w)
  Send 2MB (2s)
  Wait for 2MB write (2w)
  Send 2MB (2s)
  Wait for 2MB write (2w)
  Send 2MB (2s)
  Total time = 10s + 6w
I like your example; let's bump it up to 100MB:
8MB NVRAM:
  Send 4MB (4s)
  Send 4MB (4s)
  Wait for 4MB write (4w)
  Send 4MB (4s)
  Wait for 4MB write (4w)
  ...
  Send 4MB (4s)
  Total time = 100s + 92w
4MB NVRAM:
  Send 2MB (2s)
  Send 2MB (2s)
  Wait for 2MB write (2w)
  Send 2MB (2s)
  ...
  Send 2MB (2s)
  Total time = 100s + 96w
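For what it's worth, here is the same back-of-the-envelope accounting in a few lines of Python. This is just the simplified model above (s = time to send 1MB, w = time to flush 1MB to disk, NVRAM split into two halves), not a measurement of an actual filer:

    import math

    def total_time(file_mb, nvram_mb):
        # Simplified accounting from the examples above: the client fills
        # one NVRAM half while the other flushes, and every refill after
        # the first two has to wait for a full half-size flush.
        half = nvram_mb // 2
        fills = math.ceil(file_mb / half)
        waits = max(0, fills - 2)
        return "%ds + %dw" % (file_mb, waits * half)

    print(total_time(10, 8), total_time(10, 4))    # 10s + 4w    10s + 6w
    print(total_time(100, 8), total_time(100, 4))  # 100s + 92w  100s + 96w

Run it with larger files and the gap between the 8MB and 4MB configurations stays pinned at 4w.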
This is certainly not the 40% penalty you advertised. In fact, as the size of the file increases the difference remains constant, i.e. 4w. In addition, with larger NVRAM capacities and no pre-NVRAM caching, NFS requests may be dropped, forcing the client to retransmit. I agree with you that this is not the whole picture and that there is overhead we have not considered here, like interleaving, writing small files, or rewriting the same block, but simply saying that larger NVRAMs will necessarily improve performance significantly is a fallacy.

In the past I approached NetApp and asked whether they had ever considered breaking the NVRAM up into smaller pieces, making it more of a circular buffer (a rough sketch of what I mean is below). This would be the optimal solution, provided the overhead of doing so is not significant. NetApp dismissed the idea, perhaps rightly so, perhaps not, claiming that it would make the code too complex.

I'm NOT against larger NVRAMs/write caches; I'm for more granular NVRAMs.
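To make the circular-buffer idea concrete, here is a rough sketch in the same spirit as the accounting above. The mechanics are purely my assumptions (a segment starts flushing as soon as it fills, flushes happen one at a time, and the client stalls only when it has to reuse a segment that is still flushing); it says nothing about how a filer actually schedules its consistency points:

    def client_time(file_mb, nvram_mb, nseg, s=1.0, w=2.0):
        # Toy pipeline: nseg equal NVRAM segments used round-robin.  s and w
        # are assumed per-MB send and flush times.  Returns the time at which
        # the last byte is accepted into NVRAM (the client never waits for
        # the final flush) and the longest single stall the client saw.
        seg = nvram_mb / nseg
        seg_done = [0.0] * nseg       # when each segment's last flush finishes
        disk_free = 0.0               # when the disk finishes its current flush
        t = sent = worst_stall = 0.0
        i = 0
        while sent < file_mb:
            chunk = min(seg, file_mb - sent)
            stall = max(0.0, seg_done[i % nseg] - t)   # segment still flushing?
            worst_stall = max(worst_stall, stall)
            t += stall + chunk * s                     # wait, then fill the segment
            start = max(t, disk_free)                  # flushes are serialized
            disk_free = start + chunk * w
            seg_done[i % nseg] = disk_free
            sent += chunk
            i += 1
        return t, worst_stall

    for nseg in (2, 4, 8, 16):
        print(nseg, client_time(100, 8, nseg))

With those made-up numbers the total hardly moves (the disk is the bottleneck either way), but the longest single stall shrinks as the segments get smaller, which is the behaviour I care about given the retransmission issue above.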
Tom