Yes, I have 32 MB of NVRAM, but at 50% full or 10 seconds, the NVRAM flushes. I only buffer 16 MB before it takes a data dump *grin*, if I understand correctly.
You do understand correctly, but while the "flush" is occurring (actually, it's called a consistency point, or CP for short), the other 16 MB of NVRAM is still being put to good use journaling new file system modifications being made by clients. In other words, you and your filer are enjoying the full 32 MB of "benefit" from the NVRAM: *all* of it is being used for something. There is no waste here! :-)
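If it helps to picture it, here is a toy sketch of the two-half scheme in Python. This is purely illustrative on my part: it is not how the filer's software is actually written, and every name in it is invented.

    # Toy illustration only: not real filer code, all names are invented.
    class NvramJournal:
        def __init__(self, size_mb=32):
            self.half_bytes = (size_mb // 2) * 1024 * 1024  # each half is 16 MB
            self.active = []      # half currently journaling client modifications
            self.flushing = []    # half being written to disk during a CP
            self.active_bytes = 0

        def log_write(self, op, nbytes):
            # Every incoming modification is journaled into the active half.
            self.active.append(op)
            self.active_bytes += nbytes
            if self.active_bytes >= self.half_bytes:  # the 50% trigger
                self.start_consistency_point()        # (the 10-second timer is the other trigger)

        def start_consistency_point(self):
            # Swap halves: the full half drains to disk while the other half
            # keeps journaling new writes, so all 32 MB stays busy.
            self.flushing, self.active = self.active, []
            self.active_bytes = 0
            self.flush_to_disk(self.flushing)

        def flush_to_disk(self, ops):
            pass  # in reality: build stripes, compute parity, write to disk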
In a clustered environment, I just use that other half of NVRAM to buffer to the other box instead of mirroring myself in case of a write failure (power, etc).
In an unclustered environment, you are already using that "other half" of NVRAM, as described above. When you cluster two systems together, though, well... we have a white paper that explains what happens much better than I can here:
http://www.netapp.com/technology/level3/1004.html
Two clients, both on separate 100 Mbit interfaces, creating spool files for my news services using 'dd if=/dev/zero of=<filename> count=<x>'.
Hmmm. Benchmarks don't get much more write-intensive than this, do they? :-)
So large sequential writes then. Fair enough.
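For anyone following along, the workload dd generates there is just big sequential writes of zeros. A rough Python equivalent, where the 64 KB block size, the 1 GB total, and the "spoolfile" name are my own example values rather than anything from the original run:

    BLOCK = 64 * 1024            # write in 64 KB chunks
    TOTAL = 1 * 1024 ** 3        # 1 GB spool file
    zeros = b"\x00" * BLOCK

    with open("spoolfile", "wb") as f:   # placeholder file name
        written = 0
        while written < TOTAL:
            f.write(zeros)
            written += BLOCK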
When I was seeing about 1300 NFS ops/sec and 18-20 MBytes/Sec of input traffic, the CPU on the F760 would peg.
Yes. Writing is a more CPU-intensive operation than reading: there is a lot of shunting things into NVRAM, performing RAID parity calculations, and so on. None of that needs to be done on reads.
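To make that write-side work a little more concrete, here is a trivial illustration of the parity arithmetic (RAID 4 style XOR parity across the data disks). This is just the idea, not our actual code:

    def parity(blocks):
        # XOR the data blocks of a stripe together to get the parity block.
        p = bytearray(len(blocks[0]))
        for block in blocks:
            for i, byte in enumerate(block):
                p[i] ^= byte
        return bytes(p)

    # e.g. a stripe across three data disks (toy 2-byte blocks):
    print(parity([b"\x01\x02", b"\x04\x08", b"\xf0\x0f"]).hex())  # -> f505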
Anyway, a *sustained* aggregate of ~20 MBytes/Sec write performance sounds excellent to me. You only have ~25 MBytes/Sec worth of pipes coming into the filer (2 x 100 Mbits/Sec), and the thing is stashing data away onto disks, with RAID, at 80% of the theoretical maximum rate at which data can arrive at its doorstep from the network. Nope... nothing to be ashamed of there, I'm afraid. The F760 is a beast and a half! ;-)
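Back-of-the-envelope, in case anyone wants to check my arithmetic:

    pipes_mbit  = 2 * 100            # two 100 Mbit/s interfaces
    pipes_mbyte = pipes_mbit / 8.0   # ~25 MBytes/Sec of inbound pipe
    sustained   = 20.0               # ~20 MBytes/Sec observed
    print(sustained / pipes_mbyte)   # 0.8, i.e. ~80% of the theoretical max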
Keith