Jeff...
I'm glad your routing anomalies got sorted out. Regarding your subsequent mail...
Also, I've noticed a max input rate of about 20Mb/sec into the filer, then the CPU tops out on my F760.
I understand that the NVRAM can only process 16Mb at a time before the system really gums up, but I did expect a little more input performance.
Does anyone have any official or un-official comments on this?
If your F760 is still of the unclustered variety (which I'm sure it is, as we haven't quite yet shipped the F700 series cluster release of Data ONTAP), then you will currently be enjoying the full 32 MB capacity of your NVRAM. In a clustered pair of filers, however, each system's NVRAM is mirrored to the other, meaning that each filer's individual *effective* NVRAM capacity for the file system(s) that they are individually journaling is halved to 16 MB. They each still physically have 32 MB of NVRAM, of course.
On the performance front, I'm afraid you would need to better stipulate precisely what you mean by a "20 Mb/sec input rate" for me or anyone else at NetApp to be able to make a useful comment on whether it looks about right or not. How are you determining this number? Is this MBytes/Sec or MBits/Sec? How many clients are you using, and through what networks are they talking to the filer? What software are you running? What is that software really doing on the I/O front? How are the timings being made? Etc...
Keith
I understand that the NVRAM can only process 16Mb at a time before the system really gums up, but I did expect a little more input performance.
If your F760 is still of the unclustered variety ... then you will currently be enjoying the full 32 MB capacity of your NVRAM.
He's probably referring to the fact that we split NVRAM into two halves, with one being drained while the other is written to. A Consistency Point (CP) creation is forced if you fill up the active half of the NVRAM. If the previous CP has not completed, you're hosed until it does, so in some sense he's correct that, for an F760 or any other filer with 32MB of NVRAM, "the NVRAM can only process 16Mb at a time." Well, 16MB, not 16Mb.
For a more complete description of how this all works, see TR-3002, section 3.5 (http://www.netapp.com/technology/level3/3002.html#I35).
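The split-NVRAM back-pressure Karl describes can be sketched as a toy model. This is hypothetical illustration code, not Data ONTAP internals; the CP drain time is an assumed parameter, and the 16 MB half size follows from the 32 MB NVRAM described above.

```python
# Hypothetical toy model of the split-NVRAM journal described above
# (not Data ONTAP code; the CP drain time is an assumed parameter).
# Writes land in the active 16 MB half; filling it forces a Consistency
# Point (CP) that drains it to disk while the other half takes writes.
# If the previous CP hasn't finished, incoming writes stall.

NVRAM_TOTAL_MB = 32
HALF_MB = NVRAM_TOTAL_MB // 2       # 16 MB active journal half

def sustained_input_mb_s(offered_mb_s, cp_drain_s):
    """Sustained input rate: each 16 MB half can be accepted no faster
    than one half per max(fill time, CP drain time)."""
    fill_s = HALF_MB / offered_mb_s          # time clients take to fill a half
    cycle_s = max(fill_s, cp_drain_s)        # back-pressure if the CP is slower
    return HALF_MB / cycle_s

# Clients offering 25 MB/s against a 1-second CP drain get capped at 16 MB/s:
print(sustained_input_mb_s(25.0, 1.0))   # -> 16.0
```

The point of the model is that once clients can fill a half faster than a CP can drain one, throughput is pinned at 16 MB per CP cycle regardless of the offered load.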
--
Karl Swartz - Technical Marketing Engineer, Network Appliance
Work: kls@netapp.com    http://www.netapp.com/
Home: kls@chicago.com   http://www.chicago.com/~kls/
Also, I've noticed a max input rate of about 20Mb/sec into the filer, then the CPU tops out on my F760.
I understand that the NVRAM can only process 16Mb at a time before the system really gums up, but I did expect a little more input performance.
If your F760 is still of the unclustered variety (which I'm sure it is, as we haven't quite yet shipped the F700 series cluster release of Data ONTAP), then you will currently be enjoying the full 32 MB capacity of your NVRAM. In a clustered pair of filers, however, each system's NVRAM is mirrored to the other, meaning that each filer's individual *effective* NVRAM capacity for the file system(s) that they are individually journaling is halved to 16 MB. They each still physically have 32 MB of NVRAM, of course.
---
Yes, I have 32MB of NVRAM, but at 50% full or 10 seconds, the NVRAM flushes. I only buffer 16MB before it takes a data dump *grin*, if I understand correctly. In a clustered environment, I just use that other half of NVRAM as a buffer for the other box instead of mirroring myself, in case of a write failure (power, etc.).
On the performance front, I'm afraid you would need to better stipulate precisely what you mean by a "20 Mb/sec input rate" for me or anyone else at NetApp to be able to make a useful comment on whether it looks about right or not. How are you determining this number? Is this MBytes/Sec or MBits/Sec? How many clients are you using, and through what networks are they talking to the filer? What software are you running? What is that software really doing on the I/O front? How are the timings being made? Etc...
---
Two clients, both on separate 100Mb interfaces, creating spool files for my news services using 'dd if=/dev/zero of=<filename> count=<x>'. Hardware was dual Sparc 250s with dual 300MHz procs, 1GB RAM, Solaris 5.6 Generic_105181-07.
Both clients were local to the box on a Catalyst switch, entering on separate interfaces on the F760.
When I was seeing about 1300 NFS ops/sec and 18-20 megabytes/sec of input traffic, the CPU on the F760 would peg.
NVRAM writes took place every single second of the 'sysstat 1' on the console, with no more than 16, or occasionally 17, megabytes written per flush. (This is where I made the assumption that NVRAM buffering/flushing was the limiting factor on overall performance FROM the network.)
Both clients were nfs2/udp.
Hi Jeff, Keith,
I can second that. I just did a test with an F760, dual quad cards, with 2 x 100BaseT interfaces in use. The client was a big Sun Enterprise 6500, with 10 CPUs, 3GB of RAM, and two quad 100BT cards as well, again with only 2 ports connected so far.
We did a simple transfer (cp -rp * filer:/sybase) of a 24GB Sybase data file to the filer. The Sun client was able to fully (or thereabouts) saturate the ONE 100BaseT interface, doing about 10-12MB/s input rate according to 'sysstat 1'. The number of NFS ops/s was about 1300, and the CPU ranged from about 75-90%. The connection was via NFSv2/UDP.
We didn't try to start a second copy on the 2nd interface, as CPU % was relatively high. According to our benchmarks we should be able to scale to about 7500 ops/s with 10ms or better response time. Seeing this also makes me wonder whether this should be expected behaviour?
We're up against Sun here, and the main criterion is performance!! I suspect NVRAM is also the bottleneck here.
Would anyone like to comment? (Keith, you may pass this to our internal mailing lists.)
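For what it's worth, simple wire-rate arithmetic suggests the single 100BaseT link may itself be the first ceiling in this particular test. This is a sketch, not an official answer; the ~10% framing-overhead figure is an assumption and real overhead varies with packet size.

```python
# Wire-rate arithmetic for the single 100BaseT link in the test above
# (sketch; the ~10% overhead for Ethernet/IP/UDP/NFS framing is assumed).

link_mbit_s = 100.0
raw_mb_s = link_mbit_s / 8.0        # 12.5 MB/s theoretical ceiling
usable_mb_s = raw_mb_s * 0.9        # ~11.25 MB/s after assumed overhead
print(raw_mb_s, usable_mb_s)        # -> 12.5 11.25
```

The observed 10-12 MB/s sits right at that ceiling, so in this test the one connected 100BaseT interface, rather than NVRAM, may be the first bottleneck; the second interface would need to be loaded to tell.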
thanx
Mike.
Jeff Mohler wrote:
Also, I've noticed a max input rate of about 20Mb/sec into the filer, then the CPU tops out on my F760.
I understand that the NVRAM can only process 16Mb at a time before the system really gums up, but I did expect a little more input performance.
If your F760 is still of the unclustered variety (which I'm sure it is, as we haven't quite yet shipped the F700 series cluster release of Data ONTAP), then you will currently be enjoying the full 32 MB capacity of your NVRAM. In a clustered pair of filers, however, each system's NVRAM is mirrored to the other, meaning that each filer's individual *effective* NVRAM capacity for the file system(s) that they are individually journaling is halved to 16 MB. They each still physically have 32 MB of NVRAM, of course.
Yes, I have 32MB of NVRAM, but at 50% full or 10 seconds, the NVRAM flushes. I only buffer 16MB before it takes a data dump *grin*, if I understand correctly. In a clustered environment, I just use that other half of NVRAM as a buffer for the other box instead of mirroring myself, in case of a write failure (power, etc.).
On the performance front, I'm afraid you would need to better stipulate precisely what you mean by a "20 Mb/sec input rate" for me or anyone else at NetApp to be able to make a useful comment on whether it looks about right or not. How are you determining this number? Is this MBytes/Sec or MBits/Sec? How many clients are you using, and through what networks are they talking to the filer? What software are you running? What is that software really doing on the I/O front? How are the timings being made? Etc...
Two clients, both on separate 100Mb interfaces, creating spool files for my news services using 'dd if=/dev/zero of=<filename> count=<x>'. Hardware was dual Sparc 250s with dual 300MHz procs, 1GB RAM, Solaris 5.6 Generic_105181-07.
Both clients were local to the box on a Catalyst switch, entering on separate interfaces on the F760.
When I was seeing about 1300 NFS ops/sec and 18-20 megabytes/sec of input traffic, the CPU on the F760 would peg.
NVRAM writes took place every single second of the 'sysstat 1' on the console, with no more than 16, or occasionally 17, megabytes written per flush. (This is where I made the assumption that NVRAM buffering/flushing was the limiting factor on overall performance FROM the network.)
Both clients were nfs2/udp.