May,
In a system & network tuning course I took at USENIX over a year ago, there was a comment that it would take about 3 CPUs (300MHz SPARC) to saturate a GBIC. That was an unverified rule of thumb at the time (I went back to my notes to dig this up). These days you would have faster buses and faster CPUs, so adjust the number downward accordingly.
In the case of mkfile, that process is going to be confined to a single CPU. Just for fun, you might want to fire up 4 simultaneous 256m mkfiles (both to tmp and to your filer) to see whether the performance changes -- something like the sketch below. That way you would be able to isolate how much of the time comes from the limitations of your NFS client.
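For instance, roughly like this from ksh on the Sun box (the /mnt/filer mount point and test file names are just placeholders for whatever you actually use):

    #!/bin/ksh
    # four 256 MB files to local tmpfs, written in parallel
    time (
        for i in 1 2 3 4; do
            /usr/sbin/mkfile 256m /tmp/test.$i &
        done
        wait
    )
    rm -f /tmp/test.[1-4]

    # the same four files to the filer over NFS
    time (
        for i in 1 2 3 4; do
            /usr/sbin/mkfile 256m /mnt/filer/test.$i &
        done
        wait
    )
    rm -f /mnt/filer/test.[1-4]

Comparing the elapsed times of the two loops against your single-stream numbers should tell you whether one CPU on the client is the limit.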
The back-to-back CPs will show up in your sysstat output (use 1-second updates and let it run for longer than 10 seconds); see the example below. That will identify the time the filer is spending on disk I/O. If you are getting back-to-back CPs, then your filer's NVRAM cache is filling faster than it can flush to disk, so the filer makes the data stream wait. If you were to run the same single-processor mkfile test with less data, the time to tmp should scale proportionally, and if your time to the filer comes in better than the earlier filer ratio (time/disk), that might also point to a lack of NVRAM cache.
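On the filer console, something along these lines while the mkfile is running:

    filer> sysstat -x 1

If I remember the codes right, watch the "CP ty" column: a string of "B" entries means back-to-back CPs, i.e. a new consistency point starting as soon as the previous one finishes. Treat the exact column layout as version-dependent on your 6.1.1R2 box.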
Jim

-----Original Message-----
From: Hsueh-Mei Tsai(May Tsai) [mailto:may-tsai@ti.com]
Sent: Tuesday, January 22, 2002 12:38 PM
To: toasters@mathworks.com
Subject: Re: Gigabit ethernet performance
Thanks for all the replies.
Some background info: the Sun client is a 420R running Solaris 7, and the NetApp is an F820 running 6.1.1R2. Both are using PCI cards. I've checked the NOW site and tried changing various Sun parameters, with no significant difference.
Local disk creation takes 35 secs (single drive) vs. ~30 secs to the NetApp's 7 data/1 parity volume. A direct tmpfs write takes 8 secs. Aggregated, the result is 27.5 seconds, still not much better though.
The NetApp support engineer I worked with mentioned it's a back-to-back CP bottleneck; that's not on the network side, correct? Does that mean the NetApp F820 can't handle a sustained heavy data stream? As for jumbo frames, I haven't tried them yet; the Gbit switch I use probably won't support them. Do they add a lot to the performance?
May