Rick,
I've seen and looked for it too. The response we got was that jumbo frames have not become a standard yet, which is one reason they don't support it. I agree with you, especially since other vendors do support it nowadays...
Anyway, I once found a document from '98 stating the (then future) support for it, but sadly, mohan@netapp.com is no longer there.... :(
Netapp-peers, could you help us by supporting it? It seems so obvious and needed...
Eyal.
***************************************************************************************
"Richard L. Rhodes" wrote:
On 24 Apr 2000, at 14:36, Derek kelly wrote:
The question this raises with me is: I wonder if going to Gigabit between the devices negates the speed issue.
I can't speak about Sybase, but I'm running Oracle across Gigabit. In terms of the pipe's size, I've never come close to 100MB/s. My F740 filers seem to top out at around 20MB/s. I "think" it's helping, in that there is less latency on the connection. But the problem I see mostly is high biod utilization under high i/o load (I'm running across nfs), especially for things like creating tablespaces. On a 4 processor system (rs/6k-f50) I've seen biod run as high as 30% of the system.

My disappointment is that NetApp doesn't support gigabit jumbo packets, even though they announced they would support them quite a while ago, when they first announced support for gigabit. I notice that those announcement letters are no longer available on their web site. A search for "jumbo" turns up nothing, where it used to turn up an announcement letter for Alteon nics and jumbo packets.

Every db block (4k) that's read from the db requires 2.x ethernet packets. Under low utilization I never see the nfs load; under high utilization it becomes a monster. If NetApp is serious about wanting to run databases on the NetApp, they need to support jumbo packets.
Rick
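For reference, Rick's "2.x ethernet packets per 4k db block" arithmetic can be sketched quickly. This is a hedged back-of-the-envelope illustration, not from the original mail: it assumes roughly 40 bytes of IP+TCP header overhead per frame (real NFS/RPC framing overhead varies), and the standard 1500-byte vs. jumbo 9000-byte MTU sizes.

```python
import math

# Assumed per-frame overhead: IP header (20 bytes) + TCP header (20 bytes).
# Actual NFS-over-UDP/RPC overhead differs, but the order of magnitude holds.
HEADER_OVERHEAD = 40

def frames_per_block(block_size: int, mtu: int) -> int:
    """Estimate how many Ethernet frames carry one db block of payload."""
    payload_per_frame = mtu - HEADER_OVERHEAD
    return math.ceil(block_size / payload_per_frame)

# A 4 KB Oracle block over standard Ethernet vs. jumbo frames:
print(frames_per_block(4096, 1500))  # standard MTU -> 3 frames (4096/1460 ~= 2.8)
print(frames_per_block(4096, 9000))  # jumbo MTU    -> 1 frame
```

With jumbo frames the same block fits in a single frame, which is the packet-count (and per-packet interrupt/biod) saving Rick is arguing for.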