Hello,
We need to increase the network bandwidth to our 740. It has the built-in Ethernet port plus a quad card. The switch it's connected to is a Catalyst 5500. My understanding is that the vif on the NetApp is compatible with EtherChannel on the Catalysts. We did the vif create on the NetApp (DOT 5.3.4r3) and combined all four quad-card ports into one interface; it has an IP address, and ifconfig shows them all as part of a virtual interface. But what do I need to do on the Cisco side?

When testing this on a 520 in the past, we tried enabling EtherChannel AND trunking on the ports, to no avail -- all data appears to be going through the fourth interface. vif stats <vifname> shows about 3 or 4 packets per second on e3a through e3c, and thousands through e3d. I figured that maybe it just counts the entire vif's packets on the e3d interface; is that correct? Even so, with heavy speed testing from multiple clients, all the data continued to go through the last port, and throughput never broke 12 megabytes per second.

I really suspect this is because we don't have the Catalyst set up properly. Can someone out there who's done this before tell me what we need to do on the cat? Do I need to put the vif commands into the rc file? Thanks in advance,
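[For reference, the filer-side setup described above would have looked roughly like this; the vif name and IP address are illustrative, so check the vif man page for your DOT release for exact syntax:]

```
vif create mytrunk e3a e3b e3c e3d
ifconfig mytrunk 10.1.1.5 netmask 255.255.255.0 up
```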
Justin Acklin
Oh, one other thing: almost every time we plug or unplug an Ethernet port on the NetApp, or even sometimes when doing an ifconfig, the thing core dumps. Very annoying. Apparently there is an open bug on that which is not yet fixed, according to the NetApp engineer I opened a case with the last time this happened.
Justin Acklin wrote:
But what do I need to do on the Cisco side? ... I really suspect this is because we don't have the Catalyst set up properly. Can someone out there who's done this before tell us what we need to do on the cat? Do I need to put the vif commands into the rc file?
The Cisco (unless they've changed something recently) has a limitation on how it sends packets across the links. Packets destined for the trunk will be hashed onto a link by the last two bits of the hardware source address. That means that if most of your traffic is coming from a single router port, it's all going to get hashed down to only one link.
Meanwhile, all NFS/CIFS/HTTP requests received by the NetApp are going to go back out the same link they entered on. So unless your incoming data can come from multiple source addresses, you're not going to be able to even things out.
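[As a toy illustration of that hashing — a simplified model of the Catalyst behavior described above, not actual switch code; the function name and MAC addresses are made up:]

```python
def etherchannel_link(src_mac: str, n_links: int = 4) -> int:
    """Pick an EtherChannel link from the last two bits of the source MAC.

    Simplified model of the behavior described above: with four links,
    only the low two bits of the source hardware address matter.
    """
    last_octet = int(src_mac.split(":")[-1], 16)
    return last_octet & (n_links - 1)

# Traffic from a single router MAC always lands on the same link:
print(etherchannel_link("00:00:0c:07:ac:01"))  # link 1
print(etherchannel_link("00:00:0c:07:ac:05"))  # also link 1 (same low two bits)
```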
The Sun Trunking software had an option to distribute packets across links in a round-robin fashion. I've never seen a similar ability for Cisco EtherChannel to round-robin packets that way.
Arthur Darren Dunham wrote:
...I really suspect this is because we don't have the Catalyst set up properly. Can someone out there who's done this before tell me what we need to do on the cat? Do I need to put the vif commands into the rc file?
The Cisco (unless they've changed something recently) has a limitation on how it sends packets across the links. Packets destined for the trunk will be hashed onto a link by the last two bits of the hardware source address. That means that if most of your traffic is coming from a single router port, it's all going to get hashed down to only one link.
The answer I received from NetApp, in a similar situation, was that the switch XORs the last two bits of the source and destination MAC addresses, then uses the result to choose the outgoing port within an EtherChannel trunk.

With only four clients making heavy use of the VIFs, the MAC addresses clustered onto the same ports. With hundreds of clients, you should see a more even distribution.
I built a truth table that showed then-current and desired states, then altered client MAC addresses.
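[A truth table like that can be sketched as follows — a hypothetical model of the XOR behavior NetApp described, with made-up MAC addresses; the real switch logic may differ:]

```python
def xor_link(src_mac: str, dst_mac: str, n_links: int = 4) -> int:
    """Choose an EtherChannel port by XORing the last two bits of the
    source and destination MACs, per the explanation above."""
    s = int(src_mac.split(":")[-1], 16)
    d = int(dst_mac.split(":")[-1], 16)
    return (s ^ d) & (n_links - 1)

filer = "00:a0:98:00:00:02"                       # made-up filer MAC
clients = ["00:60:08:11:22:%02x" % n for n in (0x01, 0x02, 0x05, 0x09)]

# Truth table of client MAC vs. chosen port; with these four clients,
# three of them happen to hash to the same port, so traffic clusters:
for c in clients:
    print(c, "-> port", xor_link(c, filer))
```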
Does that answer the question?
I solved the problem that spawned my original question -- what to do on the Cisco side, whether it's trunking or channeling. A single command enabled channeling. It didn't work at first, but I turned it on, let it sit for a while, and it started working. The reason we had had trouble with another NetApp in the past was not that we had used the incorrect commands, but that the Catalyst had an older blade in it that did not support EtherChannel. When EtherChannel is not set up, all traffic appears to go through the last port of the vif. Now I still need to know whether vifs persist through reboots, or whether I need to place the commands into my rc file.
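[For the record, on CatOS-era Catalysts the channeling command was of roughly this form; the module/port numbers here are placeholders, so verify against your Catalyst's documentation before using:]

```
set port channel 3/1-4 on
```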
Justin Acklin
"Michael S. Keller" wrote:
-- Michael S. Keller, Technical Solutions Consultant, Sprint Enterprise Network Services On loan to Williams Communications Group Voice 918-574-6094, Amateur Radio N5RDV