I started playing with single-mode VIFs on 5.3.4 today (F740, QFE NIC installed in slot 3). I have e3a connected to one Cisco 2924XL and e3b connected to a different 2924XL. The two switches are bridged together, so they essentially look like a single 48-port switch to the hosts connected to them.
    adm-na5> vif create single na5-vif1 e3a e3b
    adm-na5> vif favor e3b
    adm-na5> ifconfig na5-vif1 10.35.8.21 netmask 255.255.255.128 up
I created a single-mode trunk with e3a and e3b, favouring e3b (no particular reason, other than to verify that the filer doesn't simply assume the first interface in the "vif" command is the favoured one), ifconfigged it, and verified that it was pingable. I started up a few scripts to record any NFS outages while I played with the failover. Link failure was simulated by unplugging the filer's cable, both at the switch end and at the filer end.
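For reference, a minimal sketch of such an outage-recording script (the originals weren't posted, so the probe command and probe count here are assumptions):

```shell
#!/bin/sh
# Sketch of an NFS-outage logger (assumed; the original scripts were
# not posted).  Runs a probe command repeatedly and timestamps each
# failure, so gaps in service show up with wall-clock times.
log_outages() {
    # $1 = number of probes; remaining args = probe command that
    # succeeds while the service is reachable.
    n=$1
    shift
    i=0
    while [ "$i" -lt "$n" ]; do
        if ! "$@" >/dev/null 2>&1; then
            echo "$(date '+%H:%M:%S') outage: '$*' failed"
        fi
        i=$((i + 1))
    done
}

# Example: probe the vif address from the post on each pass, e.g.
# log_outages 60 ping -c 1 10.35.8.21
```

Any command that fails when the filer is unreachable works as the probe; an `ls` on an NFS mount would exercise the NFS path more directly than ping.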
"vif status" and "vif stat" both verified that all traffic was passing over the e3b link exclusively. I yanked out the e3b cable from the switch:
    Virtual interface (trunk) na5-vif1
            e3a                   e3b
     Pkts In  Pkts Out    Pkts In  Pkts Out
      294767     13473     856994     38899
           0         0       3095       141
           0         0       3236       149
           0         0       2403       104
    Thu Nov 11 23:34:20 GMT [de2]: de_main: e3b : Link down. Check cable.
        2527       122          0         0
        3737       162          0         0
        3209       141          0         0
        3551       156          0         0
Instantaneous failover! I was worried that the Catalysts would spend 20 or 30 seconds doing loop detection, but nothing even blinks at the loss of one of the links. Hurrah. :) But when I plug e3b back in, give it time to settle (at this point, the switch does do a recalc), and then unplug e3a to force a "give back" to e3b, I observe about 15 seconds during which no traffic passes over either link:
    Virtual interface (trunk) na5-vif1
            e3a                   e3b
     Pkts In  Pkts Out    Pkts In  Pkts Out
      569739     25859     873099     39627
        5558       239          0         0
        3137       147          0         0
        3550       162          0         0
    Thu Nov 11 23:35:40 GMT [de1]: de_main: e3a : Link down. Check cable.
           0         0          0         0
           0         0          2         0
           0         0          0         0
           0         0          0         0
           0         0          1         0
           0         0          0         0
           0         0          0         0
           0         0          0         0
           0         0          0         0
           0         0          0         0
           0         0          0         0
           0         0          2         2
           0         0          1         1
           0         0          1         1
           0         0          1         1
           0         0       2347       103
           0         0          1         1
           0         0          1         1
           0         0       1450        64
           0         0       4857       206
           0         0       3807       163
           0         0       3983       174
NFS clients definitely notice that. Is this normal for the NetApp, or should I be looking at my switch configuration? Also, is there a way to force the NetApp to switch back automatically to the favoured link when it comes back up (perhaps after a short timeout, in case the interface is bouncing up and down)?
Hi Brian,
Have you enabled "port fast" on the switch ports the filer is attached to? Without that setting, the switch rebuilds its spanning tree whenever you plug in a cable.
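On the 2900XL series this is a per-port interface setting; a sketch (the interface name is a placeholder, and the exact syntax may vary by IOS release):

```
! Assumed 2924XL port config -- FastEthernet0/1 stands in for whichever
! port the filer's e3a or e3b cable lands on.
interface FastEthernet0/1
 spanning-tree portfast
```

Port fast should only go on edge ports with a single attached host; enabling it on an inter-switch link can create a forwarding loop.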
Oliver
-----Original Message-----
From: owner-dl-toasters@netapp.com [mailto:owner-dl-toasters@netapp.com] On Behalf Of Brian Tao
Sent: Friday, 12 November 1999 01:52
To: toasters@mathworks.com
Cc: grant.berry@netapp.com
Subject: Single-mode vif failover reaction time
--
Brian Tao (BT300, taob@risc.org)
"Though this be madness, yet there is method in't"
On Fri, 12 Nov 1999, Oliver Krause wrote:
Have you enabled "port fast" on the switch ports the filer is attached to? Without that setting, the switch rebuilds its spanning tree whenever you plug in a cable.
Yes, but the recalculation happens when link is detected (i.e., when the cable is physically plugged back in), not when packets start flowing over it. I waited well over a minute after plugging cables back in before inducing a failover or a giveback. Failover always happens instantaneously, but a giveback always takes 15+ seconds before I start seeing traffic again. I'll have to try out more combinations in the lab next week.