Happy Friday
We're looking to make a change to the production cluster (300 VMs, including Oracle RAC) ahead of the weekend full backups, which are bottlenecking on an edge HP switch - 1,000 discards/sec at peak on this 10Gb port. So we want to run
ifgrp favor <standby interface>
to balance the traffic out and avoid the discards.
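For context, the vif is a plain single-mode ifgrp of two 10Gb ports (one active, one standby). Roughly what that setup and the favor command look like - a from-memory sketch using the e1a/e1b names below, not a paste of the actual /etc/rc, and the address is a placeholder:

    ifgrp create single vif0 e1a e1b                      # single-mode: one link carries traffic, the other stands by
    ifconfig vif0 192.0.2.40 netmask 255.255.255.0 up     # placeholder IP for illustration
    ifgrp favor e1b                                       # mark e1b as the preferred link ('ifgrp nofavor e1b' clears it)
    ifgrp status vif0                                     # verify which link is active and which is favored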
Do NFS clients (all our VMware datastores are presented via NFS vFilers) notice any issues when the standby interface (e1b) is favored over the active one (e1a)?
thanks
I tested this on the standby (SnapMirror destination) cluster - interesting findings. (This is 8.1.2 7-Mode on a 3270, BTW.)
Pre-status (e1a is up and e1b is standby; I truncated the non-relevant stats for readability):
na04> ifgrp status
default: transmit 'IP Load balancing', Ifgrp Type 'multi_mode', fail 'log'
na04-vif0: 1 link, transmit 'none', Ifgrp Type 'single_mode' fail 'default'
         Ifgrp Status   Up      Addr_set
        up:
        e1a: state up, since 29May2013 11:19:37 (51+09:39:49)
                mediatype: auto-10g_sr-fd-up
                flags: enabled
        down:
        e1b: state down, since 29May2013 11:19:38 (51+09:39:48)
                mediatype: auto-10g_sr-fd-up
                flags: enabled
na04> ifgrp favor Fri Jul 19 21:00:00 PDT [na04:kern.uptime.filer:info]:  9:00pm up 51 days, 9:40 43403 NFS ops, 0 CIFS ops, 0 HTTP ops, 0 FCP ops, 0 iSCSI ops
e1b
na04> Fri Jul 19 21:00:11 PDT [na04:kern.cli.cmd:debug]: Command line input: the command is 'ifgrp'. The full command line is 'ifgrp favor e1b'.
Check status (e1a AND e1b are BOTH UP - e1b "favored"):
na04> ifgrp status
default: transmit 'IP Load balancing', Ifgrp Type 'multi_mode', fail 'log'
na04-vif0: 2 links, transmit 'none', Ifgrp Type 'single_mode' fail 'default'
         Ifgrp Status   Up      Addr_set
        up:
        e1b: state up, since 19Jul2013 21:00:11 (00:00:06)
                mediatype: auto-10g_sr-fd-up
                flags: enabled favored
        e1a: state up, since 29May2013 11:19:37 (51+09:40:40)
                mediatype: auto-10g_sr-fd-up
                flags: enabled
na04> Fri Jul 19 21:00:42 PDT [na04:pvif.switchLink:warning]: na04-vif0: switching to e1b
snapmirror status
42 seconds after the favor command the switch is made - e1a is now "down":
na04> ifgrp status
default: transmit 'IP Load balancing', Ifgrp Type 'multi_mode', fail 'log'
na04-vif0: 1 link, transmit 'none', Ifgrp Type 'single_mode' fail 'default'
         Ifgrp Status   Up      Addr_set
        up:
        e1b: state up, since 19Jul2013 21:00:11 (00:14:30)
                mediatype: auto-10g_sr-fd-up
                flags: enabled favored
        down:
        e1a: state down, since 19Jul2013 21:00:42 (00:13:59)
                mediatype: auto-10g_sr-fd-up
                flags: enabled
I repeated this test on the partner 3270 node - it took 31 seconds to report "switching to e1b"
What is happening during the transition?
From the command line and SnapMirror perspective this was non-disruptive (though there are no NFS clients running from the standby cluster) - a constant ping did not record any dropped packets or extra latency.
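For anyone wanting to repeat the check, a constant ping along these lines (run from a client and left running across the favor/switch window) is enough to spot drops or latency spikes - the address is a placeholder for whatever IP the vif or vFiler answers on:

    ping 192.0.2.40 | while read line; do echo "$(date '+%H:%M:%S') $line"; done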
thanks to "Sto Rage© netbacker@gmail.com" for reminding me I could at least test this on the standby cluster.
If you have spanning tree enabled on the ports the filer is connected to, it may take at least 30 seconds for a port to transition to the forwarding state. Try configuring those ports as edge ports (the exact syntax and name vary by switch manufacturer).
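For example - illustrative only, since as noted the syntax varies by vendor and software version, and the port numbers/interface names here are placeholders:

    HP ProCurve (config context):
        spanning-tree 10 admin-edge-port

    Cisco IOS (interface config):
        interface TenGigabitEthernet1/0/1
         spanning-tree portfast
        ! use 'spanning-tree portfast trunk' if the port is a trunk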