I am having a great deal of difficulty getting my filer's 10g interfaces connected to a pair of 3750E switches that I am trying to use as my storage network's backbone. I can only inconsistently ping other devices on the network (there is really nothing there yet, we are just getting started, but I do have a host and an RLM card for troubleshooting purposes), and the two filers cannot ping each other. From a remote host (172.1.2.5) I can ping everything but array01. If I bring down either of the two stacked switches (which takes down one port of each VIF member pair), everything works.
array01> ping 172.1.0.1
172.1.0.1 is alive
array01> ping 172.1.0.2
array01> ping 172.1.0.4
ping: wrote 172.1.0.4 64 chars, error=Host is down
ping: wrote 172.1.0.4 64 chars, error=Host is down
array01> Thu May 15 13:28:33 EDT [gvr-array01: nis_worker_0:info]: Local NIS group update successful.
array01> ping 172.1.2.5
172.1.2.5 is alive
array02> ping 172.1.0.1
array02> ping 172.1.0.4
172.1.0.4 is alive
array02> ping 172.1.0.4
172.1.0.4 is alive
array02> ping 172.1.0.250
172.1.0.250 is alive
array02> ping 172.1.2.5
no answer from 172.1.2.5
array01> ifconfig -a
e0a: flags=948043<UP,BROADCAST,RUNNING,MULTICAST,TCPCKSUM> mtu 1500
ether 02:a0:98:08:22:b7 (auto-1000t-fd-up) flowcontrol full
trunked lan0
e0b: flags=108042<BROADCAST,RUNNING,MULTICAST,TCPCKSUM> mtu 1500
ether 00:a0:98:08:22:b6 (auto-unknown-cfg_down) flowcontrol full
e0c: flags=948043<UP,BROADCAST,RUNNING,MULTICAST,TCPCKSUM> mtu 1500
ether 02:a0:98:08:22:b7 (auto-1000t-fd-up) flowcontrol full
trunked lan0
e0d: flags=108042<BROADCAST,RUNNING,MULTICAST,TCPCKSUM> mtu 1500
ether 00:a0:98:08:22:b4 (auto-unknown-cfg_down) flowcontrol full
e2a: flags=948043<UP,BROADCAST,RUNNING,MULTICAST,TCPCKSUM> mtu 1500
ether 02:a0:98:08:22:b6 (auto-10g_sr-fd-up) flowcontrol full
trunked tgif
e2b: flags=948043<UP,BROADCAST,RUNNING,MULTICAST,TCPCKSUM> mtu 1500
ether 02:a0:98:08:22:b6 (auto-10g_sr-fd-up) flowcontrol full
trunked tgif
lo: flags=1948049<UP,LOOPBACK,RUNNING,MULTICAST,TCPCKSUM> mtu 8160
inet 127.0.0.1 netmask 0xff000000 broadcast 127.0.0.1
ether 00:00:00:00:00:00 (VIA Provider)
lan0: flags=948043<UP,BROADCAST,RUNNING,MULTICAST,TCPCKSUM> mtu 1500
inet 10.28.17.213 netmask 0xffffff00 broadcast 10.28.17.255
partner 10.28.17.214 (not in use)
ether 02:a0:98:08:22:b7 (Enabled virtual interface)
tgif: flags=4948043<UP,BROADCAST,RUNNING,MULTICAST,TCPCKSUM,NOWINS> mtu 1500
inet 172.1.0.1 netmask 0xffff0000 broadcast 172.1.255.255
partner 172.1.0.2 (not in use)
ether 02:a0:98:08:22:b6 (Enabled virtual interface)
nfo enabled
array02> ifconfig -a
e0a: flags=948043<UP,BROADCAST,RUNNING,MULTICAST,TCPCKSUM> mtu 1500
ether 02:a0:98:0c:1a:37 (auto-1000t-fd-up) flowcontrol full
trunked lan0
e0b: flags=108042<BROADCAST,RUNNING,MULTICAST,TCPCKSUM> mtu 1500
ether 00:a0:98:0c:1a:36 (auto-unknown-cfg_down) flowcontrol full
e0c: flags=948043<UP,BROADCAST,RUNNING,MULTICAST,TCPCKSUM> mtu 1500
ether 02:a0:98:0c:1a:37 (auto-1000t-fd-up) flowcontrol full
trunked lan0
e0d: flags=108042<BROADCAST,RUNNING,MULTICAST,TCPCKSUM> mtu 1500
ether 00:a0:98:0c:1a:34 (auto-unknown-cfg_down) flowcontrol full
e2a: flags=948043<UP,BROADCAST,RUNNING,MULTICAST,TCPCKSUM> mtu 1500
ether 02:a0:98:0c:1a:36 (auto-10g_sr-fd-up) flowcontrol full
trunked tgif
e2b: flags=948043<UP,BROADCAST,RUNNING,MULTICAST,TCPCKSUM> mtu 1500
ether 02:a0:98:0c:1a:36 (auto-10g_sr-fd-up) flowcontrol full
trunked tgif
lo: flags=1948049<UP,LOOPBACK,RUNNING,MULTICAST,TCPCKSUM> mtu 8160
inet 127.0.0.1 netmask 0xff000000 broadcast 127.0.0.1
ether 00:00:00:00:00:00 (VIA Provider)
lan0: flags=948043<UP,BROADCAST,RUNNING,MULTICAST,TCPCKSUM> mtu 1500
inet 10.28.17.214 netmask 0xffffff00 broadcast 10.28.17.255
partner 10.28.17.213 (not in use)
ether 02:a0:98:0c:1a:37 (Enabled virtual interface)
tgif: flags=4948043<UP,BROADCAST,RUNNING,MULTICAST,TCPCKSUM,NOWINS> mtu 1500
inet 172.1.0.2 netmask 0xffff0000 broadcast 172.1.255.255
partner 172.1.0.1 (not in use)
ether 02:a0:98:0c:1a:36 (Enabled virtual interface)
nfo enabled
This message (including any attachments) contains confidential and/or proprietary information intended only for the addressee. Any unauthorized disclosure, copying, distribution or reliance on the contents of this information is strictly prohibited and may constitute a violation of law. If you are not the intended recipient, please notify the sender immediately by responding to this e-mail, and delete the message from your system. If you have any questions about this e-mail please notify the sender immediately.
Only time I have seen stuff like that is when it's in multimode on the filer, and etherchannel is not properly enabled on the switch.
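For reference, a plain multimode vif expects a matching static EtherChannel on the switch side (a single-mode vif expects no channel at all). A minimal sketch of what the two sides might look like here; the port numbers, VLAN, and channel-group number are assumptions, not taken from the thread, and on a 3750 stack the static "on" mode is the safe match for a static multimode vif:

```shell
# --- 3750E stack side (hypothetical ports: one 10g member per stack unit) ---
#   interface range TenGigabitEthernet1/0/1 , TenGigabitEthernet2/0/1
#    switchport mode access
#    switchport access vlan 100
#    channel-group 10 mode on     ! static EtherChannel to match a multimode vif
#
# --- filer side (Data ONTAP 7-mode) ---
#   vif create multi tgif -b ip e2a e2b
```

If the switch ports are left as independent access ports while the filer hashes traffic across both vif members, you get exactly this kind of MAC flapping and inconsistent reachability, which also explains why everything works once one stack member (and thus one member port) is down.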
________________________________
From: owner-toasters@mathworks.com [mailto:owner-toasters@mathworks.com] On Behalf Of Page, Jeremy
Sent: Thursday, May 15, 2008 1:32 PM
To: toasters@mathworks.com
Subject: VIF weirdness
I am migrating from one aggregate to another aggregate in the same filer (6030). The LUNs are attached to print clusters, 2-node and 3-node clusters (Windows 2003 SP2 x86, etc.).
I have snapmirrored the volumes. If I stop the cluster service on the Windows boxes, do a snapmirror update, then disconnect the quorum and other resource disks, do another snapmirror update, release the snapmirror, remove and replace the disks with SnapDrive using the same drive letters, and fire up the cluster service... will that work? Has anyone done that before? Any gotchas?
Any help is greatly appreciated!
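For what it's worth, the sequence described above can be sketched in 7-mode commands roughly like this (the filer and volume names are hypothetical, not from the thread):

```shell
# All names hypothetical; cluster service stopped on the Windows nodes.
filer> snapmirror update dst_vol        # catch up while host I/O is stopped
# ...disconnect the quorum and resource disks in SnapDrive, then:
filer> snapmirror update dst_vol        # final incremental with LUNs quiesced
filer> snapmirror quiesce dst_vol
filer> snapmirror break dst_vol         # make the destination writable
filer> snapmirror release src_vol filer:dst_vol   # clean up on the source
# ...reconnect the LUNs from dst_vol via SnapDrive with the same drive
#    letters, then restart the cluster service.
```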
You will not have issues, and you don't need to update your mirror. If you simply power off your clusters, no I/O activity will be performed on the LUNs. Then you can simply copy (vol copy) the volume(s) containing your LUNs, rename the new volumes to the old names (in the case of iSCSI LUNs, and if you don't want to recreate the connections), and restart your clusters.
In case your nodes lose the LUNs in SnapDrive (though this should not happen), you can safely reconnect them: start with the quorum LUN on the first node, restart the cluster service, then do the second node, and so on with all the other cluster shared LUNs.
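A minimal sketch of that vol copy approach in 7-mode (volume, aggregate, and size names are hypothetical; vol copy requires the destination volume to exist and be restricted):

```shell
# Names and size hypothetical; clusters powered off so the LUNs are quiesced.
filer> vol create new_vol dst_aggr 500g   # destination on the target aggregate
filer> vol restrict new_vol               # vol copy needs a restricted destination
filer> vol copy start old_vol new_vol
filer> vol online new_vol
filer> vol rename old_vol old_vol_retired
filer> vol rename new_vol old_vol         # keep the original name and paths
```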
Regards
-----Original Message-----
From: owner-toasters@mathworks.com [mailto:owner-toasters@mathworks.com] On Behalf Of Klise, Steve
Sent: Friday, May 16, 2008 3:50 PM
To: toasters@mathworks.com
Subject: MSCS and snapmirror question