We currently have an A300 cluster; each node has 2x 10GbE interfaces in a multimode_lacp ifgrp connected to a pair of Cisco 9Ks. We are upgrading each node with 2x 40GbE interfaces. The question is how best to go about the upgrade.
It would of course be nice if we could just add the 40GbE interfaces to the current port channels and ifgrps, but I don't think the 9Ks will allow us to mix speeds in a port channel. Perhaps it's possible on the 9K side to set the new interfaces to 10GbE, add them to the ifgrps and port channels, drop the original 10GbE interfaces out, and then raise the speed on the new interfaces up to 40GbE...
Thanks in advance for any help.
--Carl
Add in the cards.
Configure a new port-channel with the 40Gb ports.
Add any VLANs to the new port-channel.
Add the VLANs to the broadcast domains as needed.
Migrate all LIFs off the original ifgrp.
Remove the 10Gb LIFs/ports from the broadcast domains.
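In ONTAP CLI, a minimal sketch of those steps might look like this (the new ifgrp a0b, ports e1a/e1b, VLAN 100, broadcast domain bd-vlan100, SVM, and LIF names are all hypothetical placeholders; substitute your own):

cluster1::> network port ifgrp create -node node1 -ifgrp a0b -mode multimode_lacp -distr-func port
cluster1::> network port ifgrp add-port -node node1 -ifgrp a0b -port e1a
cluster1::> network port ifgrp add-port -node node1 -ifgrp a0b -port e1b
cluster1::> network port vlan create -node node1 -vlan-name a0b-100
cluster1::> network port broadcast-domain add-ports -broadcast-domain bd-vlan100 -ports node1:a0b-100
cluster1::> network interface modify -vserver svm1 -lif lif1 -home-port a0b-100
cluster1::> network interface revert -vserver svm1 -lif lif1
cluster1::> network port broadcast-domain remove-ports -broadcast-domain bd-vlan100 -ports node1:a0a-100

(Repeat the modify/revert for each LIF homed on the old ifgrp before removing its ports from the broadcast domain.)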
Make sense?
--tmac
Tim McCarthy, Principal Consultant
Proud Member of the #NetAppATeam https://twitter.com/NetAppATeam
I Blog at TMACsRack https://tmacsrack.wordpress.com/
Always excellent advice, tmac :) One thing I also do before migrating the LIFs is create a new test LIF on the 40Gb port-channel to make sure all the networking is set up correctly for the ifgrp and VLAN. Then delete the test LIF and migrate.
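For example (a quick sketch; the SVM, LIF name, addresses, and VLAN port are placeholders, and newer ONTAP releases take -service-policy instead of -role on create):

cluster1::> network interface create -vserver svm1 -lif testlif -role data -home-node node1 -home-port a0b-100 -address 10.0.100.50 -netmask 255.255.255.0
cluster1::> network ping -vserver svm1 -lif testlif -destination 10.0.100.1
cluster1::> network interface modify -vserver svm1 -lif testlif -status-admin down
cluster1::> network interface delete -vserver svm1 -lif testlif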
It does. Thank you!
--Carl
The best way to do this is one node at a time. I’m going with the presumption that the 40GbE cards are in slot 1 of the A300s.
The first thing to do is migrate all of your data LIFs from node1 to node2. You can do this with the migrate-all command:
cluster1::> network interface migrate-all -node node1
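A quick way to verify is to check that no data LIFs are still current on node1 before you proceed:

cluster1::> network interface show -curr-node node1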
If you have iSCSI LIFs, you will have to bring them down and modify them to have node2 as the home node, with the corresponding ports:
cluster1::> network interface modify -vserver svm1 -lif iscsi1 -status-admin down
cluster1::> network interface modify -vserver svm1 -lif iscsi1 -home-node node2 -home-port a0a
Additionally, if you have any intercluster LIFs, record their configuration so you can recreate them afterwards, presuming you don’t have another valid port on the local node to move them to.
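For example, you could capture everything you need with show -instance and recreate the LIF once the new ports are in place (a sketch; the LIF name and address are placeholders, and newer ONTAP releases use -service-policy default-intercluster rather than -role intercluster):

cluster1::> network interface show -role intercluster -instance
cluster1::> network interface modify -vserver cluster1 -lif ic1 -status-admin down
cluster1::> network interface delete -vserver cluster1 -lif ic1
...later, after the ifgrp has its new ports...
cluster1::> network interface create -vserver cluster1 -lif ic1 -role intercluster -home-node node1 -home-port a0a -address 192.0.2.10 -netmask 255.255.255.0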
Once all of the LIFs have been moved over to node2, remove the existing 10GbE ports from the interface group on node1 and add the new 40GbE ports:
cluster1::> network port ifgrp remove-port -node node1 -ifgrp a0a -port e0e
cluster1::> network port ifgrp remove-port -node node1 -ifgrp a0a -port e0g
cluster1::> network port ifgrp add-port -node node1 -ifgrp a0a -port e1a
cluster1::> network port ifgrp add-port -node node1 -ifgrp a0a -port e1b
Validate your interface group status and begin reverting your LIFs back to node1:
cluster1::> network port ifgrp show
cluster1::> network interface revert *
NOTE: You will have to manually modify any iSCSI LIFs and recreate any intercluster LIFs that you recorded.
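For the iSCSI LIFs, that means reversing the earlier modification once node1’s ifgrp has its 40GbE ports:

cluster1::> network interface modify -vserver svm1 -lif iscsi1 -home-node node1 -home-port a0a
cluster1::> network interface modify -vserver svm1 -lif iscsi1 -status-admin up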
Repeat for node2 and you should be good to go.
Regards, Andre M. Clark
I have used that method also. Both work very well.
--tmac
Tim McCarthy, Principal Consultant
Proud Member of the #NetAppATeam https://twitter.com/NetAppATeam
I Blog at TMACsRack https://tmacsrack.wordpress.com/
Thanks Scott and Andre!
--Carl