Hi Heino,
I have used this migration scenario in setups where there was a distance of a couple of kilometers between two datacenters and did not run into any issues, even though this was an unsupported setup. I was doing vol moves while data serving was also progressively using more and more cluster interconnect bandwidth.
You will still be in a supported setup so I don’t think you will run into a lot of issues. Some things that you can take into consideration to mitigate risks:
- You can limit the number of concurrent vol moves by lowering the vol move governor setting - see the TR mentioned here: https://community.netapp.com/t5/ONTAP-Discussions/How-many-vol-move-operatio...
- You can move the most I/O-bound datastores first and, once they are cut over, move their data-serving LIF(s) over as well to limit traffic on the cluster backend.
- You can monitor latency in your VMware environment and abort vol moves if you are running into issues (rough CLI sketch below).
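For reference, this is roughly what that looks like on the CLI - the vserver, volume and LIF names (vs1, ds01, nfs_ds01) and the A300 node/port are just placeholders for your environment:
To check progress of a running move:
  vol move show -vserver vs1 -volume ds01 -fields state,percent-complete
To abort it if datastore latency gets out of hand:
  vol move abort -vserver vs1 -volume ds01
And once a datastore is cut over, re-home its NFS LIF onto the A300:
  net int modify -vserver vs1 -lif nfs_ds01 -home-node a300-a -home-port e0g
  net int revert -vserver vs1 -lif nfs_ds01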
Lastly, maybe not all that important, but you don’t want your cluster switches to go down while you are in your stretched 4-node cluster setup. In my unsupported scenario we had 2 switches on each side, 4 in total, with ISLs between them. Like I said, this was not a supported setup, but we were confident enough to get the migration done, and it went without a whole lot of hassle.
Regards, Filip
On Tue, 13 Oct 2020 at 16:48, Heino Walther hw@beardmann.dk wrote:
The length is not the issue.
The issue is that we have 4 large datastores (20TB+) that need to be migrated… they all have SnapMirror relations attached, so I would rather not use vMotion to a new datastore, as I would have to reseed all the SnapMirrors, and because the destination is rotating aggregates we get no cross-volume dedupe benefits. So… vMotion will only be used if everything else doesn’t work 😊
The problem of course is that we will end up in a situation where data from the NFS datastores will be pulled via one node, across the cluster interconnect, to the node that holds the active volume, and back again…
I seem to remember that you can do a “vol move” and hold the cut-over process…. I was thinking of doing the initial vol move (basically a SnapMirror) of the larger volumes, and then doing the cut-over of all the volumes and moving the LIF as one big “off-hours” process 😊. Just to be sure…
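If I remember the syntax right, it would be something along these lines (volume, aggregate and LIF names made up):
  vol move start -vserver vs1 -volume big_ds01 -destination-aggregate a300_aggr1 -cutover-action wait
…and then in the off-hours window:
  vol move trigger-cutover -vserver vs1 -volume big_ds01
  net int modify -vserver vs1 -lif nfs_lif1 -home-node a300-a -home-port e0g
  net int revert -vserver vs1 -lif nfs_lif1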
I think I remember issues while running NFS over the cluster interconnect… basically high latency.
CIFS should not be a problem, and iSCSI can also be moved just by adding LIFs on the target nodes; MPIO will take care of the rest… (this is mainly single servers with LUNs for SQL Server etc.)
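Roughly like this for the iSCSI side (address, port and names made up), plus adding the new LIFs to the portset if one is in use:
  net int create -vserver vs1 -lif iscsi_a300_a -role data -data-protocol iscsi -home-node a300-a -home-port e0e -address 10.0.0.50 -netmask 255.255.255.0
  lun portset add -vserver vs1 -portset ps_sql -port-name iscsi_a300_a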
But has anyone tried to run NFS datastores across the cluster interconnect like this? How severe is the latency? Would it make sense to add four cluster links per node rather than two?
/Heino
*From: *tmac tmacmd@gmail.com *Date: *Tuesday, 13 October 2020 at 14:01 *To: *Heino Walther hw@beardmann.dk *Cc: *"toasters@teaparty.net" toasters@teaparty.net *Subject: *Re: Migration to new cluster nodes...
Be sure on your distances.
I think you are allowed up to 300 m on a fiber from filer to switch for 10G. If you are using patch panels, that length drops.
If your NFS is only VMware datastores, just create a new datastore and Storage vMotion. Don't overthink it.
If your NFS/CIFS is normal user data, then you should be able to do a "vol move" from the old AFF to the new AFF, and then, provided the networking is the same, you can also modify the home-node/port of the LIF and move it to the A300s with nearly no disruption.
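Roughly (substitute your own vserver/volume/aggregate/LIF names):
vol move start -vserver xxx -volume vol1 -destination-aggregate a300_aggr1
net int mod -vserver xxx -lif nas_lif1 -home-node a300-a -home-port new-port
net int revert -vserver xxx -lif nas_lif1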
For iSCSI, if you have full multipathing going, and the networking is the same on the old/new aff, then you could (ONE AT A TIME):
set adv
net int mod -vserv xxx -lif iscsi_lifa -status-admin down
net int mod -vserv xxx -lif iscsi_lifa -home-node a300-a -home-port new-port
net int mod -vserv xxx -lif iscsi_lifa -status-admin up
set admin
WAIT 8 minutes to let multipathing stabilize. Verify at the hosts that all paths are up. Then do the next one.
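To double-check from the cluster side (names as above):
net int show -vserver xxx -lif iscsi_lifa -fields status-oper,curr-node,curr-port
vserver iscsi session show -vserver xxx
On the ESXi side, something like "esxcli storage core path list" (or whatever MPIO tooling your hosts use) should show all paths active again before you touch the next LIF.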
--tmac
*Tim McCarthy, **Principal Consultant*
*Proud Member of the #NetAppATeam https://twitter.com/NetAppATeam*
*I Blog at **TMACsRack https://tmacsrack.wordpress.com/*
On Tue, Oct 13, 2020 at 7:40 AM Heino Walther hw@beardmann.dk wrote:
Hi there
We have to migrate our existing AFF8080, which has NFSv3, iSCSI and CIFS data on it, to a new location.
The plan is to add a temp A300 in the new location and add it to the AFF8080’s cluster with two cluster switches (the sites are sub-200 m apart).
iSCSI and CIFS should be fairly simple to migrate, but NFSv3 is IP-centric, so as we start the migration volume by volume, the data path for some of our VMware hosts will be extended somewhat…
It will go via the AFF8080’s configured IP address over the cluster network to the A300 controller that holds the volume, and back again.
Has anyone tried this with success?
Of course another way would be to migrate the VMs from the VMware side, which I would like to avoid if possible, but I am a bit worried about the added latency over the cluster network…
Should I be worried?
/Heino
Toasters mailing list Toasters@teaparty.net https://www.teaparty.net/mailman/listinfo/toasters