I run traffic over the cluster interconnect quite frequently. But the only time I have ever seen a problem with latency was when the LIF was hosted on a FAS node and the volume was on an AFF. In those cases I would see double-digit latency.

 

I use NFSv3 for most of my workloads.

 

The process you describe, using a delayed vol move, will work just fine.

 

Wayne

 

From: Toasters <toasters-bounces@teaparty.net> On Behalf Of Heino Walther
Sent: Tuesday, October 13, 2020 8:46 AM
To: tmac <tmacmd@gmail.com>
Cc: toasters@teaparty.net
Subject: Re: Migration to new cluster nodes...

 


The length is not the issue.

 

The issue is that we have 4 large datastores (20TB+) that need to be migrated… they all have SnapMirror relationships attached, so I would rather not use Storage vMotion to a new datastore, as I would have to reseed all the SnapMirrors, and because the destination is rotating aggregates, we get no cross-volume dedupe benefit. So… vMotion will only be used if everything else doesn’t work 😊

 

The problem of course is that we will end up in a situation where data from the NFS datastores will be pulled in via one node, across the cluster interconnect to the node that holds the active volume, and back again…
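For reference, you can see which datastores would be served indirectly by comparing the node that owns each volume with the node the NFS LIF currently sits on – roughly like this (the SVM, volume and LIF names are just placeholders):

volume show -vserver svm1 -volume datastore01 -fields node

network interface show -vserver svm1 -lif nfs_lif1 -fields home-node,curr-node,curr-port

If the two nodes differ, that traffic is crossing the cluster interconnect.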

 

I seem to remember that you can do a “vol move” and hold the cut-over… I was thinking of doing the initial vol move (basically a SnapMirror baseline) of the larger volumes, and then doing the cut-over of all the volumes and moving the LIF as one big “off-hours” process 😊 Just to be sure…
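If it helps, this is roughly what I had in mind on the volume side – a deferred cutover, so the baseline transfer can run during the day and the actual cutover is triggered off-hours (names are placeholders and this is untested from my side, so please check the options against your ONTAP release):

volume move start -vserver svm1 -volume datastore01 -destination-aggregate aggr1_a300_01 -cutover-action wait

volume move show -vserver svm1 -volume datastore01

volume move trigger-cutover -vserver svm1 -volume datastore01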

 

I think I remember issues while running NFS over the cluster interconnect… basically high latency.

 

CIFS should not be a problem, and iSCSI can also be moved just by adding LIFs on the target nodes; MPIO will take care of the rest… (this is mainly individual servers with LUNs for SQL Server etc.)
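Roughly like this, I think – create the new iSCSI LIFs on the A300 nodes and make sure the LUNs report paths on those nodes before the volumes move (placeholder names, and the -role/-data-protocol syntax may look slightly different on newer releases that use service policies):

network interface create -vserver svm1 -lif iscsi_a300_a -role data -data-protocol iscsi -home-node a300-01 -home-port e0e -address 10.0.0.50 -netmask 255.255.255.0

lun mapping add-reporting-nodes -vserver svm1 -path /vol/sqlvol/lun1 -igroup sqlserver1 -destination-aggregate aggr1_a300_01

Then rescan/refresh MPIO on the hosts so they pick up the new paths.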

 

But has anyone tried to run NFS datastores that use the cluster interconnect? How severe is the latency? Would it make sense to add 4 cluster links per node rather than two?

 

/Heino

 

 

From: tmac <tmacmd@gmail.com>
Date: Tuesday, 13 October 2020 at 14:01
To: Heino Walther <hw@beardmann.dk>
Cc: "toasters@teaparty.net" <toasters@teaparty.net>
Subject: Re: Migration to new cluster nodes...

 

Be sure about your distances.

I think you are allowed up to 300 m of fiber from filer to switch for 10G. If you are going through patch panels, that usable length drops.

 

If your NFS is only VMware datastores, just create a new datastore and Storage vMotion the VMs over. Don't overthink it.

 

If your NFS/CIFS is normal user data, then you should be able to do a "vol move" from the old AFF to the new one, and then, provided the networking is the same, you can also modify the home-node/home-port of the LIF and move it to the A300s with nearly no disruption.
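For the LIF part, something like this (placeholder names; do the vol moves first):

net int modify -vserver svm1 -lif nfs_lif1 -home-node a300-01 -home-port e0d

net int revert -vserver svm1 -lif nfs_lif1

The revert is what actually moves the LIF onto its new home port; NFSv3 clients typically just see a short pause.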

 

For iSCSI, if you have full multipathing going, and the networking is the same on the old/new AFF, then you could (ONE AT A TIME):

set adv

net int mod -vserv xxx -lif iscsi_lifa -status-admin down

net int mod -vserv xxx -lif iscsi_lifa -home-node a300-a -home-port new-port

net int mod -vserv xxx -lif iscsi_lifa -status-admin up

set admin

WAIT 8 minutes and let multipathing stabilize. Verify at the hosts that all paths are up. Then do the next one.
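On the cluster side you can sanity-check each LIF move with something like (same placeholder-style names as above):

net int show -vserver xxx -lif iscsi_lifa -fields curr-node,curr-port,status-oper

vserver iscsi session show -vserver xxx

vserver iscsi connection show -vserver xxx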

 

--tmac

 

Tim McCarthy, Principal Consultant

Proud Member of the #NetAppATeam

I Blog at TMACsRack

 

 

 

On Tue, Oct 13, 2020 at 7:40 AM Heino Walther <hw@beardmann.dk> wrote:

Hi there

 

We have to migrate our existing AFF8080, which has NFSv3, iSCSI and CIFS data on it, to a new location.

The plan is to add a temporary A300 in the new location and join it to the AFF8080’s cluster with two cluster switches (the sites are less than 200 m apart).

iSCSI and CIFS should be fairly simple to migrate, but NFSv3 is IP-centric, so as we start the migration volume by volume, the data path for some of our VMware hosts will be extended somewhat…

It will go via the AFF8080’s configured IP address over the cluster network to the A300 controller that holds the volume, and back again.

Has anyone tried this with success?

Of course another way would be to migrate the VMs from the VMware side, which I would like to avoid if possible, but I am a bit worried about the added latency over the cluster network…

 

Should I be worried?

 

/Heino

 

 

_______________________________________________
Toasters mailing list
Toasters@teaparty.net
https://www.teaparty.net/mailman/listinfo/toasters