We're doing a full "everything must go!" type of move, and I don't think I'll have the space to replicate everything between pairs. We're a smaller org now, so it's hard to justify the cost of swing gear. But it's also hard to get people to clean up. :-)
John
Sebastian> Just to add to Heino: I recently had a student in one of my courses who moved his datacenter some 4-5 km with exactly this setup (four cluster switches and good redundant connectivity) and some swing gear, if I remember correctly, because he didn't have enough capacity to completely evacuate a pair before the move. Completely non-disruptive, nobody noticed anything...
Sebastian> On Tue, 2 Mar 2021, 19:36 Heino Walther hw@beardmann.dk wrote:
Sebastian> Hi John
Sebastian>
Sebastian> I’m not sure if this helps…
Sebastian> I am also sure you can get the cluster running with half the nodes.
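Sebastian> The thing to check beforehand is epsilon: a 4-node cluster keeps quorum with only two nodes up as long as one of the survivors holds epsilon. A minimal sketch of moving it, assuming the ONTAP 9 clustershell and made-up node names (the epsilon commands need advanced privilege):
Sebastian>
Sebastian>   ::> set -privilege advanced
Sebastian>   ::*> cluster show -fields epsilon              (see which node holds it today)
Sebastian>   ::*> cluster modify -node nodeA -epsilon false
Sebastian>   ::*> cluster modify -node nodeC -epsilon true  (give it to the pair that stays up)
Sebastian>   ::*> set -privilege admin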
Sebastian> I am actually migrating between two HA-pairs as we speak.
Sebastian> We chose to set up a link between the nodes with two cluster switches, so the two HA-pairs are about 500 m apart… (I'm not sure when latency becomes an issue.)
Sebastian> But what we did was to add the two new nodes (temporary nodes) to the existing cluster, and we are then able to use the “vol move” operation to move volumes from one HA-pair to the other, and of course also the LIFs.
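Sebastian> Rehoming a LIF onto the new pair is just two commands, assuming the new node has a port in the right broadcast domain (the vserver/LIF/port names here are invented):
Sebastian>
Sebastian>   ::> network interface modify -vserver svm1 -lif nfs_lif1 -home-node nodeC -home-port e0d
Sebastian>   ::> network interface revert -vserver svm1 -lif nfs_lif1   (walks it to its new home; non-disruptive for NFS)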
Sebastian> Works like a charm so far. Huge NFS datastores have been moved without a hitch.
Sebastian> We did a “vol move start” with the “-cutover-action wait” option, which does the mirroring but waits until we tell it to do the cut-over… (in a service window)
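Sebastian> For reference, roughly what that looks like (volume and aggregate names invented; the destination aggregate sits on the new pair):
Sebastian>
Sebastian>   ::> volume move start -vserver svm1 -volume vol1 -destination-aggregate aggr_nodeC_01 -cutover-action wait
Sebastian>   ::> volume move show -vserver svm1 -volume vol1             (watch the replication phase finish)
Sebastian>   ::> volume move trigger-cutover -vserver svm1 -volume vol1  (run this inside the service window)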
Sebastian> There are, however, some “dedupe” processes which make the cut-over very slow on larger volumes… it keeps telling us that it is waiting for a dedupe process to complete… (both systems are AFFs)… but after 10-30 minutes it completes OK.
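Sebastian> If you want to see what the cut-over is waiting for, you can look at (and, if you are brave, stop) the efficiency operation on the source volume first; same invented names, and note that a stopped dedupe pass simply runs again later:
Sebastian>
Sebastian>   ::> volume efficiency show -vserver svm1 -volume vol1
Sebastian>   ::> volume efficiency stop -vserver svm1 -volume vol1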
Sebastian>
Sebastian> Once we have emptied the source HA-nodes, we will move them to the new DC, and then do it all over again, back onto the original system…
Sebastian>
Sebastian> So far no down time at all, which is nice 😊
Sebastian>
Sebastian> I realize that you may not be as lucky with where you have to move the systems 😉
Sebastian> So drive safely, and if you are running spinning disks, be prepared to replace a few as you start up the system 😉
Sebastian>
Sebastian> /Heino
Sebastian>
Sebastian> From: Toasters toasters-bounces@teaparty.net on behalf of John Stoffel john@stoffel.org
Sebastian> Date: Tuesday, 2 March 2021 at 19:24
Sebastian> To: toasters@teaparty.net
Sebastian> Subject: Moving a 4 node cluster in two pairs?
Sebastian> Guys,
Sebastian> We're getting ready to move our 4 node FAS8060 cluster to a new data center. As part of our due diligence, we're thinking that we would snapmirror the most critical business volumes between the two pairs.
Sebastian> The idea would be that if the truck holding pair A+B doesn't make it for some reason, we can still bring up the cluster with nodes C+D and still have those snapmirrored volumes available to continue working.
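Sebastian> Per critical volume, I'm imagining something like this (SVM/volume/aggregate names invented; assuming an intra-cluster XDP mirror on a reasonably recent ONTAP):
Sebastian>
Sebastian>   ::> volume create -vserver svm1 -volume vol1_dr -aggregate aggr_nodeC_01 -type DP -size 10TB
Sebastian>   ::> snapmirror create -source-path svm1:vol1 -destination-path svm1:vol1_dr -type XDP -policy MirrorAllSnapshots
Sebastian>   ::> snapmirror initialize -destination-path svm1:vol1_dr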
Sebastian> So my questions are:
Sebastian> 1. Can I boot a cluster with half the nodes missing? I'm sure I can...
Sebastian> 2. Has anyone else had to do this half-assed method of shipping DR?
Sebastian> Cheers,
Sebastian> John
Sebastian> _______________________________________________
Sebastian> Toasters mailing list
Sebastian> Toasters@teaparty.net
Sebastian> https://www.teaparty.net/mailman/listinfo/toasters