Another major difference between the LS and DP methods: the DP method requires manual intervention when a failover/restore is needed.
LS mirrors run in parallel with the source, and incoming read/access requests (other than NFSv4) hit the LS mirrors rather than the source volume, so if one mirror fails you don't have to do anything right away; you just need to resolve the issue at some point, with no interruption to service.
LS mirrors can also be put on a schedule so you don't have to update them by hand. And if you need to write to the SVM root for some reason, you have to go through the .admin path in the vsroot, since LS mirrors are read-only (like DP mirrors).
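To make that concrete, a rough CLI sketch; all names here are hypothetical (SVM vs1, root volume rootvol, LS mirror rootvol_m1), not from this thread:

  # Put the LS mirror set on a schedule so it refreshes itself
  snapmirror modify -source-path vs1:rootvol -destination-path vs1:rootvol_m1 -schedule hourly

  # If the source root volume itself is lost, promote a mirror in its place
  snapmirror promote -destination-path vs1:rootvol_m1

Writable access to the root while LS mirrors exist goes through the hidden admin path, e.g. mounting vs1:/.admin instead of vs1:/ over NFS.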
From: Toasters <toasters-bounces@teaparty.net> On Behalf Of André M. Clark
Sent: Wednesday, April 28, 2021 9:29 AM
To: toasters@teaparty.net; John Stoffel <john@stoffel.org>
Subject: Re: Root volume LS mirror best practices?
There are two ways to protect NAS SVM root volumes: Data Protection (DP) mirrors or Load-Sharing (LS) mirrors. The original best practice recommended placing these mirrors of the SVM root volume on every node in the cluster, including the node where the source volume resides. More recent documentation, however, indicates that a single LS mirror is sufficient (see
https://docs.netapp.com/ontap-9/topic/com.netapp.doc.pow-dap/GUID-9FD2F1A0-7261-4BDD-AD87-EA500E6D583A.html?cp=8_3_6_0).
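As a rough sketch of what a single-mirror setup looks like (hypothetical names throughout: SVM vs1, root volume rootvol, mirror rootvol_m1 on another node's aggregate aggr1_node2):

  # Create a destination volume on a different node, then bind it as an LS mirror
  volume create -vserver vs1 -volume rootvol_m1 -aggregate aggr1_node2 -size 1g -type DP
  snapmirror create -source-path vs1:rootvol -destination-path vs1:rootvol_m1 -type LS
  snapmirror initialize-ls-set -source-path vs1:rootvol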
While both DP and LS mirrors accomplish the same end result of protecting the SVM root volume, there are some operational differences that you must be aware of when using
them.
The first difference, if the decision is to use LS mirrors, is an extra step that must be performed to update the namespace. Changes to shares, exports, or new volumes will not be immediately reflected in the namespace; an update of the LS mirror set is required before those namespace changes are visible to all clients.
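That extra step is a single command against the source root volume; with the same hypothetical vs1:rootvol it would look like:

  snapmirror update-ls-set -source-path vs1:rootvol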
Another key difference between DP and LS mirrors is that LS mirrors do not work for NFSv4. In general, I recommend using DP mirrors instead of LS mirrors, but only if the cluster is licensed for SnapMirror.
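If you go the DP route, the relationship is created like any other volume SnapMirror; a minimal sketch with the same hypothetical names (vs1, rootvol, rootvol_dp):

  volume create -vserver vs1 -volume rootvol_dp -aggregate aggr1_node2 -size 1g -type DP
  snapmirror create -source-path vs1:rootvol -destination-path vs1:rootvol_dp -type DP
  snapmirror initialize -destination-path vs1:rootvol_dp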
The procedure for restoring an SVM root volume from either a DP or LS mirror is also well documented.
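In outline (the official docs have the full procedure): with LS mirrors you promote a surviving mirror to become the new root; with a DP mirror you break the relationship and then designate the copy as the SVM root. A rough sketch, same hypothetical names:

  # LS mirror case
  snapmirror promote -destination-path vs1:rootvol_m1

  # DP mirror case (make-vsroot runs at advanced privilege)
  snapmirror break -destination-path vs1:rootvol_dp
  volume make-vsroot -vserver vs1 -volume rootvol_dp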
HTH,
André
From: John Stoffel <john@stoffel.org>
Reply: John Stoffel <john@stoffel.org>
Date: April 23, 2021 at 12:54:29
To: toasters@teaparty.net
Subject: Root volume LS mirror best practices?
Guys,
We're getting ready to do a DC move, so I'm working to set up all my
SnapMirror relationships to hopefully mirror all my data between my
two pairs (all in one cluster right now). The idea being that if one
pair doesn't make the move, we have our data elsewhere. No, we don't
have any swing gear, the system is out of NetApp support, and we do
have a lot of spare space to mirror with. Whew. :-)
So I found the NetApp document "SVM Root Volume Protection
Express Guide" from 2016 and I've started to implement it here,
because if the worst case happens and one pair doesn't make it through
the move due to damage, loss, or catastrophe, I need to be able to
bring up as much data as possible, as quickly as possible.
https://library.netapp.com/ecm/ecm_download_file/ECMLP2496241
I do have some questions for you all, to see what other people are
doing, what the current best practices are, and what real-life issues
to watch out for.
First, I did change my setup so that I mirror every 5 minutes instead
of hourly as given in the quick guide. Since we use NetBackup to make
clones of volumes for backup, that should hopefully work OK, but I
worry that now I need to run

  snapmirror update-ls-set -source-vserver <vs> -source-volume rootvol

after I create a new clone, to make sure that the backup server will
see the clone volume and its junction path. Has anyone run into this
issue, and is your workaround just to do the update by hand, or do you
wait for the change to replicate?
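By "do the update by hand" I mean something like this after each clone
is created (names here are made up for illustration):

  volume clone create -vserver vs1 -flexclone nbu_clone -parent-volume data_vol -junction-path /nbu_clone
  snapmirror update-ls-set -source-path vs1:rootvol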
Second, this document recommends one LS mirror per head in the cluster.
With a four-node cluster, this seems a little crazy, especially since
I've never had any problems when doing failovers during upgrades in
the past. My main goal is just to have the mirrors on separate
pairs. What do people think? Is one rootvol and one rootvol_m1
mirror per VServer enough?
Third, if I go with just one LS mirror per pair, it looks like I
really need to make two mirrors: one on the source pair and one on
the opposite pair. That seems like a pain to manage, and it increases
the number of volumes and SnapMirror relationships to keep track of.
Since my main focus is NOT performance (it's been working just fine
without LS mirrors in the past), is having just one LS mirror good enough?
Cheers,
John