A few general rules of thumb to go by:
  1. Load-Sharing mirrors only ever benefit NAS volumes. SAN volumes have no need to be mounted into a namespace to function.
  2. Best to create ONE LS mirror on each data aggregate in the cluster
    1. Sometimes this is nuts, like having 8 aggregates. At that point, I might defy the practice and just do one per node
    2. Looks like it has been a while since I read those docs; they do show one per node now.
    3. I personally have never had to utilize the LS mirror. It is tiny (1G per node per data SVM). 
  3. You really do not need to update the LS mirror every 5 minutes. Even the FlexPod guides set the schedule to 15 minutes.
    1. You could always run a pre-script that does a "manual" snapmirror update-ls-set before any backup job to be safe (agreeing with your example in a way)
  4. When you use the LS mirror, ONTAP defaults to *only* using the LS mirrors for client access.
    1. To access the RW version, I think you need to mount: svm:/.admin 
    2. So, having the LS mirror on each node is actually a good thing for the clients. Stick with one LS mirror per node!
  5. One SVM root volume can have many replicas. They are managed as a single group: you update one, all get updated. Very easy to manage.
Bottom line: stick with one LS mirror per node. It is quite easy to set up on the CLI (I have never seen it done in the GUI, but then I have never looked either); see the sketch below.
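
Something like this, roughly (untested here, and the SVM, aggregate, and volume names are just placeholders; I'm also assuming a 15min job schedule gets created if it doesn't already exist):

  # one small DP volume per node to hold the LS copies
  volume create -vserver svm1 -volume svm1_root_m1 -aggregate aggr1_node1 -size 1GB -type DP
  volume create -vserver svm1 -volume svm1_root_m2 -aggregate aggr1_node2 -size 1GB -type DP

  # a 15-minute schedule, if you don't already have one
  job schedule interval create -name 15min -minutes 15

  # one LS relationship per mirror volume, then initialize the whole set
  snapmirror create -source-path svm1:svm1_root -destination-path svm1:svm1_root_m1 -type LS -schedule 15min
  snapmirror create -source-path svm1:svm1_root -destination-path svm1:svm1_root_m2 -type LS -schedule 15min
  snapmirror initialize-ls-set -source-path svm1:svm1_root

  # push a namespace change to every LS copy at once (point 5 above)
  snapmirror update-ls-set -source-path svm1:svm1_root

  # clients that need the RW root (point 4 above) mount the .admin path, e.g.
  #   mount -t nfs svm1-data-lif:/.admin /mnt/svm1_root_rw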

--tmac

Tim McCarthy, Principal Consultant

Proud Member of the #NetAppATeam

I Blog at TMACsRack




On Fri, Apr 23, 2021 at 12:54 PM John Stoffel <john@stoffel.org> wrote:

Guys,

We're getting ready to do a DC move, so I'm working to set up all my
SnapMirror relationships to hopefully mirror all my data between my
two pairs (all in one cluster right now).  The idea being that if one
pair doesn't make the move, we have our data elsewhere.  No, we don't
have any swing gear, the system is out of NetApp support, and we do
have a lot of spare space to mirror with.  Whew.  :-)

So I found the NetApp document "SVM Root Volume Protection
Express Guide" from 2016 and I've started to implement it here,
because if the worst case happens and one pair doesn't make it through
the move due to damage or loss or catastrophe, I need to be able to
bring up as much data as possible, as quickly as possible.

  https://library.netapp.com/ecm/ecm_download_file/ECMLP2496241

I do have some questions for you all, to see what other people are
doing, and what are current best practices and real life issues to
watch out for.

First, I did change my setup so that I mirror every 5 minutes, instead
of hourly as given in the quick guide.  Since we use NetBackup to make
clones of volumes to back up, that should hopefully work ok, but I
worry that now I need to add in a:

  snapmirror update-ls-set -source-vserver <vs> -source-volume rootvol

after I create a new clone, to make sure that the backup server will
see this clone volume and junction path.  Has anyone run into this
issue and is your workaround to just do the update by hand, or do you
just wait for the change to replicate? 
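
For what it's worth, the pre-script I'm imagining would only be a couple
of lines (the cluster LIF and SVM names below are made up, and I haven't
actually tested this yet):

  #!/bin/sh
  # Run by the backup job before it starts, right after the clone is
  # created, so the backup host sees the new volume and junction path
  # without waiting for the next scheduled LS transfer.
  CLUSTER=cluster1-mgmt    # cluster management LIF (example name)
  SVM=vs1                  # data SVM that owns rootvol (example name)

  ssh admin@"$CLUSTER" \
      "snapmirror update-ls-set -source-vserver $SVM -source-volume rootvol"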

Second, this document recommends one LS mirror per head in a cluster.
With a four-node cluster, this seems a little crazy, especially since
I've never had any problems when doing failovers during upgrades in
the past.  My main goal is to just have the mirrors on separate
pairs.  What do people think?  Is one rootvol and one rootvol_m1
mirror per VServer enough?

Third, if I go with just one LS mirror per pair, it looks like I
really need to make two mirrors, one for the source pair, and one for
the opposite pair.  That seems like a pain to manage, increases the
number of volumes and SM relationships to manage, etc.  Since my main
focus is NOT performance, since it's been working just fine without LS
mirrors in the past, is just having one LS mirror good enough?


Cheers,
John


_______________________________________________
Toasters mailing list
Toasters@teaparty.net
https://www.teaparty.net/mailman/listinfo/toasters