If there is a SnapMirror license, I like using DP mirrors to protect vsroot.  A KB article covers using both LS and DP.  With DP you don’t have to update the mirrors whenever you make a change, and the recovery is two commands, snapmirror break and vol make-vsroot, instead of snapmirror promote.  I’ve never had to recover, though, and in testing I found that you can make-vsroot any volume and it picks up the namespace as is, without having a mirror at all (but only while the SVM is running).
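A rough sketch of that recovery, assuming an existing DP mirror of the root volume on another node’s aggregate (the SVM and volume names below are only examples):

    snapmirror break -destination-path svm1:svm1_root_dp
    volume make-vsroot -vserver svm1 -volume svm1_root_dp

Once the broken-off copy is the new vsroot, the old relationship can be deleted and a fresh DP mirror created from it.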

 

From: toasters-bounces@teaparty.net [mailto:toasters-bounces@teaparty.net] On Behalf Of Borzenkov, Andrei
Sent: Thursday, April 23, 2015 9:13 PM
To: Francis Kim
Cc: toasters@teaparty.net
Subject: Re: cDOT provisioning strategy

 

Do not forget that the SVM root volume is critical - if it is lost, you lose access to the SVM - so best practice is to set up LS mirrors of the root volume on separate physical aggregates. You can then promote a mirror copy if something happens.
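For reference, a minimal LS setup looks something like this (volume, aggregate, and SVM names are examples only):

    volume create -vserver svm1 -volume svm1_root_ls1 -aggregate aggr_node2 -size 1g -type DP
    snapmirror create -source-path svm1:svm1_root -destination-path svm1:svm1_root_ls1 -type LS
    snapmirror initialize-ls-set -source-path svm1:svm1_root

If the root volume is then lost, one of the copies is promoted in its place:

    snapmirror promote -destination-path svm1:svm1_root_ls1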

 

And of course, do not put any user data on the root volume - only mount points should be there.
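In other words, the root volume only holds the junctions where data volumes are mounted, e.g. (names and paths are illustrative):

    volume mount -vserver svm1 -volume vol_data1 -junction-path /data1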

Sent from my iPhone


On Apr 23, 2015, at 7:35 PM, Francis Kim <fkim@BERKCOM.com> wrote:

To keep SVM management complexity to a minimum, start with one data SVM for the entire cluster, then add SVMs as needed to accommodate workload or management separation.

 

Aside from the location of the SVM’s root volume, there is no direct relationship between an SVM and a particular node.  The relationship between an SVM and a particular node is determined by which node/port the SVM’s LIFs live on (NAS or SAN) and which node owns the aggregates that contain the volumes the SVM is serving out.  For example, if an SVM has its root volume on an aggregate on node1 but all its data volumes and LIFs are on node2, then node2 is doing the great majority of the work.  Yes, the SVM’s root volume is on an aggregate on node1, but the processing that takes place on the root volume is limited to namespace traversal and is therefore minimal.  You might even say that in this case the SVM is “tied” to node2, since it’s node2 that’s doing all the heavy lifting.  NetApp SEs, especially during the early days of cDOT adoption, have been known to spread the idea of creating an SVM for each node in the cluster, which is not really correct.
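To make that concrete, the node association comes purely from where things are placed, along these lines (aggregate, port, and address values are made up for illustration):

    volume create -vserver svm1 -volume vol_data1 -aggregate aggr1_node2 -size 1t -junction-path /data1
    network interface create -vserver svm1 -lif svm1_nfs1 -role data -data-protocol nfs -home-node node2 -home-port e0c -address 10.0.0.50 -netmask 255.255.255.0

With the data volume and the LIF both homed on node2, node2 serves the I/O even though the SVM root sits on node1.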

 

Think of an SVM as a container of resources that ought to be managed as a unit.  If you want to delegate the management of such a unit, an SVM is a convenient construct.  Secure multitenancy is a very good use case for multiple SVMs.  I’ve seen customers spin up SVMs to isolate workloads such as virtualization, NAS file serving, and Exchange databases.  I’ve also seen customers create separate SVMs in anticipation of NetApp developing SVM-level DR capability in the future, similar to 7-Mode’s vfiler dr.
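Spinning up an additional SVM for such a case is only a couple of commands, something like this (names and settings are examples, not a recommendation):

    vserver create -vserver svm_exchange -rootvolume svm_exchange_root -aggregate aggr1_node1 -rootvolume-security-style ntfs

after which it gets its own volumes, LIFs, and export/share policies, independent of the other SVMs.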

 

 

Francis Kim | Engineer

510-644-1599 x334 | fkim@berkcom.com

 

BerkCom | www.berkcom.com

NetApp | Cisco | Supermicro | Brocade | VMware

 

On Apr 23, 2015, at 9:08 AM, Momonth <momonth@gmail.com> wrote:

 

Hi,

We are (finally) getting close to going live with cDOT clusters. I know
some of you are already "happy customers" and I would like to
hear your thoughts on the following:

1. SAN: Create one SVM per physical node? Or multiple SVMs per node,
and then combine the LUNs that belong to a certain application on a
single SVM?

2. NAS: pretty much the same question - what are the pros and cons of
having multiple SVMs? What are your criteria when you decide to spin up
a new SVM?

The SVM concept seems to give quite a bit of flexibility, but I can't
get my head around when / how to apply it. I've never
used vfilers in the 7-Mode world.

Cheers,
Vladimir

 

_______________________________________________
Toasters mailing list
Toasters@teaparty.net
http://www.teaparty.net/mailman/listinfo/toasters