Using separate SVMs in cDOT won't buy you anything other than the inability to access data between SVMs. An SVM is a virtual resource that spans all nodes: you can create volumes on any node in the cluster, so a single SVM can accomplish what you are trying to do with multiple SVMs. In 7-Mode you were limited to a single node at a time, which is why cDOT rocks. ;)
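For example, with a single SVM it would look roughly like this (all names below are made up, adjust to your environment):

volume create -vserver ora_svm -volume rac1_data -aggregate aggr_sas_node1 -size 500g
volume create -vserver ora_svm -volume rac2_data -aggregate aggr_sas_node2 -size 500g

Same SVM, same namespace, but the volumes live on aggregates owned by different nodes.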

 

Balancing the workload in cDOT is easy, and if you need to rebalance later it's non-disruptive with vol move.
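Roughly like this, with the same hypothetical names as above; the volume stays online and exported while it moves:

volume move start -vserver ora_svm -volume rac1_data -destination-aggregate aggr_sata_node3
volume move show -vserver ora_svm -volume rac1_data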

 

The junction path design is going to be a minimal issue. Simply junction off vsroot and spread the volumes out across your cluster's nodes/aggregates. If you need to retire a database/dataset, vol move it to SATA. Need performance? Vol move to the two AFF nodes you added to your cluster. The design of cDOT is ideal for this type of scenario.

 

So something like this:

 

/ -> vsroot

/RAC1    /RAC2    /RAC3    /LOGS1    /LOGS2
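To build that namespace you just mount each volume off vsroot, e.g. (hypothetical names again; the same thing can be done at creation time with -junction-path on volume create):

volume mount -vserver ora_svm -volume rac1_data -junction-path /RAC1
volume mount -vserver ora_svm -volume logs1 -junction-path /LOGS1
volume show -vserver ora_svm -fields junction-path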

 

There are some excellent Oracle/cDOT docs out there, and the author will very likely be doing some Insight sessions on the topic:

 

http://www.netapp.com/us/media/tr-3633.pdf

http://www.netapp.com/us/media/tr-4145.pdf

 

From: toasters-bounces@teaparty.net [mailto:toasters-bounces@teaparty.net] On Behalf Of Milazzo Giacomo
Sent: Monday, July 06, 2015 4:20 AM
To: Toasters
Subject: cDOT and Oracle

 

Hi everybody,

 

Million-dollar questions ;-) aka the transition to cDOT could really be a nightmare!

The questions come not only from the best practices and TRs I've read (or could read…) but also from your ideas about the best way to bring into cDOT what is now deeply optimized in 7-Mode.

 

 

As is – 7-Mode

A lot of NFS volumes used to separate the data/log/temp of each Oracle instance. Each volume is exported only to the hosts that manage the binaries and the DB.

In the production environment all instances are RAC, while in dev/test most of them are RAC but some consist of just one server.

90% of the hosts are VMware VMs; some are physical, for RAC instances composed of a physical/virtual pair of hosts.

Volumes are hosted on aggregates composed of SAS or FC disks, depending on the type and importance of the workload. In test/dev the volumes are mostly on SATA, but not exclusively.

Of course, all the instances are managed and replicated using Snap Creator, integrated with DFM Protection Manager, which is in turn integrated with SnapMirror.

 

 

To be – cDOT (8.3.1) <- we hope to be able to use the SVM DR feature somewhere…

Each instance in its own SVM? This could be a nightmare: managing IPspaces, dedicated LIFs, FlexVols that once assigned are locked to that SVM, and so on…

Just one SVM for all the NFS RAC instances? Maybe creating different FlexVols with qtrees for data/log/temp? Or on different aggregates?

One or more SVMs, just a few, only to balance the workloads between controllers?

Is there an advantage offered by the junction paths that would be created?

 

Every idea is welcome, even though it has recently been demonstrated that brainstorming is useless :)

 

 

Regards,

 

 

 

Dott. Giacomo Milazzo

Senior Consultant & Technical Account Manager

mobile: +39 340.6001045

@-mail: g.milazzo@sinergy.it

Web: http://www.sinergy.it


SINERGY SpA   Viale dei Santi Pietro e Paolo 50

00144 - Roma RM  Tel. +39 06 44243674 Fax +39 06 44245272