Well, that’s not entirely correct.
You can also use round-robin bonding (Linux's balance-rr mode, for example), which utilizes all ports in the channel equally, or EtherChannel, which is a whole different story.
Besides that, pNFS and Oracle Direct NFS bring their own multipathing and link aggregation, so in an ideal Oracle Direct NFS setup you would not have to use switch-assisted load balancing at all.
But if this is a shared system that also runs other workloads, your design may vary from this configuration depending on what else you need to support and want to achieve (with regard to resiliency, automatic failover, etc.).
It gets even more interesting the more complicated your network setup is. If, for example, you want to use MLAG (Extreme Networks) or VSS (Cisco), you will have to go with some kind of LACP. But if you can work around that, you will end up with a configuration like you have on your VMware ESXi hosts, where there is no switch-assisted load balancing and the virtualization software takes care of load balancing at a higher level of the stack.
Anyway, there's no rule of thumb as to what you should do, since it depends on so many factors, which can only be outlined by your organization's requirements.
BTW, some switches do allow you to precalculate the physical link used in an LACP configuration for a given source and destination address (whether that's an L2 or an L3 address), so you can spin up multiple aliased IP addresses on both ends and try to achieve better utilization of your channels while designing the network setup.
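To make the precalculation idea concrete, here is a toy sketch. The XOR-of-last-octets hash below is an assumed simplification, not any particular vendor's algorithm (real switches hash full MAC/IP/port tuples and differ by model), but it shows how you could enumerate aliased IP pairs and pick ones that land on different member links:

```python
def member_link(src_ip: str, dst_ip: str, num_links: int) -> int:
    """Pick a channel member link from the XOR of the last IP octets
    (a stand-in for a real switch's hash policy)."""
    s = int(src_ip.rsplit(".", 1)[1])
    d = int(dst_ip.rsplit(".", 1)[1])
    return (s ^ d) % num_links

# Hypothetical aliases on both ends of a 2-port channel: check which
# member link each source/destination pair would hash onto.
host_aliases = ["10.0.0.10", "10.0.0.11"]
filer_aliases = ["10.0.0.20", "10.0.0.21"]

for h in host_aliases:
    for f in filer_aliases:
        print(h, "->", f, "uses link", member_link(h, f, 2))
```

With these particular last octets, the four alias pairs split evenly across both links, which is exactly the effect you would try to engineer while designing the addressing plan.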
Best,
Alexander Griesser
System-Administrator
ANEXIA Internetdienstleistungs GmbH
Phone: +43-5-0556-320
Fax: +43-5-0556-500
E-Mail: ag@anexia.at
Web: http://www.anexia.at
Headquarters address (Klagenfurt): Feldkirchnerstraße 140, 9020 Klagenfurt
Managing Director: Alexander Windbichler
Commercial register: FN 289918a | Court of jurisdiction: Klagenfurt | VAT number: ATU63216601
From: toasters-bounces@teaparty.net [mailto:toasters-bounces@teaparty.net] On behalf of Adam Levin
Sent: Tuesday, May 13, 2014 11:54 PM
To: tmac
Cc: <Toasters@teaparty.net>
Subject: Re: Oracle access and backups
Jeff,
Regarding multiple network connections, you're correct. I'm not sure how Oracle's NFS handles it, but if you're using LACP to aggregate ports from a single Oracle host to a single filer, it's the switch that must choose where the traffic goes, and the switch's algorithm hashes the MAC, the IP, or some combination of them. Since it's a simple hashing algorithm, you will be limited to one port's worth of speed: while the filer and the host will each send data down all their paths, the switch must choose a single destination path for any given flow. It's just how LACP works. Fan-in works great, so if you have lots of hosts connecting to shares on a 4-port multi-mode VIF on the filer, the load will be pretty well balanced, but a single host will always take the same path to a single port on the filer.
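A toy illustration of both effects (the XOR-style hash is an assumption for demonstration, not any specific switch's algorithm): many client MACs fanning in spread across a 4-port channel, while one host/filer pair always lands on the same port.

```python
def hash_port(src_mac_last: int, dst_mac_last: int, num_ports: int = 4) -> int:
    """Toy LACP-style hash: XOR the last MAC octets, mod port count."""
    return (src_mac_last ^ dst_mac_last) % num_ports

filer = 0x0A

# Fan-in: 16 different client MACs spread over all four channel ports.
spread = [hash_port(client, filer) for client in range(16)]
print(sorted(set(spread)))  # [0, 1, 2, 3] -- every port gets used

# But every frame between ONE client and the filer hashes identically,
# so that flow never exceeds one port's worth of bandwidth.
print({hash_port(0x01, filer) for _ in range(1000)})  # {3} -- one port
```

The second print shows why aggregating four 1 GbE ports still caps a single host-to-filer stream at roughly 1 Gb/s.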
We got around that back when I did Oracle on a Solaris system to a NetApp. We simply worked with our DBAs to separate the data and indexes onto multiple shares, and spread those shares over four mountpoints (because we had four ports on the Sun system and four ports on the filer dedicated to the Oracle database).
So, we ended up with half the data files in one directory, half in another, and split the indexes similarly. Control files and redo logs were spread accordingly. So, instead of using port aggregation, we had four independent ports (actually, they were single-mode VIFs in pairs for failover, but nothing multi-mode or aggregated).
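For illustration, the layout Adam describes might look like four independent NFS mounts, one per dedicated port pair (filer names, volumes, and mount options here are hypothetical, not his actual configuration):

```
# /etc/fstab sketch: four separate mounts over four port pairs,
# with data files and indexes split across them by the DBAs.
filer-if1:/vol/oradata1   /u01/oradata1   nfs  rw,bg,hard,vers=3  0 0
filer-if2:/vol/oradata2   /u02/oradata2   nfs  rw,bg,hard,vers=3  0 0
filer-if3:/vol/oraindex1  /u03/oraindex1  nfs  rw,bg,hard,vers=3  0 0
filer-if4:/vol/oraindex2  /u04/oraindex2  nfs  rw,bg,hard,vers=3  0 0
```

Because each mount targets a different filer interface, no switch hashing decision is involved: the traffic split is decided by which files live on which mount.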
This may not be necessary with Oracle's NFS magic, but I'm thinking it would still be required because the decision for where to send the traffic is still happening on the switch.
-Adam
On Tue, May 13, 2014 at 5:36 PM, tmac <tmacmd@gmail.com> wrote:
I would have to be doing DR to worry about DR ;)
Simple mirroring.
Enter hot backup mode, take a snapshot, exit hot backup mode, push the mirror.
Same as you would with FC or another topology...no?
You can always snaprestore back to a point in time, depending on how many snapshots you have.
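As a command-level sketch of that sequence (volume, filer, and mirror names are hypothetical, and the exact SnapMirror invocation depends on your setup; this assumes 7-mode with an existing mirror relationship):

```
# 1. Put the database in hot backup mode
sqlplus / as sysdba <<< "ALTER DATABASE BEGIN BACKUP;"

# 2. Take a snapshot on the filer while datafiles are consistent
ssh filer1 snap create oradata_vol hotbackup_nightly

# 3. Take the database out of hot backup mode
sqlplus / as sysdba <<< "ALTER DATABASE END BACKUP;"

# 4. Push the mirror to the DR filer (run on the destination)
ssh drfiler snapmirror update drfiler:oradata_vol_mirror
```

Recovery is then a SnapRestore of the volume to the chosen snapshot, followed by normal Oracle media recovery using the archived redo logs.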
--tmac
Tim McCarthy
Principal Consultant
Clustered ONTAP NCDA ID: XK7R3GEKC1QQ2LVD (expires 08 November 2014)
RHCE6 110-107-141 (current until Aug 02, 2016)
Clustered ONTAP NCSIE ID: C14QPHE21FR4YWD4 (expires 08 November 2014)
On Tue, May 13, 2014 at 5:33 PM, Jeff Cleverley <jeff.cleverley@avagotech.com> wrote:
Tim,
I'll take a look at the link and speak with the group. I'll see if we can get some type of testing set up.
If you are running your database on NFS, how are you doing the DR? I can obviously quiesce the database and take a snapshot, but if the main file system corrupts, how do you recover using the snapshot? Are you using a FlexClone, etc.?
Thanks,
Jeff
On Tue, May 13, 2014 at 3:22 PM, tmac <tmacmd@gmail.com> wrote:
This is *not* your ordinary NFS, nor is it pNFS. This is the NFS stack that Oracle uses: Oracle's Direct NFS.
It is very stable and performs better than the host OS's standard NFS client. In fact, I have been told that if you look at Oracle
on all the different platforms it can run on, 95% of the NFS stack is the same; the only deviation is the few changes the host platform dictates (Windows, Linux, Solaris, etc.).
It does a lot behind the scenes to utilize all connections. Again, this is not your standard NFS at play here. It is a more-or-less customized version that Oracle uses under the hood, so to speak.
Try setting it up in a dev environment.
Here is a useful link about it:
If you are using NFS now, you would:
1. Shut down Oracle.
2. Create /etc/oranfstab (or $ORACLE_HOME/dbs/oranfstab).
3. Tell Oracle to use the Direct NFS library for storage.
4. Start Oracle.
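For reference, a minimal oranfstab might look like this (the server name, IPs, and paths are placeholders; check Oracle's Direct NFS documentation for your version's exact syntax):

```
server: filer1
path: 192.168.10.5
path: 192.168.11.5
export: /vol/oradata  mount: /u02/oradata
```

Listing multiple `path` entries is how Direct NFS learns about, and load-balances across, several network interfaces to the same filer. On 11g the "new library" step is typically done by relinking with the dNFS ODM library (for example via the `dnfs_on` make target in `$ORACLE_HOME/rdbms/lib`), though the exact procedure varies by release.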
--tmac
On Tue, May 13, 2014 at 5:07 PM, Jeff Cleverley <jeff.cleverley@avagotech.com> wrote:
Tim,
It has been our experience that even if there are multiple network connections on the server, when it makes an NFS connection to the NetApp, it will only use one of those connections. We are not using NFSv4/pNFS. Is Oracle's Direct NFS basically pNFS?
We had a (non-Oracle) process on a system a few years ago. We tried giving it an NFS-mounted file system, and its transactions drove the filers into the dirt. We switched to FC LUNs and set up a "local" file system on the server; it is much easier on the filers and faster than the NFS file system was. We're concerned that using NFS of any type may cause this issue again.
Unfortunately we probably won't find out if it is too abusive until they run full steam. At that point stopping them and changing the process probably won't be very popular :-)
How do you do your backups/DR for your NFS file systems?
Thanks,
Jeff
On Tue, May 13, 2014 at 2:11 PM, tmac <tmacmd@gmail.com> wrote:
Personally, I would strongly encourage you not to use iSCSI.
Instead, look at using Oracle's NFS implementation (Direct NFS).
It is fairly easy to set up. Give the host a bunch of network adapters
(like two or three) and point them at the NetApp.
The client will use all the connections Oracle knows about, and performance can really scream, especially if you can use 10GigE.
--tmac
On Tue, May 13, 2014 at 4:02 PM, Jeff Cleverley <jeff.cleverley@avagotech.com> wrote:
Greetings,
My Oracle experience with and without NetApp has largely been non-existent. Please bear with me on this. All of our current DBs are on dedicated servers with locally attached storage.
One of our groups has a 6280 cluster running 8.1.2P4 7-mode. They want to look into using iSCSI to a new 11.2 Oracle server. The cluster can get pretty busy at times, so I'm not sure Oracle NFS will work in this case.
The questions are largely about backups and DR. I'm curious how most people choose to back this up, and how they recover with that solution. I know there are LUN copy options, SnapManager/FlexClone options, etc. We're very open to manual scripting and custom solutions. Backups would most likely go to a NearStore, and the DR target would be a second server also connected via iSCSI.
Thanks,
Jeff
--
Jeff Cleverley
Unix Systems Administrator
4380 Ziegler Road
Fort Collins, Colorado 80525
970-288-4611
_______________________________________________
Toasters mailing list
Toasters@teaparty.net
http://www.teaparty.net/mailman/listinfo/toasters