I would appreciate it if someone could share how they split the various Oracle parts (please excuse me, I'm not a DBA), i.e. data, logs, archives, etc., among the various volumes in a NetApp cluster.
I have an Oracle instance running on a single F840 that is very shortly going to be clustered. The Oracle binaries are in a separate qtree, and the rest the DBAs put in another qtree that I created for them. The qtrees and mounts are named by the SID (HR in this case).
On the toaster:

    /vol/db/<sid>/archs     archive logs
                 /ctl       control files
                 /data      dbf files
                 /rbs       rbs dbf files
                 /redo      redo logs
                 /system
                 /temp
This gets mounted on a Sun DB server as:

    filer:/vol/db/HR    /rmounts/HR
    /oradata/HR --> /rmounts/HR

The DBAs refer to the database by a symlink, /oradata/HR in this case.
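For concreteness, the Sun-side setup amounts to something like this (a sketch only; the mount options are my guesses, not necessarily what we actually run):

    # NFS-mount the database qtree from the filer
    mount -F nfs -o hard,intr,vers=3 filer:/vol/db/HR /rmounts/HR

    # give the DBAs one stable path that hides the mount layout
    ln -s /rmounts/HR /oradata/HR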
A new mandate is to utilize the clustered F840s and to separate the database files (.dbf), the redo logs, and the archive logs.
In the cluster env there are now 4 volumes, 2 on each head: one for the database, the other for logs (one has snapshots, the other does not). Production DB is on one head, QA on the other.

    filer A        filer B
    ------------------------
    vol DB         vol DB
    vol log        vol log
Do you folks who spread your logs around end up with multiple NFS mounts per database? Should I suggest to my DBAs something like the following:
    A:/vol/db/HR/u01/data      /rmounts/HR/u01
                    /rbs
                    /system
                    /temp

    A:/vol/log/HR/u02/redo1    /rmounts/HR/u02
                     /arch1
                     /ctl1

    B:/vol/log/HR/u03/redo2    /rmounts/HR/u03
                     /arch2
                     /ctl2

    /oradata/HR --> /rmounts/HR
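In /etc/vfstab terms on the Sun box, that proposal would be roughly (again a sketch; the mount options are guesses):

    # device to mount   fsck  mount point      FS   pass  boot  options
    A:/vol/db/HR/u01    -     /rmounts/HR/u01  nfs  -     yes   hard,intr,vers=3
    A:/vol/log/HR/u02   -     /rmounts/HR/u02  nfs  -     yes   hard,intr,vers=3
    B:/vol/log/HR/u03   -     /rmounts/HR/u03  nfs  -     yes   hard,intr,vers=3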
So I end up with 3 NFS mounts for each instance of the database? How do you people do this? There are about 10 other DBs waiting. And what about a DSS, i.e. a reporting database? At present, with all of the DB components on one volume, all the DBAs have to do is snap the one volume and they can start up a DSS instance.
In this new scenario, I guess they will still have to snap the DB volume and also snap one of the LOG volumes. What do you do?
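I imagine the two-snapshot version looks something like this from the admin host (the filer, volume, and snapshot names here are made up; snap create is the ONTAP command, and presumably the instance has to be in hot backup mode first for the copies to be consistent):

    # snapshot the data volume and the log volume it depends on, back to back
    rsh filerA snap create db  dss_refresh
    rsh filerA snap create log dss_refresh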
Sorry for being so long-winded, but I was looking for specifics; unfortunately I couldn't find anything on either the NOW site or the list archive, just the white papers that describe the concept.
Thanks, George
-------------------------------------------------------------------------------
George Kahler                        e-mail:   george@yorku.ca
Sr. Systems Administrator            humans:   (416) 736-2100 x.22699
Computing and Network Services       machines: (416) 736-5830
Ontario, Canada, M3J-1P3
George,
When do you need this by?
We have about 24 instances supported by about 10 SUN servers, and we are about to set up a cluster as well. Yes, QA on one side and the live DB on the other is a good way to go, as it reduces the confusion about what is where on each side of the cluster.
Our mounts point at general file areas, with subdirectories broken down by SID (the Oracle term for a unique DB instance). We then use a .cshrc<SID> file for each SID and only have about 4 mount points per server; sourcing the right file establishes an environment pointing at the proper paths for the specific database or SID.
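A minimal sketch of what one of those per-SID files can look like (the paths and Oracle version here are made up for illustration, not our real layout):

    # .cshrcHR -- source this to work on the HR instance
    setenv ORACLE_SID  HR
    setenv ORACLE_BASE /u01/app/oracle
    setenv ORACLE_HOME $ORACLE_BASE/product/8.1.7
    set path = ($ORACLE_HOME/bin $path)

Source the right file and every tool is pointed at the right database.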
I can dig up more info for you on Monday. We have been running like this for a couple of years and like the ease of the layout.
By the way, I would keep a local copy of the control files on your SUN server. The only problem we have had with network or power interruptions to the filer is a lock that might be held on a control file. We have 3 control files per instance: 2 on the filer on different volumes, and 1 on the SUN box. We have seen about a 1% chance of a control file holding a lock that didn't get cleared after a hard reboot (the power went out). When that happens, the Oracle instance will tell you which control file is causing the problem; you then delete the problem file and replace it with a copy of one of the other 2. Then up and away you go again.
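In init.ora terms the 3-copy layout is something like this (the paths are illustrative, not our real ones):

    # 2 copies on the filer (on different volumes), 1 on local disk on the SUN box
    control_files = (/oradata/HR/ctl1/control01.ctl,
                     /oradata/HR/ctl2/control02.ctl,
                     /export/home/oracle/HR/control03.ctl)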
So, when do you need the full info of how we lay out our system? I can get you more info starting Monday.
Joe
Database Systems Engineer
Institutional Computing
Information Services
Jet Propulsion Laboratory
On Sat, 07 Dec 2002 08:24:22 -0800, Joseph Bishop <Joseph.Bishop@jpl.nasa.gov> wrote:
> I can dig up more info for you on Monday.
Thanks, Monday is fine. I have to propose something to my DBAs soon, as the new F840 head is arriving next week and then it's Xmas, which is when we'll do the re-org.
By having only about 4 mount points, were you implying that you mount at the volume level rather than at each individual SID qtree level, thereby reducing the number of mounts? If yes, I thought about that too, but folks here are hesitant since each DB server would be able to see other good things that are in that volume.
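I suppose the exposure could be narrowed by exporting each SID's qtree separately with host restrictions in the filer's /etc/exports, something like this (hostnames made up):

    /vol/db/HR    -access=hrdb01,root=hrdb01
    /vol/db/PAY   -access=paydb01,root=paydb01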
George

-------------------------------------------------------------------------------
George Kahler                        e-mail:   george@yorku.ca
Sr. Systems Administrator            humans:   (416) 736-2100 x.22699
Computing and Network Services       machines: (416) 736-5830
Ontario, Canada, M3J-1P3