I have just one database sitting on two volumes on netapp02, 600GB in total. The database itself is only about 250GB. The volumes were sized that way to give sufficient performance (26 x 36GB drives). We are using qtree SnapMirror to a volume on another NetApp filer so we can back up to tape from there. We would rather not do volume SnapMirror, since it would waste quite a bit of disk space.
So now that we are doing DR, it seems logical to do it from the qtrees that are already SnapMirrored to the second filer. We really want to avoid mirroring from the original volumes, since those disks are already being hit very hard by the production database.
We are actually considering adding more disks to the production database, even though we are only 43% full, just to get better performance. Having to do volume SnapMirror would at least double, if not triple, that disk requirement, which seems outrageous.
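For what it's worth, the space math behind that choice can be sketched like this (a back-of-the-envelope sketch; the 250GB/600GB figures come from this message, the two helper functions are my own simplification of how each mirror type sizes its destination):

```python
# Rough space comparison: qtree vs. volume SnapMirror for this setup.
# Figures from the thread; the sizing rules are simplified assumptions.

def qtree_mirror_space_gb(data_gb):
    # Qtree SnapMirror copies only the data in the qtrees, so the
    # destination needs roughly the size of the live data.
    return data_gb

def volume_mirror_space_gb(volume_gb):
    # Volume SnapMirror replicates the whole source volume, so the
    # destination must be at least as large as the source volume,
    # regardless of how full that volume actually is.
    return volume_gb

data_gb = 250      # actual database size
volume_gb = 600    # provisioned source volumes (sized for spindles)

print(qtree_mirror_space_gb(data_gb))     # ~250 GB on the destination
print(volume_mirror_space_gb(volume_gb))  # >=600 GB on the destination
```

The gap only widens if more disks are added for performance, since volume SnapMirror tracks the provisioned size, not the used size.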
The best solution would be a cascading qtree SnapMirror. Does anyone know if this is in the works? I can't believe I'm the only one who needs it.
Derek
-----Original Message-----
From: James Brigman [mailto:jbrigman@nc.rr.com]
Sent: Monday, June 28, 2004 8:41 PM
To: toasters@mathworks.com
Subject: RE: DR project - SnapMirroring Oracle database
Derek, et al.:
I'll reply to you publicly on the list and see if I get a rise out of any of the others on Toasters.
First: I'd advise you not to rule out qtree SnapMirror just yet because it won't cascade. Tell us why you need more than one mirror to do both DR and backup.
Second: I just finished setting up qtree mirrors for a couple of databases. What I actually have are five volumes:
Orabackup
Oraarch
Oradata
Orabase
Oralog
I've got five databases, and five qtrees in each volume, one per database. And let me tell you: as bad as you might think qtree mirroring is, volume mirroring is worse for this configuration.
Anyway: you can do your backup any number of ways. The ways I considered are:
1) Shut down the database and mirror or snap the "oradata" contents (the database files themselves).
2) Do nightly full dumps and snap, mirror, or back those up.
3) Put the database into hot backup mode and snap both the data and the logs.
There might be other, more creative ways, but I'm not a DBA.
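For option 3, the basic hot-backup cycle boils down to a handful of SQL and filer commands. Here is a minimal outline, expressed as the command strings a script would issue; the tablespace names, filer, volume, and snapshot name are made-up placeholders, and `rsh <filer> snap create` is the classic Data ONTAP snapshot command. Treat this as a sketch, not a tested script:

```python
# Sketch of one hot-backup-plus-snapshot cycle, as command strings.
# All names below (USERS, SYSTEM, netapp02, oradata, nightly) are
# hypothetical placeholders for illustration.

def hot_backup_commands(tablespaces, filer, volume, snap_name):
    cmds = []
    # 1. Put each tablespace into hot backup mode so its datafiles
    #    stay recoverable while blocks change underneath.
    for ts in tablespaces:
        cmds.append(f"ALTER TABLESPACE {ts} BEGIN BACKUP;")
    # 2. Take the filer snapshot while the datafiles are in backup mode.
    cmds.append(f"rsh {filer} snap create {volume} {snap_name}")
    # 3. Take the tablespaces back out of backup mode.
    for ts in tablespaces:
        cmds.append(f"ALTER TABLESPACE {ts} END BACKUP;")
    # 4. Archive the current redo log so recovery can roll forward
    #    past the backup window.
    cmds.append("ALTER SYSTEM ARCHIVE LOG CURRENT;")
    return cmds

for cmd in hot_backup_commands(["USERS", "SYSTEM"],
                               "netapp02", "oradata", "nightly"):
    print(cmd)
```

The qtree mirror update then transfers the snapshot-consistent image, which is why the extra snapshot space shows up on the source filer.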
Anyway: the task was to mirror these existing volumes to an R200. Right away, a bad problem surfaced: my oralog volume on my production filer was only two 72GB drives. Doing volume mirroring would eat up huge amounts of space on the R200 destination filer for no good reason.
So I mirrored all the qtrees while running a hot backup. It seems to work OK. Watch out, though: the mirror snapshots take extra space on the source filers!
For what it's worth: NetApp filers stand the OFA on its head, and there don't seem to be many people walking around who know this yet. There are some good hints on Oracle configuration at now.netapp.com, but they don't go into much detail at all.
In the next couple weeks, I hope to change this on a few of my test/dev databases by making one huge volume and exporting the database mounts from qtrees. I will recover SIX parity drives for reuse as data if I do this!
Keep up the chatter. This is an important topic that few seem to understand well.
JKB
-----Original Message-----
From: owner-toasters@mathworks.com [mailto:owner-toasters@mathworks.com] On Behalf Of Derek Lai
Sent: Monday, June 28, 2004 1:09 PM
To: 'toasters@mathworks.com'
Subject: RE: DR project - SnapMirroring Oracle database
I have not gotten much response. Hopefully it is due to the timing of my Friday afternoon email.
If you have done a DR project with Database and SnapMirroring, I'd love to hear about your experience.
Over the weekend I came across another big obstacle to using SnapMirror: you can't cascade qtree SnapMirror! This seems like a very big restriction, as it means wasting quite a bit of space to do things like backup and DR.
Derek
-----Original Message-----
From: Derek Lai [mailto:Derek.Lai@onyxco.com]
Sent: Friday, June 25, 2004 3:12 PM
To: 'toasters@mathworks.com'
Subject: DR project - SnapMirroring Oracle database
I'm in the process of architecting a DR project for mirroring our production Oracle database and would love to hear from those who have done so already.
We have been SnapMirroring the daily changes from one filer of our clustered F940 pair to the other for backup. Looking through the SnapMirror log, I see we are mirroring about 23GB of data on average, on a database of about 200GB total. We were thinking of using the maximum daily SnapMirror transfer to size the pipe we need to the DR site, and that works out to a fairly large pipe.
Now our DBA tells me that using Oracle's archived logs they could just ship the logs to the remote site, replay them there, and achieve pretty much the same result. The archive logs average only about 4.5GB: one fifth the size. That is significant when you are shipping data halfway across the country! (We are in Foothill Ranch, CA and our DR site is in St. Louis.) If that holds up, management will probably choose log shipping over SnapMirroring the data.
I'm going through the data we are SnapMirroring to figure out whether something is wrong with our setup; I think we are actually SnapMirroring the archive logs as well. Has anyone else been down this path? Any other ideas or suggestions? We think the log-shipping method will involve a lot more manual work and would like to avoid it if possible, but we need good justification for that as well.
Derek
We've got a similar arrangement here. We decided that SnapMirror wasn't the way to go; there's really no way around it. We found it much easier to just copy the archive logs around and keep the standby DB as a hot standby, either with or without managed recovery (dealer's choice). The problems with SnapMirror were:
1. Too high a startup cost. In our case, we had to copy almost 10GB for every SnapMirror update. That can be reduced with a qtree SnapMirror, but it's still awfully expensive, and it's on top of the changed blocks, of which I only have about 5GB each day. FTP doesn't have that overhead, and I'm looking forward to 9i, when I can use Data Guard.
2. Oracle doesn't like read-only datafiles (which is what you get with an unbroken SnapMirror).
3. You cannot actually open your DR DB. I don't know what your environment is, but if your boss wants to go to the DR site and connect to the DB, that's going to be awfully hard without breaking and resyncing the mirrors. With the hot standby solution, you can open the DB read-only (this requires an Enterprise Edition license).
4. There's no way, without opening your DB, to verify that all of the datafiles are in good shape. In this respect DR is like backups: if you don't verify them, you don't really have them. I'd hate to bet my company on that.
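The startup-cost point (1) works out numerically like this (Jason's rough figures; the once-a-day schedule is an assumption for illustration):

```python
# Daily WAN payload: SnapMirror per-update overhead plus changed
# blocks, vs. plain archive-log shipping. Figures from this message;
# the once-daily update schedule is an illustrative assumption.

def snapmirror_daily_gb(overhead_gb, changed_gb):
    # Each SnapMirror update in this environment carried a roughly
    # fixed overhead on top of the actual changed blocks.
    return overhead_gb + changed_gb

def log_ship_daily_gb(archive_log_gb):
    # Copying archive logs by FTP moves only the logs themselves.
    return archive_log_gb

print(snapmirror_daily_gb(10, 5))  # 15 (GB/day over the wire)
print(log_ship_daily_gb(5))        # 5  (GB/day over the wire)
```

Three times the traffic for the same recoverability is the core of the complaint, before the verification issues in points 2-4 even come up.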
If anyone has contrary advice, I'd love to hear how it's done. That doesn't mean I'd change my mind (it took me about 45 minutes to set up the archive log transfer, and it has worked flawlessly for 18 months), but I always like to learn. To be honest, I'm planning to start using SnapMirror to keep a fourth copy of my database in the primary datacenter (synced every 5 minutes or so); I just don't think you can use it where you need verification that something worked.
Now, I'm totally down with assembling the DR DB using SnapMirror; I couldn't ask for anything easier there. It's just maintaining it where I had a problem.
Jason
On Jun 30, 2004, at 2:16 PM, Derek Lai wrote: