I agree. Snapshots at a qtree level would be great. The problem I am having is more along the lines of the sizes of disks that are available.
We have multiple Oracle databases of different sizes. However, most of our databases are around 20 - 30 gigs.
Take into consideration that you would like to have a separate volume for the database, the archive logs and the redo logs; this amounts to a lot of disks.
Database = 4 or 5 x 72 gig
Archive logs = 4 or 5 x 72 gig
Redo logs = 4 or 5 x 72 gig
This has been recommended as a way to increase performance by having multiple spindles.
Then we SnapMirror the database volume, which takes another 4 or 5 x 72 gig.
Total 16 - 20 x 72 gig disks for a 20 - 30 gig database.
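
To make the waste concrete, here is a rough back-of-the-envelope sketch in Python (taking the worst case of 5 disks per volume from above, and ignoring RAID parity, spares and snap reserve entirely):

    # Rough arithmetic for one 30 gig database laid out as above.
    # Illustrative only: no RAID parity, spares or snap reserve counted.
    disk_gb = 72
    db_gb = 30

    volumes = {
        "database": 5,
        "archive_logs": 5,
        "redo_logs": 5,
        "snapmirror_target": 5,
    }

    total_disks = sum(volumes.values())
    raw_gb = total_disks * disk_gb            # 20 x 72 = 1440 gig raw
    utilization = db_gb / raw_gb * 100        # roughly 2%

    print(f"{total_disks} disks, {raw_gb} gig raw, "
          f"{utilization:.1f}% holding the database itself")

Even if you count the logs and the mirrored copy as useful data, you stay well under 10% of the raw capacity.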
Additionally, if you stay with the practice of having only one database per volume and you have 10 databases all around the same size, you can do the maths. Looking at what percentage of your disks you are actually utilizing becomes a big laugh.

Some people say that in this situation NetApp might not be the right technology for these databases. However, our recovery is far quicker than with any other solution I have seen, and the ease of administration is a great advantage. All my problems would be resolved if we could justify the use and waste of so many disks.

I wonder whether having multiple small databases on the same volume with qtree snapshots would be something for the future. Or maybe not. Any recommendations would be appreciated. Thanks,
Kevin
-----Original Message-----
From: Brian Tao [mailto:taob@risc.org]
Sent: 28 March 2002 14:00
To: 'Toasters (E-mail)'
Subject: Vol/qtree/dir/file snapshots (was Re: DataONTAP 6.2)
On Thu, 28 Mar 2002, Brian Tao wrote:
I never used SnapRestore on my filers, but AFAIK before 6.2 you could only SnapRestore an entire volume. 6.2 adds the ability to do a single-file SnapRestore. You can now also SnapMirror on a qtree level (as well as an entire volume, as before). However, snapshots are still done at a volume level. Someone correct me if I'm wrong. :)
Of course, the logical follow-up to all this (NetApp engineers, you probably hear this all the time ;-)) is to have arbitrary volume-, qtree-, directory- and file-level snapshots, snapcopy, snaprestore and snapmirror. Bonus points awarded if that can be done efficiently at a uid/sid/gid level. Super bonus points for the ability to specify default settings with exceptions, e.g. snapshot everything in /vol/vol0 except files owned by UNIX uid 160 (which might be the Oracle uid). DOT 7.0? :)
Actually, what I *really* want is qtree-level snapshots. Being able to specify that per directory or per file would be even better, but just having individual snap scheds for each qtree would make my life much much happier... :)
"Noll, Kevin" wrote:
I agree. Snapshots at a qtree level would be great. The problem I am having is more along the lines of the sizes of disks that are available.
I did wonder if the solution to this type of problem would be to allow multiple volumes per RAID group. Then you could have smaller volumes while keeping up the spindle count.
This does kind of turn things on their head, but it might be easier to implement than qtree snapshots.
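
Just to put a rough number on that (purely hypothetical figures -- volumes cannot share a RAID group today, and the 100 gig of real data per database is my own guess, not Kevin's):

    # Hypothetical: 10 databases, each needing roughly 100 gig of real
    # space for data plus logs (an assumed figure), on 72 gig disks.
    import math

    databases = 10
    real_gb_per_db = 100        # assumption, not a figure from the thread
    disk_gb = 72

    dedicated = databases * 15  # ~15 disks each with one volume per purpose
    shared = math.ceil(databases * real_gb_per_db / disk_gb)

    print(f"{dedicated} disks with dedicated volumes vs "
          f"roughly {shared} if the volumes could share spindles")

Each small volume would still see every spindle in the shared group, which is the point of the suggestion.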
It did occur to me that you could get the effect of qtree snapshots if you allowed selective deleting of snapshots, i.e. "snap create vol name" would create a snapshot of the whole volume (quickly), but a "snap remove vol path name" would remove part of that snapshot (which could take longer). But now that I think about it, I can't see how you would tidy up the directories with removed entries, or linked files with one link removed.
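
As a toy illustration of where that gets awkward (just a Python sketch, nothing to do with how WAFL actually stores snapshots; the paths, inode numbers and the "snap remove" name are all made up), hard links mean that dropping one path from a frozen snapshot doesn't necessarily free anything, and the parent directory image would still have to be rewritten:

    # Toy model only -- not WAFL. A snapshot here is a frozen map of
    # paths to inode numbers; hard links mean several paths can share
    # one inode.
    from collections import defaultdict

    snapshot = {
        "/db1/system.dbf": 100,
        "/db1/redo01.log": 101,
        "/db2/system.dbf": 200,
        "/db2/backup.dbf": 201,        # hard link...
        "/archive/backup.dbf": 201,    # ...to the same inode
    }

    links = defaultdict(int)
    for inode in snapshot.values():
        links[inode] += 1

    def snap_remove(prefix):
        """The hypothetical 'snap remove vol path name': drop one
        subtree from an existing whole-volume snapshot."""
        freed = []
        for path in [p for p in snapshot if p.startswith(prefix)]:
            inode = snapshot.pop(path)
            links[inode] -= 1
            if links[inode] == 0:
                freed.append(inode)    # blocks genuinely reclaimable
            # otherwise another link (here under /archive) still pins
            # the inode, so removing this entry frees nothing -- and the
            # parent directory block frozen in the snapshot would have
            # to be rewritten, which is exactly the tidy-up problem.
        return freed

    print(snap_remove("/db2/"))        # only inode 200 is freed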
- Bruce
--
Bruce Arden                               arden@nortelnetworks.com
CSC, Nortel, London Rd, Harlow, England   +44 1279 40 2877