We run Oracle 9i and 10g on 144 GB disks. We had to create very large volumes to get the performance we wanted; for example, one of our volumes is composed of three 12+1 RAID groups (twelve data disks plus one parity each). We have data and logs in the same qtree, but we also have another qtree on the opposite filer (in a cluster) as a second Oracle logging qtree. We don't even put our databases in hot backup mode when we need to make a copy -- we just SnapMirror it whenever and bring the copy up in recovery mode. It's worked about 2000 times so far with no problems.
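For anyone curious, the copy procedure is roughly this (a sketch of the 7-mode commands, assuming a destination filer named `filer2` and a mirror volume named `oravol_mir` -- those names are made up, yours will differ):

```
filer2> snapmirror update filer2:oravol_mir   # pull a fresh transfer from the source
filer2> snapmirror break oravol_mir           # quiesce the relationship, make the mirror writable
```

Then mount the copy on the other host and start Oracle; it does crash recovery from the online redo logs, the same as if the server had lost power mid-flight. That's why hot backup mode hasn't been necessary for us.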
I'm not a DBA, by the way, so don't ask me details about the Oracle environment :)
Unfortunately I didn't leave room for RAID-DP in the original setup, as I didn't know it was going to come out. So we'd have to order additional disks, but we'd use it if we could.
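On the statit question further down: statit lives in advanced mode and prints per-disk utilization, which is the closest thing to spotting a hot disk from the filer itself. A rough sketch (run it across a representative busy window):

```
filer> priv set advanced
filer*> statit -b      # begin collecting
  ...wait through a busy period...
filer*> statit -e      # end and print the report, incl. per-disk stats
filer*> priv set       # drop back to normal privilege
```

Look at the disk utilization section of the report; one or two disks pegged while the rest of the RAID group idles is your "hot disk".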
On Wed, 8 Dec 2004, rkb wrote:
Had a meeting yesterday with my DBA team. We were discussing going to production with RAC boxes. They want to distribute Oracle across 3 volumes: Data, Redo, Archive. They prefer small volumes in the neighborhood of 100 Gig, especially for the Data vols, since we own SnapRestore. We'll only put one Oracle DB on a data volume. Surely, though, we won't need 100 Gig, especially since we are stripping the Redo and Archive logs out of there.
I have all 72 Gig drives and warned that a 2- or 3-drive volume is probably too small a number of disks. We are not at 6.5 (yet), so we didn't discuss Double Parity, but it's certainly on my agenda. I'm counting on 2 parity drives when we get there -- so at minimum they want a 3-disk volume (2 of them parity drives) for Oracle.
What say you all? Would you, or are you, putting the hot-writing Redo logs on a volume with other stuff? How is contention for that volume? I told them to "think differently about NetApp". It's not JBOD...
I told them to think of that 128 MB cache (we have a FAS940) as "your disk" and not to worry about how NetApp flushes those writes out to disk. Think large volumes, not smaller ones.
Does DataFabric Manager help identify "hot disks"? Statit?
TIA
-Bob Borowicz