Hi Bob,
Go for enough spindles: that means large volumes with a lot of disks behind them.
You can separate logs and data, because they have totally different I/O
behaviour (how much that matters depends on the I/O load of that DB).
ONTAP 7G will solve your problem, because you can give them small volumes
while still using a lot of spindles in your aggregate. But it's maybe too
soon to go to 7G; that depends on your upgrade strategy and the risk that
you want to take.
DFM can indeed identify your hotspots. We use it, but we also run statit
from a scheduled task (an rsh script) and put the results in our
management system.
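For what it's worth, below is a minimal sketch of how such a scheduled
collection could look. It is not our exact script; it assumes rsh access to
the filer works without a password, that "statit -b" / "statit -e" start and
dump a sample at your privilege level, and the hostname, output directory
and sample length are placeholders you would adapt.

#!/usr/bin/env python
# Rough sketch of a scheduled statit collection over rsh (assumptions:
# password-less rsh to the filer, statit available in advanced privilege
# mode, OUTDIR already exists).

import subprocess
import time
from datetime import datetime

FILER = "filer1"              # hypothetical filer hostname
OUTDIR = "/var/perf/statit"   # hypothetical directory for the reports
SAMPLE_SECONDS = 60           # how long each statit sample runs

def rsh(command):
    # Run a single command on the filer via rsh and return its output.
    result = subprocess.run(["rsh", FILER, command],
                            capture_output=True, text=True, check=True)
    return result.stdout

def collect_sample():
    rsh("priv set -q advanced; statit -b")            # begin counters
    time.sleep(SAMPLE_SECONDS)                        # let the sample run
    report = rsh("priv set -q advanced; statit -e")   # end and dump report
    stamp = datetime.now().strftime("%Y%m%d-%H%M%S")
    path = "%s/%s-statit-%s.txt" % (OUTDIR, FILER, stamp)
    with open(path, "w") as f:                        # one file per run
        f.write(report)
    return path

if __name__ == "__main__":
    print("wrote", collect_sample())

Run something like this from cron and you build up a history of statit
reports you can pull into your own management system.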
Reinoud
UZLeuven
Belgium
-----Original Message-----
From: owner-toasters(a)mathworks.com [mailto:owner-toasters@mathworks.com]
On behalf of: rkb
Sent: Wednesday, December 8, 2004 12:16
To: toasters(a)mathworks.com
Subject: Oracle Volumes
Had a meeting yesterday with my DBA team. We were discussing going to
production with RAC boxes. They want to distribute Oracle across 3
volumes: Data, Redo, Archive. They prefer small volumes in the
neighborhood of 100 Gig, especially for the Data vols, since we own
SnapRestore. We'll only put one Oracle DB on a data volume. Surely,
though, we won't need 100 Gig, especially since we are stripping the Redo
and Archive logs out of there.
I have all 72 Gig drives and warned that a 2- or 3-drive volume is
probably too small a number of disks. We are not at 6.5 (yet) so we
didn't discuss Double Parity, but it's certainly on my agenda, so I'm
counting on 2 parity drives when we get there. That means at a minimum
they'd want a 3-disk volume for Oracle, with 2 of those drives being
parity and only a single data disk left.
What say you all? Would you, or are you, putting the write-hot Redo logs
on a volume with other stuff? How is contention for that volume? I told
them to "think differently about NetApp". It's not JBOD...
I told them to think of that 128 MB cache (we have a FAS940) as "your
disk" and not to worry about how NetApp flushes those writes out to
disk. Think large volumes, not smaller ones.
Does Data Fabric Manager help identify "hot disks"? Statit?
TIA
-Bob Borowicz