I can't really offer any advice on your RAID layout without understanding a lot more about your application. All I can tell you is that we're quite pleased with Oracle's performance on our F880. We have just about everything on one 13-disk volume (including binaries). The "just about" means that things that should be multiplexed, like control files and redo logs, are written to two different NetApps. It sure makes management easy: if our Oracle server fails, we have about 10 minutes to recovery on a spare server. We love the performance; it's wiping our (former) Symmetrix all over the floor.
We do not have a switch between our NetApp and the Oracle server. We tried that initially, and the poor switch was being flooded by all of the traffic between the two ports. Oracle performed fine, but everything else slowed way down. I'd definitely recommend you build a VIF out of two GigE lines, though; otherwise you can see a big drop in performance.
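For what it's worth, the vif itself is quick to set up on the filer console. A minimal sketch, assuming the vif commands as they existed on Data ONTAP 6.x; the interface names and address here are hypothetical, and a multi-mode vif needs matching link aggregation on whatever sits at the other end of the wires (a direct-attached host would need its NICs bonded the same way):

    vif create multi vif0 e5a e5b
    ifconfig vif0 10.0.0.10 netmask 255.255.255.0

Put the same lines in /etc/rc so the vif comes back after a reboot.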
About the only thing we lost by connecting them directly was some of the Cisco switch reporting functions, and we have to go somewhere other than our switch's MRTG graphs to see the traffic. Overall, not a bad trade.
Jason
On Tue, 2003-09-23 at 22:25, Bob Borowicz wrote:
I agree... Good advice.
So I have a simpler (perhaps) question. We are looking at placing Oracle data files on the filer. I have a FAS940 with 56 72-GB drives. When I planned the RAID groups, my main goal was to minimize the "waste" of a 72-GB drive for parity.
I subtracted 2 drives for root and 2 for hot spares, and was left with 52 drives, which divide evenly into four 13-disk sets. So I made my RAID group size 13... a bit high, I knew, but it minimizes "waste"...
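For anyone following along, the arithmetic (with single-parity RAID4, one drive per RAID group goes to parity):

    56 drives - 2 (root) - 2 (hot spares) = 52 drives
    52 / 13 = 4 RAID groups -> 4 parity drives + 48 data drives
    parity overhead: 4 of 52 drives, about 7.7%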
Now we are planning to tie 4 Oracle servers to the box, and I need at least "X" GB for Oracle. (I'm still waiting for a forecast, or even a WAG, from the DBAs.)
Would *you* locate Oracle on a 13-disk RAID group?
I say no, because we will have a dev/test/prod1/prod2 set of Oracle servers with different access patterns. And the math works: if I change the RAID group size to, say, 6, I could take two of my existing 13-disk volumes, split them into four 6-disk volumes, and have 2 additional drives for spares.
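The disk count for that split works out as:

    2 volumes x 13 disks = 26 disks reclaimed
    4 volumes x  6 disks = 24 disks used -> 2 extra spares

and on a 6.x filer with traditional volumes, each create would look something like the following (volume name hypothetical):

    vol create oradev -r 6 6     (6 disks per volume: 5 data + 1 parity)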
What would you do???
-Robert
P.S. For those doing Oracle: do you use a network switch between your Toaster and the Oracle servers?
Jim Harm wrote:
Let's start a warm thread.
My two cents is to:
- build a ten-disk raid from the 15 spares and set the raidsize to ten (9 data + 1 parity)
- migrate one of your eight-disk raid/volumes to it as a qtree
- destroy the now-unused eight-disk raid/volume
- build another ten-disk raid from the leftover spares and the disks freed from the destroyed volume
- add this raid to the previous ten-disk raid/volume to make one big volume
- migrate the next eight-disk raid/volume to the big volume as another qtree
- continue destroying old raids and migrating the old volumes to the new volume as qtrees
- when you are done you should have one big volume of four ten-disk raids and two hot spare disks (see the command sketch after this list)
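On the console that sequence would look roughly like this. It's only a sketch with hypothetical volume and qtree names, assuming traditional volumes and ndmpcopy on a 6.x release (ndmpd has to be turned on first); you'd repeat the copy/destroy/add steps for each old volume:

    vol create bigvol -r 10 10
    qtree create /vol/bigvol/proj1
    ndmpcopy /vol/oldvol1 /vol/bigvol/proj1
    vol offline oldvol1
    vol destroy oldvol1
    vol add bigvol 10
    qtree create /vol/bigvol/proj2
    ndmpcopy /vol/oldvol2 /vol/bigvol/proj2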
I've had two disks fail on one system once in 4+ years, and that was during a disk firmware upgrade that only temporarily failed them (we had to power cycle). We have had several TB of data on NetApp filers at this site over that period.
Use qtrees for all your exports to control space usage, instead of building several volumes/RAID strategies. It's simpler and more flexible: you can reapportion the space with little pain and no RAID or volume reconfiguration, just by changing quotas.
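As an illustration, a couple of tree quotas in /etc/quotas (paths and limits hypothetical; this assumes the classic quota file format, where the disk limit takes a K/M/G suffix):

    /vol/bigvol/proj1   tree   300G
    /vol/bigvol/proj2   tree   150G

Activate with "quota on bigvol"; after that, reapportioning space between projects is an edit to this file and a quota off/on, not a volume rebuild.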
Some may say, "Whoa, you have to have a separate root volume!" I say, "Baloney!" The only time I came close to losing a root volume was in the infancy of NetApp: we did have to reboot and use a different volume as the root volume (I had copied /etc to the non-root volume) because of a problem with the wack program, but even then we recovered the volume. Another concern might be "We're afraid the root volume will get full!", which you can easily avoid with judicious and simple quotas. The only other thing I can think of is the time it takes to back up and restore, which scales with volume size.
At 11:56 AM -0400 9/23/03, Patricia A. Dunkin wrote:
Our F760 has 6 shelves and 42 36-GB drives (running Data ONTAP 6.3.3, if that matters). Four volumes have been configured, with one RAID group in each volume: two have eight disks each, one has seven, and one that is mostly inactive archived stuff has four disks. That leaves fifteen (count 'em, 15) spares.
All the volumes except the archive are at or near 90% of capacity, a point at which I understand performance starts to plummet, and the storage needs of the users of the three active volumes continue to increase.
What is the optimal way to make use of the oversized pool of spares? Thoughts I've had:
- Create a new volume, put new projects there, and maybe move some projects over from existing volumes, so everyone who needs it has room to grow.
- Add disks to the existing volumes gradually as needed, to keep capacity under 90%. If I do this, is it better to add new disks one at a time or several at once? Is going over 90% really a problem, or is that just unfounded rumor?
Other suggestions are welcome. Also, how many disks would it be appropriate to keep as spares? I've been told that one per shelf would be enough, but some postings in the archives indicate that even that may be on the generous side.
Thanks!
Patricia Dunkin
Lucent Technologies
600 Mountain Avenue 3C-306C, Murray Hill, NJ 07974-0636
pdunkin@lucent.com
Phone: 908-582-5843  Fax: 908-582-3662  Pager: 888-371-8506 (8883718506@skytel.net)