Excellent advice.  I would advise a dedicated root volume only if SnapRestore will be needed.  If you have to SnapRestore the root volume, a reboot will be required for the change to take effect.

Jeff Mery, MCP
National Instruments

-------------------------------------------------------------------------
"Allow me to extol the virtues of the Net Fairy, and of all the fantastic
dorks that make the nice packets go from here to there. Amen."
TB - Penny Arcade
-------------------------------------------------------------------------



Jim Harm <jharm@llnl.gov>
Sent by: owner-toasters@mathworks.com

09/23/2003 12:03 PM

       
        To:        pdunkin@lucent.com (Patricia A. Dunkin), toasters@mathworks.com
        cc:        
        Subject:        Re: How many spares? Best way to use extras?



Let's start a warm thread.

My two cents is to:
1. build a ten-disk raid group from the 15 spares and set the raid size to 9+1
2. migrate one of your eight-disk raid groups/volumes to it as a qtree
3. destroy the now-unused eight-disk volume
4. build another ten-disk raid group from the leftovers and the freed disks
5. add this raid group to the previous ten-disk volume to make one big volume
6. migrate the next eight-disk volume to the big volume as another qtree
7. continue destroying old raid groups and migrating the old volumes to the new
                volume as qtrees
8. when you are done you should have one big volume of four ten-disk raid groups
                and two hot spare disks.
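On a Data ONTAP 6.x console, the steps above might look roughly like this (a sketch only; the volume and qtree names are made up, and ndmpcopy behavior can vary by release):

```
# 1. build a ten-disk raid group as a new volume (raid size 10 = 9 data + 1 parity)
filer> vol create bigvol -r 10 10

# 2. migrate an old eight-disk volume into it as a qtree
filer> qtree create /vol/bigvol/proj1
filer> ndmpcopy /vol/oldvol1 /vol/bigvol/proj1

# 3-5. retire the old volume and grow bigvol with another ten-disk raid group
filer> vol offline oldvol1
filer> vol destroy oldvol1
filer> vol add bigvol 10

# 6-7. repeat for the remaining volumes
filer> qtree create /vol/bigvol/proj2
filer> ndmpcopy /vol/oldvol2 /vol/bigvol/proj2
```

Re-point the NFS exports and CIFS shares to the new qtree paths after each ndmpcopy, and re-run it once more during a quiet window if the source changed mid-copy.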

I've had two disks fail on one system only once in 4+ years, and that was during
a disk firmware upgrade that only temporarily failed them
(we had to power cycle).
We have had several TB of data on NetApp filers at this site over that period.

Use qtrees for all your exports to control space usage instead of
building several volumes and raid strategies.
It's simpler and more flexible.
You can reapportion the space with little pain and
no raid or volume reconfiguration, just by changing quotas.
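For instance, tree quotas in /etc/quotas cap each qtree's space, and reapportioning is just an edit followed by re-reading the quotas (the qtree names and sizes here are hypothetical):

```
# /etc/quotas -- one tree quota per qtree
# target              type   disk
/vol/bigvol/proj1     tree   200G
/vol/bigvol/proj2     tree   150G
```

Then `quota on bigvol` to activate them, and after editing the file, `quota off bigvol` / `quota on bigvol` (or `quota resize bigvol` on releases that support it) to pick up the new limits.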

Some may say, "Whoa, you have to have a separate root volume!"
I say, "Baloney!"
The only time I came close to losing a root volume was in the infancy
of NetApp: because of a problem with the wack program,
we did have to reboot and use a different volume as the root volume
(I had copied /etc to the non-root volume),
but even then we recovered the volume.
Another concern might be "We're afraid the root volume will get full!",
which you can easily avoid with judicious and simple quotas.
The only other thing I can think of is the time it takes to back up and
restore, which scales with volume size.

At 11:56 AM -0400 9/23/03, Patricia A. Dunkin wrote:
>Our F760 has 6 shelves and 42 36GB drives (running Data OnTAP
>6.3.3 if that matters).  Four volumes have been configured, with
>one RAID group in each volume; two have eight disks each, one has
>seven, and one that is mostly inactive archived stuff has four
>disks.  That leaves fifteen (count 'em, 15) spares.
>
>All the volumes except the archive are at or near 90% of
>capacity, a point at which I understand performance starts to
>plummet, and the storage needs of the users of the three active
>volumes continue to increase.
>
>What is the optimal way to make use of the oversized pool of
>spares?  Thoughts I've had:
>
>- Create a new volume, put new projects there, and maybe move
>  some projects over from existing volumes, so everyone who
>  needs it has room to grow.
>
>- Add disks to the existing volumes gradually as needed, to keep
>  capacity under 90%.  If I do this, is it better to add new
>  disks one at a time or several at once?  Is going over 90%
>  really a problem, or is that just unfounded rumor?
>
>Other suggestions are welcome.  Also, how many spares would it be
>appropriate to keep as spares?  I've been told that one per shelf
>would be enough, but some postings in the archives indicate that
>that may be on the generous side.
>
>Thanks!
>--
>Patricia Dunkin                                  Lucent Technologies
>pdunkin@lucent.com                 600 Mountain Avenue 3C-306C
>Phone: 908-582-5843                 Murray Hill, NJ 07974-0636
>Fax:   908-582-3662                 Pager: 888-371-8506  Mailto: 8883718506@skytel.net

--
}}}===============>>  LLNL
James E. Harm (Jim); jharm@llnl.gov
System Administrator, ICCD Clusters
(925) 422-4018 Page: 423-7705x57152