I have a set of filers that were poorly planned and poorly configured, and I'd like to ask fellow admins for a hand with how to proceed. (Okay, I know the best way: buy a new cluster of 840s and migrate it all!)
Given that I have 12 "36GB" disks (plus a hot spare) that have been added to one of my filers, should I:
a) Build one large volume and control its space via qtrees?
   a1) If so, should I lower the raid group size or leave it at 14?
b) Build multiple (2? 3? 4?) volumes of smaller size?
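Roughly speaking, I'm picturing either one big volume carved up with qtrees and tree quotas, or several smaller volumes (commands from memory, and the volume/qtree names are just placeholders, so please correct my syntax if it's off):

    # option a: one large volume, space controlled with qtrees + tree quotas
    vol create bigvol -r 14 12@36g      # raidsize left at 14 -> one 11-data + 1-parity raid group
    qtree create /vol/bigvol/home
    qtree create /vol/bigvol/eng
    # ...then tree quota entries in /etc/quotas and "quota on bigvol"

    # option b: multiple smaller volumes, e.g. two of 6 disks each
    vol create vol1 6@36g
    vol create vol2 6@36g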
Pro's & Con's for each approach would be greatly appreciated!
Steve Vawter
Staff UNIX Systems Administrator
Steve.Vawter@C-Cube.COM
voice: 408-490-5310   fax: 408-490-8615
On Wed, 21 Feb 2001, Steve Vawter wrote:
> a) Build one large volume and control its space via qtrees?
>    a1) If so, should I lower the raid group size or leave it at 14?
> b) Build multiple (2? 3? 4?) volumes of smaller size?
Two main considerations I take into account when making the grow/split decision: snapshots and damage control.
First, snapshots can only be set on a per-volume basis, not per qtree. You probably wouldn't put your Oracle tables (automatic snapshots not terribly useful) on the same volume as your users' home directories (keep lots of snapshots around).
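For example, something like this keeps a generous schedule on the home-directory volume and turns automatic snapshots off on the database volume (traditional "snap sched volname weeks days hours[@list]" syntax from memory, volume names made up; double-check against your ONTAP release):

    snap sched home 2 6 8@8,12,16,20    # keep 2 weekly, 6 nightly, 8 hourly snapshots
    snap reserve home 20                # leave room for them
    snap sched oradata 0 0 0            # no automatic snapshots for the Oracle volume
    snap reserve oradata 0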
Second, you may want to split your drives into multiple volumes so that if you do have a RAID failure on one, the others are still available. You shouldn't put your Oracle tables and their redo logs on the same volume... lose that volume, and you've lost everything. Tables on one volume, logs on another (or keep one copy on local disk, or on an entirely separate filer). On the other hand, there's little point splitting your mail spool index database and its message store across two volumes, since you can't really live with one without the other.
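As a sketch of what that split might look like on your 12 new disks (again, syntax from memory and the names are made up):

    vol create oradata 6@36g     # 5 data + 1 parity for the tables
    vol create oralogs 6@36g     # 5 data + 1 parity for the redo logs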
I also try to keep volumes aligned with physical disk shelves. If a shelf dies, it only affects one volume and the rest continue to chug away. Minimum volume size is a full shelf... I don't have any volumes that are less than 5 data disks wide, to keep the parity:data ratio down.
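To put rough numbers on that for the 12 disks in question (one parity disk per raid group):

    one 12-disk volume (raidsize 14):  11 data + 1 parity       -> ~8% of raw space is parity
    two 6-disk volumes:                 5 data + 1 parity each  -> ~17% is parity
    four 3-disk volumes:                2 data + 1 parity each  -> ~33% is parity

So the finer you slice the disks, the more of them you spend on parity.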