I generally use 9-disk raid groups. it's a decent balance between rebuild time and parity overhead (space used by parity instead of user data), and the math works out: a ds14 holds 14 spindles, so two shelves give 28; 3 x 9 == 27 covers three raid groups, leaving one spare for every two ds14 shelves installed.
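to spell out the two-shelf arithmetic (with raid4 that's one parity disk per group; raid DP uses two):

    2 shelves x 14 spindles    = 28 disks
    3 raid groups x 9 disks    = 27 disks
    hot spare                  =  1 disk
    ---------------------------------
    total                      = 28 disks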
as a result, I always try to buy spindles two full shelves at a time. each shelf in a pair goes on a separate FC loop.
on nearstore filers, I use raid DP and the default raidgroup size.
on non-nearstore filers, I also use raid DP for volumes where a double disk failure would cause a business interruption (oracle volumes, for example).
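as a rough sketch only (the volume name and disk count are made up, and exact syntax depends on your ONTAP release, so check the man pages), the traditional-volume commands look something like:

    filer> vol create oravol -r 9 18            (new 18-disk trad volume, 9-disk raid groups)
    filer> vol options oravol raidtype raid_dp  (flip an existing volume to raid DP; needs 6.5 or later, if I remember right)
    filer> sysconfig -r                         (verify the resulting raid group layout)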
hope that helps.
-skottie
Tavis Gustafson wrote:
I usually set up my filers with raidsize=7, so no volume has more than 7 disks. However, I would like to create a raidsize=13 volume. The only risk I can glean from NOW is that rebuild time will increase, which in turn increases the chance of a second drive failure during the rebuild.
Has anyone had any real-world issues with volumes of raidsize > 10?
thanks, -Tavis