I'm catching up on the 6-week-old thread on this topic, and wondered where anyone is with deployment, stability, etc.
It sounded like everyone in that thread went with my initial instinct of "make the aggregate as big as you can, and stuff it with flexvols". I'm wondering if that's the smart thing to do in a real-world scenario, or if there isn't some "middle way".
If three disks fail in any one RAID-DP group in the aggregate, or if two disks fail and the operator accidentally yanks out a third disk while trying to yank one of the first two, or (insert nightmare scenario here), then it's tape restore time for *every flexvol in the aggregate*, isn't it? It's an extreme long shot with RAID-DP, but a very bad outcome if you hit that particular lottery.
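To put a rough number on "extreme long shot", here's a back-of-the-envelope sketch. All of the figures in it (annual failure rate, rebuild window, group size) are illustrative assumptions, not NetApp-published numbers; plug in your own:

```python
# Rough odds that a RAID-DP group loses a THIRD disk while the first
# two failures are still rebuilding. All constants are assumptions
# chosen for illustration -- substitute real values for your drives.
AFR = 0.03            # assumed annual failure rate per disk (3%)
REBUILD_HOURS = 12    # assumed rebuild window for the degraded group
GROUP_SIZE = 16       # assumed disks per RAID-DP group

hourly_rate = AFR / (365 * 24)
# With two disks already down, a failure of any of the remaining
# GROUP_SIZE - 2 disks during the rebuild window loses the group:
p_third_failure = 1 - (1 - hourly_rate) ** ((GROUP_SIZE - 2) * REBUILD_HOURS)
print(f"chance of a third failure during rebuild: ~{p_third_failure:.2e}")
```

The point isn't the exact number, it's that the per-incident odds are small but nonzero, and the blast radius is the whole aggregate.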
I'm trying to decide how to think about that. Maybe divide up shares into different functional groups, or by space utilization, and do three aggregates instead of one? That still leaves lots of space flexibility, but a bad RAID group only takes down a third of the Universe instead of the whole thing. The same issue comes up in choosing the "sweet spot" for RAID group size. I have 12 shelves in the FAS960, and I'm sure I want to minimize disks from the same RAID group sharing shelves. One per shelf is ideal, two is tolerable, and three is "right out".
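For the shelf-spreading question, a simple round-robin assignment gets you to "at most two per shelf" with these counts. This is just a layout sketch under assumed numbers (12 shelves x 14 disks, 16-disk RAID groups); it isn't how ONTAP actually assigns disks:

```python
# Hypothetical layout sketch: interleave disks by shelf, then carve
# consecutive runs into RAID groups, so no group piles up on one shelf.
# Shelf/disk/group sizes are assumptions for illustration only.
from collections import Counter

SHELVES, DISKS_PER_SHELF, GROUP_SIZE = 12, 14, 16

# Order disks shelf-interleaved: slot 0 of every shelf, slot 1, ...
disks = [(shelf, slot) for slot in range(DISKS_PER_SHELF)
         for shelf in range(SHELVES)]
groups = [disks[i:i + GROUP_SIZE] for i in range(0, len(disks), GROUP_SIZE)]

for i, g in enumerate(groups):
    per_shelf = Counter(shelf for shelf, _ in g)
    print(f"group {i:2d}: max disks on one shelf = {max(per_shelf.values())}")
```

With 16-disk groups drawn from a 12-shelf rotation, each group touches every shelf at most twice, which lands in the "tolerable" zone; getting to one-per-shelf everywhere would need groups no wider than the shelf count.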
Thoughts from smart folks appreciated, especially smart folks with working implementations. ;-)
/// Rob