In the trade-off between investment (usable disk space) and risk (data
loss), try to understand how much downtime you can -afford-.
You will have backups, contingencies, etc., but try to math out how much
downtime your enterprise can afford (because there will ALWAYS be some
form of downtime as time goes on), and design the system to fall within
that cost.
That is usually easier than trying to figure out how much uptime you
want to buy, because you always want the MOST uptime, and you soon
realize you can't afford it.
Once you have that number, it's pretty simple to size a system to
match it.
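To make that concrete, here is a minimal sketch of working backward from
a downtime budget to a required availability figure. All the dollar
amounts and rates below are made-up illustrative numbers, not figures
from any real deployment:

```python
# Hypothetical example: work backward from the downtime cost an
# enterprise can absorb to the availability the design must deliver.
HOURS_PER_YEAR = 24 * 365

def required_availability(downtime_budget_dollars, cost_per_hour):
    """Return the minimum availability (0-1) that keeps expected
    annual downtime cost within the budget."""
    affordable_hours = downtime_budget_dollars / cost_per_hour
    return 1 - affordable_hours / HOURS_PER_YEAR

# Assumed figures: outages cost $10,000/hour, and the business can
# absorb $50,000/year in downtime cost -> 5 affordable hours/year.
avail = required_availability(50_000, 10_000)
print(f"Required availability: {avail:.4%}")
```

Once you have that percentage, you can compare it against what each
candidate RAID/aggregate layout is expected to deliver and buy only as
much redundancy as the budget demands.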
-----Original Message-----
From: owner-toasters@mathworks.com
[mailto:owner-toasters@mathworks.com] On Behalf Of Battle, Evan (CBC)
Sent: Tuesday, March 22, 2005 1:09 PM
To: Rob Winters; toasters@mathworks.com
Subject: RE: ONTAP 7.0.0.1, aggregates, and flexvols
If you are worried about this scenario, use smaller RAID groups.
Evan
-----Original Message-----
From: owner-toasters@mathworks.com [mailto:owner-toasters@mathworks.com]
On Behalf Of Rob Winters
Sent: Tuesday, March 22, 2005 1:45 PM
To: toasters@mathworks.com
Subject: ONTAP 7.0.0.1, aggregates, and flexvols
I'm catching up on the 6-week-old thread on this topic, and wondered
where anyone is with deployment, stability, etc.
It sounded like everyone in that thread went with my initial instinct of
"make the aggregate as big as you can, and stuff it with flexvols". I'm
wondering if that's the smart thing to do in a real-world scenario, or
if there isn't some "middle way".
If three disks fail in any one RAID-DP group in the aggregate, or if
two disks fail and the operator accidentally yanks out a third disk
while trying to pull one of the first two, or (insert nightmare
scenario here), then it's tape-restore time for *every flexvol in the
aggregate*, isn't it? It's an extreme long shot with RAID-DP, but a
very bad outcome if you hit that particular lottery.
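For a rough feel of how long that shot is, here is a back-of-the-envelope
estimate of a triple failure in one RAID-DP group. The failure rate,
rebuild window, and group size are assumed illustrative numbers, not
NetApp figures, and the model deliberately ignores correlated failures
(same batch, same shelf), which dominate in practice:

```python
# Simplified estimate: a triple-disk failure happens when two more
# disks in the same RAID-DP group fail during the rebuild window
# that follows a first failure. Independence is assumed, so treat
# the result as an optimistic lower bound.
def p_triple_failure_per_year(group_size, afr, rebuild_hours):
    hourly_rate = afr / (24 * 365)       # per-disk failure rate
    p_first = group_size * afr           # ~chance some disk fails this year
    survivors = group_size - 1
    # probability a given survivor fails inside the rebuild window
    p_in_window = hourly_rate * rebuild_hours
    # any two of the survivors fail before rebuild completes (approx.)
    p_two_more = (survivors * (survivors - 1) / 2) * p_in_window ** 2
    return p_first * p_two_more

# Assumed: 16-disk group, 3% annual failure rate, 12-hour rebuild
print(p_triple_failure_per_year(16, 0.03, 12))
```

The point of the exercise is the shape of the formula, not the exact
number: shrinking the group size cuts both the first-failure term and
the survivors term, which is why smaller RAID groups reduce exposure.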
I'm trying to decide how to think about that. Maybe divide up shares
into different functional groups, or by space utilization, and do three
aggregates instead of one? Still lots of space flexibility, but a bad
RAID group only takes down a third of the universe instead of the whole
thing.
Same issue in choosing the "sweet spot" for RAID group size. I have 12
shelves in the FAS960, and I'm sure I want to minimize the number of
disks from the same RAID group sharing a shelf. One per shelf is ideal,
two is tolerable, and three is "right out".
Thoughts from smart folks appreciated, especially smart folks with
working implementations. ;-)
/// Rob