10% overhead for WAFL, FYI.
As to what Glenn says, I agree – start with the storage space needed, then figure out performance. Don't fill a volume beyond 90% if possible (unless it's all archive-type data).
As for your earlier question:
If you use RAID-DP, you'll lose 2 disks for every RAID group (typically 16 disks total, or 14D+2P).
The disks are right-sized (so that disks from every manufacturer present exactly the same usable capacity). You're left with the number of data disks times the right-sized disk capacity, minus the 10% WAFL overhead.
After that, factor in the snapshot reserve (20% by default) and space reservations if needed.
In the case of 320GB ATA disks, the right size is actually 274,400 MB (the 320GB is a raw, unformatted capacity – you can thank the ATA drive manufacturers for that misleading figure). A RAID group of 16 would net you 3,457,440 MB after the 10% overhead. If you don't need snapshots, that's your usable space; if you do need a snapshot reserve, just subtract that from the total.
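To put that in concrete terms, here's the arithmetic as a quick Python sketch (purely illustrative – the constants are just the right-sized capacity and the 10% WAFL figure above, nothing queried from ONTAP):

RIGHT_SIZED_MB = 274400   # right-sized capacity of a "320GB" ATA disk
WAFL_OVERHEAD = 0.10      # ~10% WAFL overhead

def usable_mb(data_disks, snap_reserve=0.0):
    """Usable MB after WAFL overhead and an optional snapshot reserve."""
    after_wafl = data_disks * RIGHT_SIZED_MB * (1 - WAFL_OVERHEAD)
    return after_wafl * (1 - snap_reserve)

print(usable_mb(14))                     # 16-disk RG (14D+2P): ~3457440 MB
print(usable_mb(14, snap_reserve=0.20))  # with the default 20% reserve: ~2765952 MB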
Using LUNs (especially with SnapDrive) changes the rules because of the 2x overhead it enforces. This has actually changed with something called fractional reserve (SnapDrive supports fractional reserve with SD 4.0 and ONTAP 7.1).
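Very roughly – ignoring the snapshot data itself and assuming a fully space-reserved LUN – the effect of fractional reserve on how much volume space a LUN ties up looks something like this (the function and the example sizes are purely illustrative):

def lun_space_gb(lun_size_gb, fractional_reserve_pct=100):
    # 100% fractional reserve is the old "2x" behaviour; lower values
    # reserve proportionally less overwrite space.
    overwrite_reserve = lun_size_gb * fractional_reserve_pct / 100.0
    return lun_size_gb + overwrite_reserve

print(lun_space_gb(500))                             # 1000.0 GB – the 2x case
print(lun_space_gb(500, fractional_reserve_pct=40))  # 700.0 GB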
The reason for all of the overhead: data protection.
NetApp doesn't sell space – they sell insurance. They sell protection from data loss FIRST, then performance/manageability, and lastly space.
I would say that NetApp has no more overhead than any other vendor with snapshot-like capabilities (and probably less, given that everyone else has a copy-on-write implementation thus far).
Glenn
From: owner-toasters@mathworks.com [mailto:owner-toasters@mathworks.com] On Behalf Of Glenn Dekhayser
Sent: Wednesday, July 26, 2006 11:11 PM
To: margesimpson@hushmail.com
Cc: toasters@mathworks.com
Subject: RE: Storage space overheads!
The most overused sentence: "It depends".
When I design NetApp solutions, I work backwards from how much usable storage I'm going to need for the next 12-18 months. Then I also determine what kind of aggregate IOPS I'm going to need from my disks.
From that, it's a fairly simple calculation. OK, so it's not; I have a complicated Excel spreadsheet that I worked on for about a month before I was confident the answers were close enough to use in designs. But before you start putting everything together capacity-wise, you should have a good idea of how many disks you're going to need to satisfy your performance requirements.
There's the obvious parity or dual-parity overhead, and the hot spare (or multiple hot spares, depending on how many disks you've got in the system).
There's the snapshot reserve for NAS volumes (20% by default; you may need more or less), but that reserve depends highly on the amount of changes and deletes you have in a given volume. If you are using LUNs with snapshots, you need to multiply the size of the LUNs by 2.2 (2x for the overwrite reserve and another 20% for the internal data change rate inside a given LUN; again, adjust for your own environment).
There's the WAFL RAID overhead; I've never gotten a really good feel for that, but let's call it 5% (anyone care to amend that?).
Now also keep in mind that WAFL volumes don't really like to be more than 90% full because of the way they lay down data; I like to keep mine at 80% or lower.
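To make that concrete, here's a crude back-of-envelope version of the "work backwards" calculation in Python. It's nowhere near the full spreadsheet, every number in it (20% snap reserve, 5% WAFL, 80% fill, two hot spares, the 2.2x LUN multiplier) is just the ballpark figure from this thread, and it ignores the IOPS side entirely:

import math

def disks_needed(usable_gb, right_sized_gb,
                 data_per_rg=14, parity_per_rg=2, hot_spares=2,
                 snap_reserve=0.20, wafl_overhead=0.05, max_fill=0.80,
                 lun_multiplier=1.0):
    # For LUNs with snapshots you'd typically set lun_multiplier=2.2
    # and drop snap_reserve to 0, per the discussion above.
    needed = usable_gb * lun_multiplier
    raw_data_gb = needed / ((1 - snap_reserve) * (1 - wafl_overhead) * max_fill)
    data_disks = math.ceil(raw_data_gb / right_sized_gb)
    raid_groups = math.ceil(data_disks / data_per_rg)
    return data_disks + raid_groups * parity_per_rg + hot_spares

# e.g. 5 TB usable of NAS data on ~268 GB right-sized ("320GB" ATA) disks
print(disks_needed(5000, 268))   # 39 disks with these assumptions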
That's about all the overheads I can think of. Sound like a lot? Any vendor with snapshot technology is going to have the same issue of reserving loads of space for it, and they will use RAID-5, which costs a net loss of one disk for every 5-8 disks (and will be slow under write load). NetApp goes 14 disks per parity set (in either RAID-4 or RAID-DP). And everyone with hardware RAID has some RAID overhead. Unless you're using RAID-10, but then you're buying twice the disk, aren't you?
Bottom line is that NetApp does require you to invest some of your disks in data and physical availability. It's well worth it, and it's comparable to every other enterprise system out there. If all you need is dumb disk with no overhead and no advanced features, there are plenty of RAID-0 solutions out there that will FC connect.
Glenn (the other one)
From: owner-toasters@mathworks.com on behalf of margesimpson@hushmail.com
Sent: Wed 7/26/2006 9:38 PM
To: toasters@mathworks.com
Subject: Storage space overheads!
Hi all:
Can anyone please give me the total NetApp overheads, including file systems, aggregate reserve, snap reserve, WAFL overhead, parity disks (RAID-DP), etc.?
Say, 10 x 100GB = 1000GB total. What usable space should I finally get after all those overheads? Can anyone give me proper figures/math and a proper breakdown of the above figure?
I heard the NetApp solution has a lot of disk overheads!
Thank you in advance.
Marge.