Hi all: Can anyone please give me the total NetApp overheads, including file system, aggregate reserve, snap reserve, WAFL overhead, parity disks (RAID-DP), etc.? Say 10 x 100 GB = 1000 GB raw; what usable space should I finally get after all those overheads?
Can anyone give me proper figures/math and a proper breakdown of the above?
I heard the NetApp solution has a lot of disk overheads!
Thank you in advance. Marge.
The most overused sentence: "It depends".
When I design NetApp solutions, I work backwards from how much usable storage I'm going to need for the next 12-18 months. Then I also determine what kind of aggregate IOPS I'm going to need from my disks.
From that, it's a fairly simple calculation. OK, so it's not: I have a complicated Excel spreadsheet that I worked on for about a month before I was confident the answers were close enough to use in designs. But before you start putting everything together capacity-wise, you should have a good idea of how many disks you're going to need to satisfy your performance requirements.
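That "work backwards from usable capacity and IOPS" step can be sketched roughly like this. All the figures (IOPS per spindle, usable fraction after overheads) are illustrative assumptions for this sketch, not NetApp specifications:

```python
import math

def disks_needed(usable_gb, iops_required,
                 disk_size_gb=100, iops_per_disk=150,
                 usable_fraction=0.6):
    """Rough disk count: take the larger of the capacity-driven
    and the performance-driven requirements.  The per-disk IOPS
    and the usable fraction are assumed values -- tune them to
    your own spindles and overhead estimates."""
    for_capacity = math.ceil(usable_gb / (disk_size_gb * usable_fraction))
    for_iops = math.ceil(iops_required / iops_per_disk)
    return max(for_capacity, for_iops)

# e.g. 500 GB usable at 2000 IOPS, with the assumptions above:
print(disks_needed(500, 2000))  # -> 14 (performance-bound, not capacity-bound)
```

Note that in this example the answer is driven by IOPS, not capacity, which is exactly why you size for performance before you start adding up gigabytes.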
There's the obvious parity or dual-parity overhead, and the hot spare (or multiple hot spares, depending on how many disks you've got in the system).
There's the snapshot reserve for NAS volumes (20% by default; you may need more or less), but that reserve depends highly on the amount of changes and deletes you have in a given volume. If you are using LUNs with snapshots, you need to multiply the size of the LUNs by 2.2 (2x for the overwrite reserve and another 20% for the internal data change rate inside a given LUN; again, adjust for your own environment).
There's the WAFL RAID overhead; I've never gotten a really good feel for that, but let's call it 5% (anyone care to amend that?).
Now also keep in mind that WAFL volumes don't really like to be more than 90% full because of the way they lay down data; I like to keep mine at 80% or lower.
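Putting the overheads above together for Marge's 10 x 100 GB example gives a back-of-the-envelope answer. The 10% WAFL/right-sizing figure, the single spare, and the 80% fill target are assumptions for this sketch, not exact NetApp numbers:

```python
def usable_gb(disks=10, disk_gb=100, spares=1, parity=2,
              wafl_overhead=0.10, snap_reserve=0.20, fill_target=0.80):
    """Rough usable space: subtract spares and RAID-DP parity disks,
    then WAFL/right-sizing overhead, then the default 20% snap
    reserve, then hold back to an 80% fill target.  Every knob here
    is an assumption to adjust for your own environment."""
    data_disks = disks - spares - parity          # 10 - 1 - 2 = 7
    raw = data_disks * disk_gb                    # 700 GB
    after_wafl = raw * (1 - wafl_overhead)        # 630 GB
    after_snap = after_wafl * (1 - snap_reserve)  # 504 GB
    return after_snap * fill_target               # 403.2 GB

print(round(usable_gb(), 1))  # -> 403.2
```

So with those assumptions, roughly 400 GB of comfortably usable space out of 1000 GB raw, which is why the overheads feel large until you compare them with what other vendors' snapshot and RAID reserves cost.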
That's about all the overheads I can think of. Sound like a lot? Any vendor with snapshot technology is going to have the same issue of reserving loads of space for it, and they will use RAID-5, which costs you a net loss of one disk in every 5-8 (and is slow under write load). NetApp goes 14 disks per parity set (in either RAID-4 or RAID-DP). And everyone with a hardware RAID has some RAID overhead. Unless you're using RAID-10, but then you're buying twice the disk, aren't you?
Bottom line is that NetApp does require you to invest some of your disk in data and physical availability. It's well worth it, and it's comparable to every other enterprise system out there. If all you need is dumb disk with no overhead and no advanced features, there are plenty of RAID-0 solutions out there that will FC connect.
Glenn (the other one)
________________________________
From: owner-toasters@mathworks.com on behalf of margesimpson@hushmail.com
Sent: Wed 7/26/2006 9:38 PM
To: toasters@mathworks.com
Subject: Storage space overheads!
On Wed, Jul 26, 2006 at 11:11:28PM -0400, Glenn Dekhayser wrote:
> The most overused sentence: "It depends".
>
> There's the WAFL RAID overhead; I've never gotten a real good feel for that but let's call it 5%- (anyone care to amend that?)
The RAID itself doesn't give any extra overhead beyond the parity disks.
WAFL of course has some overhead like any other filesystem depending on the number and size of files and directories.
In my experience (many small files, about 11k on average, e.g. an HTML store), the dominant figure is the block size of 4k. The EMC NAS servers used 16k (i.e. significantly more waste); Solaris UFS used 8k but allows 1k fragments (i.e. a bit more efficient than WAFL).
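The block-size waste for that ~11k average file can be worked out directly (a sketch; the 11 KiB figure and the per-filesystem block sizes are the ones from this post):

```python
import math

def allocated(size_bytes, block, fragment=None):
    """Bytes a single file actually consumes: whole blocks, with the
    tail stored in smaller fragments when the filesystem supports
    them (as Solaris UFS does)."""
    if fragment:
        full = (size_bytes // block) * block
        tail = size_bytes - full
        return full + math.ceil(tail / fragment) * fragment
    return math.ceil(size_bytes / block) * block

size = 11 * 1024  # the ~11k average file from the post
for name, block, frag in [("WAFL 4k", 4096, None),
                          ("EMC 16k", 16384, None),
                          ("UFS 8k/1k", 8192, 1024)]:
    print(name, allocated(size, block, frag) - size, "bytes wasted")
```

With these numbers the 16k block wastes 5120 bytes per file against 1024 for WAFL's 4k blocks, while UFS's 1k fragments happen to fit an 11 KiB file exactly; multiplied across millions of small files, that is the difference the post is describing.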
If you use it for large database files there is probably no significant difference.
Greetings,