Actually, the aggregate snap reserve is by default only 5%, not 10%.
I have never seen a volume go over 100% usage; in fact, the filer yells at you and could take the volume offline if it can't write data. Are you sure you're not talking about consumption of the snap reserve, which CAN go over 100%?
Glenn
"It's worth remembering that ONTAP already imposes a 10% reserve on the filing system size (including snapshot reserve); or on the sum of sizes, for flexible volumes in an aggregate. A "100% full" tradvol/aggregate is actually only using 90% of the blocks on the discs. Or looked at another way, the difference in congestion between "90% full" and "100% full" is 2:1, not infinity:1."
"Blake Golliher" thelastman@gmail.com writes:
A filer can be written to over 100% space utilization; it'll just keep growing. I've seen filers go up to 107% space utilization before.
and "George, Andrew" georgea@anz.com replies:
Interesting. We run a fair few filers in the 97%-99% arena, mainly as CIFS NAS devices.
Every time I've seen it hit 100% (0 bytes available), CIFS has refused to save anything.
That's my experience, with NFS. I would be interested (from a theoretical point of view!) to know how Blake gets the space utilisation that high. (Snapshot usage vs reserve as shown by "df" can go over 100%, of course.)
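(For anyone following along at home, here's why that snapshot line of "df" can read over 100%: each line is reported against its own denominator. A tiny illustration in plain Python with made-up figures -- not anything the filer itself runs:)

    # Hypothetical volume: 100 GB of usable space plus a 20 GB snapshot reserve.
    # Data usage is capped at 100% because the filer refuses further writes,
    # but snapshot blocks beyond the reserve push that line past 100%.
    vol_size_kb  = 100 * 1024 * 1024   # usable space (made-up figure)
    snap_reserve =  20 * 1024 * 1024   # snapshot reserve (made-up figure)
    data_used    =  95 * 1024 * 1024
    snap_used    =  30 * 1024 * 1024   # snapshots have outgrown their reserve

    print("data     %3d%%" % (100 * data_used // vol_size_kb))    # 95%
    print("snapshot %3d%%" % (100 * snap_used // snap_reserve))   # 150%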
Actually, the aggregate snap reserve is by default only 5%, not 10%.
Somewhat tangential:
Does anyone here use aggregate snap reserves? I was under the impression that it was only useful for sync mirror setups... So I usually just shut it off to take advantage of the extra space.
Regards, Max
Max wrote:
Does anyone here use aggregate snap reserves? I was under the impression that it was only useful for sync mirror setups... So I usually just shut it off to take advantage of the extra space.
We set aggr snap reserves to 0% as you said; I believe they are only needed for sync mirror and otherwise can be disabled.
-skottie
"Glenn Dekhayser" gdekhayser@voyantinc.com writes:
Actually, the aggregate snap reserve is by default only 5%, not 10%.
... which is nothing to do with the 10% "reserve" I was talking about, which is at a lower level. See below.
"Max" slinkywizard@integraonline.com adds:
Somewhat tangential:
Does anyone here use aggregate snap reserves? I was under the impression that it was only useful for sync mirror setups... So I usually just shut it off to take advantage of the extra space.
Whatever NetApp say, I've never seen the point of reserving space for aggregate snapshots if you don't _use_ aggregate snapshots for anything. We don't, so
    snap reserve -A [aggrname] 0
    snap sched -A [aggrname] 0
However, for pedagogic purposes, I've set it back to 5% temporarily for the example below. :-)
carina> df -A main
Aggregate               kbytes       used      avail capacity
main                 595174120  575814948   19359172      97%
main/.snapshot        31324952          0   31324952       0%
That's a total of 626499072 KB, _including_ the snapshot reserve.
carina> aggr status main -r
Aggregate main (online, raid_dp) (block checksums)
  Plex /main/plex0 (online, normal, active)
    RAID group /main/plex0/rg0 (normal)

      RAID Disk Device  HA  SHELF BAY CHAN Pool Type  RPM  Used (MB/blks)    Phys (MB/blks)
      --------- ------  ------------- ---- ---- ---- ----- --------------    --------------
      dparity   8b.24   8b    1   8   FC:A   -  FCAL 10000 68000/139264000   68552/140395088
      parity    8a.27   8a    1   11  FC:B   -  FCAL 10000 68000/139264000   68552/140395088
      data      8b.28   8b    1   12  FC:A   -  FCAL 10000 68000/139264000   68552/140395088
      data      8a.29   8a    1   13  FC:B   -  FCAL 10000 68000/139264000   68552/140395088
      data      8a.25   8a    1   9   FC:B   -  FCAL 10000 68000/139264000   68552/140395088
      data      8b.22   8b    1   6   FC:A   -  FCAL 10000 68000/139264000   68552/140395088
      data      8b.16   8b    1   0   FC:A   -  FCAL 10000 68000/139264000   68552/140395088
      data      8a.17   8a    1   1   FC:B   -  FCAL 10000 68000/139264000   68552/140395088
      data      8b.18   8b    1   2   FC:A   -  FCAL 10000 68000/139264000   68552/140395088
      data      8a.19   8a    1   3   FC:B   -  FCAL 10000 68000/139264000   68552/140395088
      data      8a.21   8a    1   5   FC:B   -  FCAL 10000 68000/139264000   68552/140395088
      data      8a.23   8a    1   7   FC:B   -  FCAL 10000 68000/139264000   68552/140395088
After the right-sizing of the discs to 68000 MB = 69632000 KB, and taking off 20.5 MB = 20992 KB (the reserved area at the start of each disc; it's been that ever since NetApp was hatched from the cosmic egg), that's a total of 10 (data discs) x 69611008 KB (per disc) = 696110080 KB.
The space actually made available in the aggregate to flexible volumes, their snapshots, and aggregate snapshots, old Uncle Tom Cobley and all, is 90% of that, i.e. 696110080 * 0.9 = 626499072 KB.
[It would have made a better lesson if 90% of 10 data discs hadn't equalled exactly 9 data discs! Sorry about that, you'll just have to count them carefully ...]
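(The same sum in a few lines of Python, for anyone who wants to check the arithmetic against their own aggregates; every figure below is taken from the output above, nothing new:)

    # Redo the arithmetic from the aggr status / df -A output above.
    right_sized_mb = 68000           # per-disc right-sized capacity (aggr status)
    reserved_kb    = 20992           # the 20.5 MB reserved at the start of each disc
    data_discs     = 10              # dparity and parity don't count

    per_disc_kb  = right_sized_mb * 1024 - reserved_kb   # 69611008
    raw_total_kb = data_discs * per_disc_kb              # 696110080
    usable_kb    = raw_total_kb * 9 // 10                # the hidden 10% reserve

    print(usable_kb)                 # 626499072
    print(595174120 + 31324952)      # 626499072: the df -A total, snap reserve included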
Exactly the same (hidden) reserve applies in a traditional volume.
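(And going back to the 2:1 figure in the passage quoted at the top: on my reading of it -- taking "congestion" as the inverse of the fraction of raw blocks still free -- the sum works out like this, again in plain Python arithmetic rather than anything ONTAP-specific:)

    # Illustrative arithmetic only: free raw blocks at a reported "90% full"
    # versus "100% full", given the hidden 10% reserve described above.
    raw = 1.0                            # all raw blocks, normalised
    visible = 0.9 * raw                  # what the reported percentage is measured against
    free_at_90  = raw - 0.90 * visible   # 1 - 0.81 = 0.19
    free_at_100 = raw - 1.00 * visible   # 1 - 0.90 = 0.10
    print(free_at_90 / free_at_100)      # ~1.9, i.e. roughly 2:1 rather than infinity:1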
[Oh, and to you guys over in the "Aggregate size question" thread, stop quoting all your figures in GB and to 2+ sig figs, as it's impossible to make anything add up if you do that. Real Programmers aren't scared of 10-figure numbers :-) ]