Agreed, the thresholds definitely depend on the workload. I've got one banking customer with a database environment pushing 400 MB/sec of redo logging where every microsecond of latency counts. They keep aggregate capacity capped at 85% to avoid problems.
On the other hand, at my prior employer, a well-known database and application company located somewhere near Palo Alto, we had NetApp systems with no problems even at 98% capacity, because the workloads were almost entirely random reads.
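For illustration, here is a minimal Python sketch of the kind of workload-dependent fill check being described; the threshold values and all names are hypothetical, not an ONTAP API.

```python
# Sketch: pick an aggregate fill-warning threshold based on the workload
# profile -- roughly 85% for latency-sensitive, write-heavy work (e.g.
# streaming redo logs), higher for mostly-random-read aggregates.
# Thresholds are illustrative assumptions, not vendor guidance.

THRESHOLDS = {
    "write_heavy_low_latency": 85.0,   # every microsecond of write latency counts
    "mostly_random_read": 95.0,        # tolerates a much fuller aggregate
    "default": 90.0,
}

def fill_warning(used_bytes: int, total_bytes: int, workload: str = "default") -> bool:
    """Return True when the aggregate is past the threshold for its workload."""
    pct_used = 100.0 * used_bytes / total_bytes
    limit = THRESHOLDS.get(workload, THRESHOLDS["default"])
    return pct_used >= limit

if __name__ == "__main__":
    tib = 1024 ** 4
    # Example: a 100 TiB aggregate with 88 TiB used.
    print(fill_warning(88 * tib, 100 * tib, "write_heavy_low_latency"))  # True
    print(fill_warning(88 * tib, 100 * tib, "mostly_random_read"))       # False
```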
-----Original Message-----
From: toasters-bounces@teaparty.net [mailto:toasters-bounces@teaparty.net] On Behalf Of Michael Bergman
Sent: Friday, November 06, 2015 12:06 AM
To: Toasters
Subject: Re: Completely filling an aggregate
Jeffrey Steiner wrote:
> The rule I usually use is this:
> [...]
>
> 4) You'll probably start seeing slowdowns as you approach 95%.
That threshold depends on the workload: the level of random overwrites, whether you're using dedup, and a bunch of other things.
Basically: what is the workload doing to your free space in the Aggr, and can free_space_realloc [on | no_redirect] hold it nice and clean? If not...
More often than not you'll see slowdown, especially for writes (higher latency and/or latency spikes), long before you reach 95%. More like somewhere above 85%, I'd say.
Sure, I have a very heavy, nasty NFSv3 workload here that most people will probably never see, but going above 90% here is a really, really bad idea.
You need to aim for around 80% used at most. Trying to do a reallocate -A with less free space than that leaves in an Aggr isn't pleasant, trust me.
/M
_______________________________________________
Toasters mailing list
Toasters@teaparty.net
http://www.teaparty.net/mailman/listinfo/toasters
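As a rough sketch of the headroom check Michael describes above (aim for about 80% used at most before attempting an aggregate-level reallocate -A), the following Python snippet shows the idea; the function and variable names are illustrative assumptions, not an ONTAP interface.

```python
# Sketch: decide whether an aggregate has enough free space to make a
# reallocate -A reasonable, per the ~80%-used ceiling suggested above.

MAX_USED_PCT = 80.0  # suggested ceiling; adjust to your own environment

def safe_to_reallocate(used_bytes: int, total_bytes: int) -> bool:
    """Return True if the aggregate is at or below the suggested ceiling."""
    pct_used = 100.0 * used_bytes / total_bytes
    return pct_used <= MAX_USED_PCT

if __name__ == "__main__":
    tib = 1024 ** 4
    for used_tib in (75, 85):
        ok = safe_to_reallocate(used_tib * tib, 100 * tib)
        print(f"{used_tib}% used -> reallocate -A advisable: {ok}")
```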