Hey guys,
In Ontap 8, we used to route this message to a group of operators, who would move volumes around when aggregates started getting full.
I don't see the associated message in the Ontap 9 message catalog, or anything that looks similar.
Any ideas?
Mike> In Ontap 8, we used to route this message to a group of
Mike> operators, who would move volumes around when aggregates started
Mike> getting full.
I assume you got this message from trolling the message log?
Mike> I don't see the associated message in the Ontap 9 message
Mike> catalog, or anything that looks similar.
I'd probably just script something out that does a show aggr and then parses the numbers and alerts on them. 'dashboard storage show' might also be something to look at, but much harder to parse and work with.
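Something like the rough sketch below is what I have in mind -- completely untested, the hostname and addresses are placeholders, and the 'storage aggregate show -fields aggregate,percent-used' syntax and its output format are assumptions you'd want to verify on your ONTAP 9 release first:

#!/usr/bin/env python3
# Rough sketch only -- untested.  Assumes SSH key auth to the cluster mgmt LIF
# and that 'storage aggregate show -fields aggregate,percent-used' is valid on
# your release (field names are an assumption, check them first).
import smtplib
import subprocess
from email.message import EmailMessage

CLUSTER = "cluster1.example.com"        # placeholder cluster mgmt hostname
THRESHOLD = 85                          # alert once an aggregate is this % full
ALERT_TO = "storage-ops@example.com"    # placeholder address

def aggregate_usage():
    """Return {aggregate_name: percent_used} parsed from the CLI output."""
    out = subprocess.run(
        ["ssh", "admin@" + CLUSTER,
         "storage aggregate show -fields aggregate,percent-used"],
        capture_output=True, text=True, check=True).stdout
    usage = {}
    for line in out.splitlines():
        parts = line.split()
        # Data rows look like "aggr_name  85%"; skip headers, separators, summary.
        if len(parts) == 2 and parts[1].endswith("%"):
            usage[parts[0]] = int(parts[1].rstrip("%"))
    return usage

def send_alert(body):
    msg = EmailMessage()
    msg["Subject"] = "Aggregate(s) nearly full"
    msg["From"] = "filer-monitor@example.com"   # placeholder
    msg["To"] = ALERT_TO
    msg.set_content(body)
    with smtplib.SMTP("localhost") as s:        # assumes a local mail relay
        s.send_message(msg)

if __name__ == "__main__":
    full = {a: p for a, p in aggregate_usage().items() if p >= THRESHOLD}
    if full:
        send_alert("\n".join(f"{a}: {p}% used" for a, p in sorted(full.items())))

Run it from cron every 15 minutes or so and it nags you whenever anything crosses the threshold.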
Do you run with a bunch of small aggregates, or much fewer, larger aggregates? I personally like monster aggregates, so I don't have to move volumes very often if at all.
But I do have some smaller dedicated aggregates for some Oracle DBs which need the dedicated IOPs. This is still on a 7-mode 8.x system which is running out of steam...
John
On Sat, Jun 9, 2018 at 7:41 AM, John Stoffel john@stoffel.org wrote:
Mike> In Ontap 8, we used to route this message to a group of
Mike> operators, who would move volumes around when aggregates started
Mike> getting full.
I assume you got this message from trolling the message log?
Yeah, that message has been around since the GX days, I believe. Sucks that it's gone in Ontap 9.
Mike> I don't see the associated message in the Ontap 9 message
Mike> catalog, or anything that looks similar.
I'd probably just script something out that does a show aggr and then parses the numbers and alerts on them. 'dashboard storage show' might also be something to look at, but much harder to parse and work with.
Yeah, looks like we'll have to hack something together. It would be great if the filer could tell you when it's running low on space, though; that seems like a pretty fundamental metric to be able to alert on.
Do you run with a bunch of small aggregates, or much fewer, larger aggregates? I personally like monster aggregates, so I don't have to move volumes very often if at all.
We thin provision all of our volumes, as it's difficult to forecast which ones will get large, and we don't want to have a lot of empty disk space wasted.
This means that aggregate usage is fairly constantly rising, so once an aggr gets to around 90% full, we start looking to vol move stuff elsewhere.
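For picking what to move, we've been toying with something along these lines -- very rough and untested, it only prints candidate 'volume move start' commands rather than running anything, and the cluster name, destination aggregate, and the 'volume show -fields vserver,volume,used' syntax are all assumptions to check against your release:

#!/usr/bin/env python3
# Rough, untested sketch: list the volumes on an over-full aggregate and print
# candidate 'volume move start' commands (it does NOT run them).
# Assumes SSH key auth; field names and output format are assumptions.
import subprocess
import sys

CLUSTER = "cluster1.example.com"    # placeholder
DEST_AGGR = "aggr_spare"            # placeholder target aggregate with headroom

def volumes_on(aggr):
    """Return (vserver, volume, used) tuples for every volume on the aggregate."""
    out = subprocess.run(
        ["ssh", "admin@" + CLUSTER,
         f"volume show -aggregate {aggr} -fields vserver,volume,used"],
        capture_output=True, text=True, check=True).stdout
    vols = []
    for line in out.splitlines():
        parts = line.split()
        # Data rows look like "svm1 vol_web 1.2TB"; skip headers and separators.
        if len(parts) == 3 and parts[2][0].isdigit():
            vols.append((parts[0], parts[1], parts[2]))
    return vols

if __name__ == "__main__":
    if len(sys.argv) != 2:
        sys.exit("usage: vol_move_candidates.py <aggregate>")
    full_aggr = sys.argv[1]
    for vserver, volume, used in volumes_on(full_aggr):
        print(f"# {volume} ({vserver}) uses {used} on {full_aggr}")
        print(f"volume move start -vserver {vserver} -volume {volume} "
              f"-destination-aggregate {DEST_AGGR}")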
But I do have some smaller dedicated aggregates for some Oracle DBs which need the dedicated IOPs. This is still on a 7-mode 8.x system which is running out of steam...
John
Ah, looks like aggr-nearly-full messages are now lumped in with volume-nearly-full messages in Ontap 9, e.g.:
Message: monitor.volume.nearlyFull: Aggregate prod4a is nearly full (using or reserving 95% of space and 0% of inodes).
So I guess they are in there after all - that sorts me out.
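In the meantime, a quick-and-dirty check can also just grep the EMS log for that message name. Rough sketch below, untested, with the cluster name as a placeholder:

#!/usr/bin/env python3
# Untested sketch: pull the EMS log over SSH and exit non-zero if any
# nearly-full events are present, so a cron or Nagios-style wrapper can alert.
import subprocess
import sys

CLUSTER = "cluster1.example.com"    # placeholder

out = subprocess.run(
    ["ssh", "admin@" + CLUSTER, "event log show"],
    capture_output=True, text=True, check=True).stdout

hits = [line for line in out.splitlines() if "monitor.volume.nearlyFull" in line]

if hits:
    print("\n".join(hits))
    sys.exit(1)    # non-zero so the wrapper knows to alert
sys.exit(0)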
On Mon, Jun 11, 2018 at 9:59 AM, Mike Thompson mike.thompson@gmail.com wrote:
On Sat, Jun 9, 2018 at 7:41 AM, John Stoffel john@stoffel.org wrote:
Mike> In Ontap 8, we used to route this message to a group of
Mike> operators, who would move volumes around when aggregates started
Mike> getting full.
I assume you got this message from trolling the message log?
Yeah, that message has been around since the GX days, I believe. Sucks that it's gone in Ontap 9.
Mike> I don't see the associated message in the Ontap 9 message
Mike> catalog, or anything that looks similar.
I'd probably just script something out that does a show aggr and then parses the numbers and alerts on them. 'dashboard storage show' might also be something to look at, but much harder to parse and work with.
Yeah, looks like we'll have to hack something together. It would be great if the filer could tell you when it's running low on space, though; that seems like a pretty fundamental metric to be able to alert on.
Do you run with a bunch of small aggregates, or much fewer, larger aggregates? I personally like monster aggregates, so I don't have to move volumes very often if at all.
We thin provision all of our volumes, as it's difficult to forecast which ones will get large, and we don't want to have a lot of empty disk space wasted.
This means that aggregate usage is fairly constantly rising, so once an aggr gets to around 90% full, we start looking to vol move stuff elsewhere.
But I do have some smaller dedicated aggregates for some Oracle DBs which need the dedicated IOPs. This is still on a 7-mode 8.x system which is running out of steam...
John
I would recommend using OnCommand Unified Manager (OCUM) for capacity monitoring, since NetApp removed most capacity EMS events. OCUM has all the needed events, such as "Aggregate Space Nearly Full", "Aggregate Days Until Full", "Aggregate Growth Rate Abnormal", etc. (and the same for volumes, of course), and lets you report those via email, SNMP, etc. You can also use the latest version, which has gained a rather nice GUI over time, and still monitor ONTAP systems back to 8.2 if I remember correctly.
As far as I know it's "free of charge" if you own an ONTAP system and no extra license is needed.
If you really only monitor this EMS event to check for available space on your aggrs, you might react too late if there is too much growth...
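Just to illustrate the growth-rate point with made-up numbers, a linear "days until full" estimate is simply the remaining space divided by the daily growth, which is roughly the idea behind the OCUM "Aggregate Days Until Full" event:

# Toy illustration with invented numbers: estimate days until an aggregate
# fills, based on two usage samples taken a week apart.
capacity_tb = 100.0
used_day_0  = 80.0                                     # TB used at first sample
used_day_7  = 84.0                                     # TB used one week later
growth_per_day  = (used_day_7 - used_day_0) / 7        # ~0.57 TB/day
days_until_full = (capacity_tb - used_day_7) / growth_per_day
print(f"Roughly {days_until_full:.0f} days until full")   # ~28 days

An aggregate sitting at 84% can be perfectly fine or already an emergency, depending entirely on that growth rate.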
Oliver Gill
Junior System Engineer
Advanced UniByte
oliver.gill@au.de
________________________________
From: toasters-bounces@teaparty.net [toasters-bounces@teaparty.net] on behalf of Mike Thompson [mike.thompson@gmail.com]
Sent: Monday, 11 June 2018 20:42
To: John Stoffel
Cc: toasters@teaparty.net Lists
Subject: Re: mgmtgwd.aggregate.used.rising
Ah, looks like aggr-nearly-full messages are now lumped in with volume-nearly-full messages in Ontap 9, e.g.:
Message: monitor.volume.nearlyFull: Aggregate prod4a is nearly full (using or reserving 95% of space and 0% of inodes).
So I guess they are in there after all - that sorts me out.
On Mon, Jun 11, 2018 at 9:59 AM, Mike Thompson <mike.thompson@gmail.com> wrote:
On Sat, Jun 9, 2018 at 7:41 AM, John Stoffel <john@stoffel.org> wrote:
Mike> In Ontap 8, we used to route this message to a group of
Mike> operators, who would move volumes around when aggregates started
Mike> getting full.
I assume you got this message from trolling the message log?
Yeah, that message has been around since the GX days, I believe. Sucks that it's gone in Ontap 9.
Mike> I don't see the associated message in the Ontap 9 message
Mike> catalog, or anything that looks similar.
I'd probably just script something out that does a show aggr and then parses the numbers and alerts on them. 'dashboard storage show' might also be something to look at, but much harder to parse and work with.
Yeah, looks like we'll have to hack something together. It would be great if the filer could tell you when it's running low on space, though; that seems like a pretty fundamental metric to be able to alert on.
Do you run with a bunch of small aggregates, or much fewer, larger aggregates? I personally like monster aggregates, so I don't have to move volumes very often if at all.
We thin provision all of our volumes, as it's difficult to forecast which ones will get large, and we don't want to have a lot of empty disk space wasted.
This means that aggregate usage is fairly constantly rising, so once an aggr gets to around 90% full, we start looking to vol move stuff elsewhere.
But I do have some smaller dedicated aggregates for some Oracle DBs which need the dedicated IOPs. This is still on a 7-mode 8.x system which is running out of steam...
John
You guys are aware of the Aggregate Auto Balance functionality?
It seems that it would do automatically what you've been doing manually:
autobalance aggregate config modify
autobalance aggregate show-aggregate-state
autobalance aggregate show-unbalanced-volume-state
cluster1::*> autobalance aggregate show-unbalanced-volume-state -instance
Node Name: cluster-1-01
DSID of the Last Volume Queried: 1025
Aggregate: aggr_1
Name of the Volume: ro10
Last Time Threshold Crossed: 3/12/2014 16:20:18
Last Time Volume Was Moved: 3/11/2014 10:16:04
Is Volume Currently Moving: false
Is Volume Quiesced: false
Total Size of the Volume: 20.20MB
Volume's Attributes: Over IOPS Threshold Stabilizing
Last Time Volume State Was Checked: 3/13/2014 08:20:18

Node Name: cluster-1-01
DSID of the Last Volume Queried: 1026
Aggregate: aggr_1
Name of the Volume: test
Last Time Threshold Crossed: 3/12/2014 16:20:18
Last Time Volume Was Moved: 3/11/2014 10:16:42
Is Volume Currently Moving: false
Is Volume Quiesced: false
Total Size of the Volume: 20.20MB
Volume's Attributes: Over IOPS Threshold In Mirror Stabilizing
Last Time Volume State Was Checked: 3/13/2014 08:20:18
At the diagnostic level, there are additional modifiable parameters.

cluster1::*> autobalance aggregate config show
Is the Auto Balance Aggregate Feature Enabled: false
Mode of the Auto Balance Aggregate Feature: recommend
Polling Interval: 3600
Threshold When Aggregate Is Considered Unbalanced (%): 70
Threshold When Aggregate Is Considered Balanced (%): 40
Volume Operations Threshold (IOPS): 100
Volume Operations Threshold Not Exceeded for Duration: 24
Volume Not Moved Again for Duration: 48
And on the volume:
volume modify - Modify volume attributes...
[ -is-autobalance-eligible {true|false} ] - Is Eligible for Auto Balance Aggregate (privilege: advanced)
If the Auto Balance feature is enabled, this parameter specifies whether the volume might be considered for system workload balancing. When set to true, the Auto Balance Aggregate feature might recommend moving this volume to another aggregate. The default value is true.
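So if there is a volume you never want moved automatically, it looks like you can opt it out per volume, something like this (vserver and volume names here are only placeholders):

volume modify -vserver svm1 -volume vol_oracle -is-autobalance-eligible false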
Just a suggestion...
Sebastian
On 18/06/09 4:41 PM, John Stoffel wrote:
Mike> In Ontap 8, we used to route this message to a group of
Mike> operators, who would move volumes around when aggregates started
Mike> getting full.
I assume you got this message from trolling the message log?
Mike> I don't see the associated message in the Ontap 9 message
Mike> catalog, or anything that looks similar.
I'd probably just script something out that does a show aggr and then parses the numbers and alerts on them. 'dashboard storage show' might also be something to look at, but much harder to parse and work with.
Do you run with a bunch of small aggregates, or much fewer, larger aggregates? I personally like monster aggregates, so I don't have to move volumes very often if at all.
But I do have some smaller dedicated aggregates for some Oracle DBs which need the dedicated IOPs. This is still on a 7-mode 8.x system which is running out of steam...
John
Yeah, the aggregate auto-balance is something we looked at, and may fool with. In this case I'm also particularly interested in just knowing when an aggregate is getting full.
We likely will end up using OCUM a bit more extensively, since all these alerts are more configurable there, vs. in the filers themselves.
On Wed, Jun 13, 2018 at 7:54 AM, Sebastian Goetze spgoetze@gmail.com wrote:
You guys are aware of the Aggregate Auto Balance functionality?
It seems that it would do automatically what you've been doing manually:
autobalance aggregate config modify
autobalance aggregate show-aggregate-state
autobalance aggregate show-unbalanced-volume-state
cluster1::*> autobalance aggregate show-unbalanced-volume-state -instance
Node Name: cluster-1-01
DSID of the Last Volume Queried: 1025
Aggregate: aggr_1
Name of the Volume: ro10
Last Time Threshold Crossed: 3/12/2014 16:20:18
Last Time Volume Was Moved: 3/11/2014 10:16:04
Is Volume Currently Moving: false
Is Volume Quiesced: false
Total Size of the Volume: 20.20MB
Volume's Attributes: Over IOPS Threshold Stabilizing
Last Time Volume State Was Checked: 3/13/2014 08:20:18

Node Name: cluster-1-01
DSID of the Last Volume Queried: 1026
Aggregate: aggr_1
Name of the Volume: test
Last Time Threshold Crossed: 3/12/2014 16:20:18
Last Time Volume Was Moved: 3/11/2014 10:16:42
Is Volume Currently Moving: false
Is Volume Quiesced: false
Total Size of the Volume: 20.20MB
Volume's Attributes: Over IOPS Threshold In Mirror Stabilizing
Last Time Volume State Was Checked: 3/13/2014 08:20:18
At the diagnostic level, there are additional modifiable parameters.

cluster1::*> autobalance aggregate config show
Is the Auto Balance Aggregate Feature Enabled: false
Mode of the Auto Balance Aggregate Feature: recommend
Polling Interval: 3600
Threshold When Aggregate Is Considered Unbalanced (%): 70
Threshold When Aggregate Is Considered Balanced (%): 40
Volume Operations Threshold (IOPS): 100
Volume Operations Threshold Not Exceeded for Duration: 24
Volume Not Moved Again for Duration: 48
And on the volume:
volume modify - Modify volume attributes...
[ -is-autobalance-eligible {true|false} ] - Is Eligible for Auto Balance Aggregate (privilege: advanced)
If the Auto Balance feature is enabled, this parameter specifies whether the volume might be considered for system workload balancing. When set to true, the Auto Balance Aggregate feature might recommend moving this volume to another aggregate. The default value is true.
Just a suggestion...
Sebastian
On 18/06/09 4:41 PM, John Stoffel wrote:
Mike> In Ontap 8, we used to route this message to a group of
Mike> operators, who would move volumes around when aggregates started
Mike> getting full.
I assume you got this message from trolling the message log?
Mike> I don't see the associated message in the Ontap 9 message
Mike> catalog, or anything that looks similar.
I'd probably just script something out that does a show aggr and then parses the numbers and alerts on them. 'dashboard storage show' might also be something to look at, but much harder to parse and work with.
Do you run with a bunch of small aggregates, or much fewer, larger aggregates? I personally like monster aggregates, so I don't have to move volumes very often if at all.
But I do have some smaller dedicated aggregates for some Oracle DBs which need the dedicated IOPs. This is still on a 7-mode 8.x system which is running out of steam...
John