Yes, very annoying. We're getting these alerts for node management LIFs, SnapMirror LIFs (built on four physical ports), etc.
We're confident in our redundancy and don't want to change things just to make bogus alert emails stop.
Wondering if there's a way in 9.1 to manage these alerts coming from the nodes. Currently getting inundated with failed login attempt emails as infosec does their vulnerability scans.
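For what it's worth, 9.x routes this through EMS notification filters, so one approach is to clone the filter your email destination uses and add an exclude rule for the login-failure event. A rough sketch (the filter name "important-no-scans" is made up, "important-events" is the usual built-in starting point, and the exact message name should be confirmed first with event log show):

    ::> event log show -message-name *login*
    ::> event filter copy -filter-name important-events -new-filter-name important-no-scans
    ::> event filter rule add -filter-name important-no-scans -type exclude -message-name <login-event-name> -position 1
    ::> event notification modify -id <notification-id> -filter-name important-no-scans

The exclude rule has to sit above the include rules (hence -position 1), otherwise the broader include matches first and the emails keep coming.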
On Apr 20, 2017, at 7:40 AM, TAYLOR DANIEL dantaylor@ntlworld.com wrote:
Hello,
Ever since this system was first set up by a third party, we have been getting these messages for both intercluster LIFs:
Filer: ntap-a
Time: Sun, Apr 16 00:15:05 2017 +0100
Severity: LOG_ALERT
Message: vifmgr.lifs.noredundancy: No redundancy in the failover configuration for 1 LIFs assigned to node “ntap-a”. LIFs:
uk:ntap_a_intercluster
Description: This message occurs when one or more logical interfaces (LIFs) are configured to use a failover policy that implies failover to one or more ports but have no failover targets beyond their home ports. If any affected home port or home node is offline or unavailable, the corresponding LIFs will be operationally down and unable to serve data.
Action: Add additional ports to the broadcast domains or failover groups used by the affected LIFs, or modify each LIF's failover policy to include one or more nodes with available failover targets. For example, the “broadcast-domain-wide”
failover policy will consider all failover targets in a LIF's failover group.
Use the “network interface show -failover” command to review the currently assigned failover targets for each LIF.
Source: vifmgr
Index: 6740328
[Note: This email message is sent using a deprecated event routing mechanism.
For information, search the knowledgebase of the NetApp support web site for "convert existing event configurations in Data ONTAP 9.0."]
This intercluster LIF is part of an ifgrp made up of two VLANned ports, so it is by definition redundant.
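As I understand it, vifmgr counts failover *targets* per LIF rather than physical links, and an ifgrp presents itself as a single target port, so a LIF whose failover group contains only its home ifgrp port can trip this alert even though the underlying links are redundant. You can see exactly what vifmgr counts for that LIF (vserver and LIF names below are taken from the alert above):

    ::> network interface show -failover -vserver uk -lif ntap_a_intercluster

If the "Failover Targets" column lists only the home port, the alert is technically accurate from vifmgr's point of view, however redundant the ifgrp is underneath.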
My question: is this a bug in the error reporting, or is this configuration simply not supported?
We have another cluster configured with plain VLANned ports per node rather than ifgrps, and it doesn't seem to complain. I'm just not sure whether this is a real problem, and if it isn't, whether it's something we can suppress.
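If it does turn out to be cosmetic, one possible suppression route in 9.x is an EMS exclude rule for this specific event (the message name comes from the alert above; the filter name "no-lif-redundancy" is made up, and the filter would need to be based on whatever filter your notification destination already uses, e.g. via event filter copy, so you don't silence everything else):

    ::> event filter copy -filter-name important-events -new-filter-name no-lif-redundancy
    ::> event filter rule add -filter-name no-lif-redundancy -type exclude -message-name vifmgr.lifs.noredundancy -position 1
    ::> event notification modify -id <notification-id> -filter-name no-lif-redundancy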
Running FAS8040, 9.1P2.
Thanks
Dan
Toasters mailing list Toasters@teaparty.net http://www.teaparty.net/mailman/listinfo/toasters