Hello Toasters.
I am receiving the following message from AutoSupport: FCP Partner Path Misconfigured - Host I/O access through a non-primary and non-optimal path was detected.
I have a single cluster acting in single fabric mode, wired up in the following way. Each host has an igroup on each node of the cluster. Each NetApp node is connected to sw1 and sw2, and each host also has a path to sw1 and sw2. Why would the host access LUNs on N2 via partner node N1?
{ n1 }      { n2 }
  | \        / |
  |  \      /  |
  |   \    /   |
  |    \  /    |
  |     \/     |
  |     /\     |
  |    /  \    |
  |   /    \   |
{ s1 }      { s2 }
    \        /
     \      /
     { host }
Hi! By default, ESX accesses LUNs via the first path discovered. In half of the cases, this will be through the partner head, which is non-optimal. The ESX Host Utilities include a script called config_mpath (mainly a wrapper around esxcfg-mpath) that sets the paths correctly. I recommend you download the EHU, read the docs, and install it on each of your ESX hosts. http://now.netapp.com/NOW/download/software/sanhost_esx/ESX/
Share and enjoy!
Peter
________________________________
From: Linux Admin [mailto:sysadmin.linux@gmail.com] Sent: Wednesday, March 04, 2009 11:38 AM To: NetApp Toasters List Subject: FCP Partner Path Misconfigured - Host I/O access through a non-primary and non-optimal path was detected.
I wanted to thank everyone for their great suggestions on VMware and Solaris. My only other question to the group is in regard to ESX 3i. What should be done with ESX 3i?
On Wed, Mar 4, 2009 at 2:59 PM, Learmonth, Peter <Peter.Learmonth@netapp.com> wrote:
Found the KB: https://now.netapp.com/Knowledgebase/solutionarea.asp?id=kb44784
On Wed, Mar 18, 2009 at 2:34 PM, Linux Admin sysadmin.linux@gmail.com wrote:
I'm a little foggy on the details... it's been a while since I read them or thought about it. But, assuming you mean single_image mode, the host can see the LUNs on all four ports, and data can be accessed through all four ports. N1 may own the LUN, but it can still be accessed through N2 "via the interconnect".
Multipathing on the host determines which path will be used to access data, and whether it behaves as active/passive, round-robin, or something else. If you are using NetApp multipathing, it knows to use only the N1 path. But other multipathing software doesn't understand this, and will pick the primary path at random.
I have tons of VMware hosts that have seamless ESX multipathing but pick the wrong paths by themselves. Then you get the error. I get this error all the time and really haven't had a problem with it... but it's the first thing support will bring up when you call about anything. I have noticed that it seems to run the CPU a little high (since it has to shuttle data in and out of memory through the interconnect).
Fred
________________________________ From: Linux Admin sysadmin.linux@gmail.com To: NetApp Toasters List toasters@mathworks.com Sent: Wednesday, March 4, 2009 2:38:19 PM Subject: FCP Partner Path Misconfigured - Host I/O access through a non-primary and non-optimal path was detected.
Accessing a LUN across a partner path will function fine ... and in some cases, there may be no perceived problems. However, I/Os across the partner path do not perform as well under load as access via the primary paths. For ESX, proper path prioritization is very easy to accomplish with the scripts provided in the Host Utilities Kit.
As for the original issue on this thread ... assuming you have multipathing set up correctly, the warning may be associated with normal path-checking activity. Check your lun stats data and divide the "partner kb" by "partner ops"; if the result is ~512 bytes, then these warnings could simply be path checking done by multipathd. If the average I/O size seems significant, open a case with support and get some assistance in determining why I/O is going over the partner path.
-- errol
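Errol's partner-kb/partner-ops check is easy to script once you have the two counters from `lun stats`. The function names below are hypothetical (they are not part of any NetApp tool), and the 1 KB cutoff is just a loose interpretation of his "~512 bytes looks like path probing" heuristic:

```python
def partner_io_avg_bytes(partner_kb, partner_ops):
    """Average size of an I/O that crossed the partner path.

    partner_kb  -- the "partner kb" counter from lun stats
    partner_ops -- the "partner ops" counter from the same output
    """
    if partner_ops == 0:
        return 0.0  # no partner traffic at all
    return partner_kb * 1024 / partner_ops


def looks_like_path_checking(partner_kb, partner_ops, threshold=1024):
    """Heuristic from the thread: tiny (~512 B) average I/Os over the
    partner path are usually just multipathd path probes, while large
    averages suggest real data traffic taking the non-optimal path."""
    avg = partner_io_avg_bytes(partner_kb, partner_ops)
    return 0 < avg <= threshold


# Example: 50 partner ops moving 25 KB total is a 512 B average,
# consistent with path checking; 1000 ops moving 64000 KB (a 64 KB
# average) would be worth a support case.
```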
________________________________
From: Fred Grieco [mailto:fredgrieco@yahoo.com] Sent: Wednesday, March 04, 2009 2:21 PM To: Linux Admin; NetApp Toasters List Subject: Re: FCP Partner Path Misconfigured - Host I/O access through a non-primary and non-optimal path was detected.
Thank you all for your help!
On Wed, Mar 4, 2009 at 2:52 PM, Fouquet, Errol <Errol.Fouquet@netapp.com> wrote:
We wrote our own script that sets the optimal path across all 12 ESX servers, but over time the paths seem to switch back, and we aren't sure what is causing them to switch. We know the paths aren't going down, and we are not exceeding the capacity of the ports.
Thanks. Jack
Is Solaris 10 as bad about this as ESX Server? I also see the same issue with our Solaris servers.
On Wed, Mar 4, 2009 at 6:33 PM, Jack Lyons jack1729@gmail.com wrote:
Solaris 10 U2 and later supports ALUA. NetApp also supports ALUA. Bingo! Enable ALUA on Solaris, enable ALUA on the igroup on the NetApp side, set cfmode to single_image, and off you go.
Silviu
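For reference, a sketch of the commands involved, written from memory of Data ONTAP 7G and Solaris 10; treat this as a config fragment and verify the exact syntax against the documentation for your releases:

```
# On the NetApp controller: confirm cfmode, then enable ALUA on the igroup
fcp show cfmode
igroup set <igroup_name> alua yes

# On the Solaris 10 host: enable MPxIO multipathing (prompts for a reboot)
stmsboot -e

# After the reboot, verify the multipathed LUNs and their path states
mpathadm list lu
```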
On 3/5/09 4:41 PM, "Linux Admin" sysadmin.linux@gmail.com wrote: