Is Solaris 10 as bad about this as ESX Server? I also see the same issue with our Solaris servers.
On Wed, Mar 4, 2009 at 6:33 PM, Jack Lyons jack1729@gmail.com wrote:
We wrote our own script that fixes the optimal path across all 12 ESX servers, but over time the paths seem to switch back, and we aren't sure what is causing them to switch. We know the paths aren't going down, and we are not exceeding the capacity of the ports.
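(For anyone wanting to do the same by hand, a minimal sketch follows, assuming ESX 3.x esxcfg-mpath syntax; the vmhba LUN/path IDs are placeholders, and the flags vary by release, so verify against esxcfg-mpath on your build before trusting it.)

    #!/bin/sh
    # Sketch only: force a fixed path policy on each LUN and prefer the
    # path to the owning (primary) controller. All IDs are placeholders.
    for ID in 10 11 12; do
        esxcfg-mpath --policy=fixed --lun=vmhba1:0:$ID
        esxcfg-mpath --preferred --path=vmhba1:0:$ID --lun=vmhba1:0:$ID
    done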
Thanks. Jack
Linux Admin wrote:
Thank you all for your help!
On Wed, Mar 4, 2009 at 2:52 PM, Fouquet, Errol <Errol.Fouquet@netapp.com> wrote:
Accessing a LUN across a partner path will function fine, and in some cases there may be no perceived problems. However, under load, I/Os across the partner path do not perform as well as access through the primary paths. For ESX, proper path prioritization is very easy to accomplish with the scripts provided in the Host Utilities Kit.

As for the original issue on this thread: assuming you have multipathing set up correctly, the warning may be associated with normal path-checking activity. Check your lun stats data and divide the "partner kb" by "partner ops"; if the answer there is ~512 bytes, then these warnings could simply be path checking done by multipathd. If the average I/O size seems significant, open a case with support and get some assistance in determining why I/O is going over the partner path.

-- errol
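P.S. To illustrate the arithmetic with made-up numbers (read the two counters off "lun stats -o" for the LUN in question):

    # Hypothetical counters taken from "lun stats -o" output:
    partner_kb=6420
    partner_ops=12840
    # Average partner I/O size in KB per op:
    echo "scale=3; $partner_kb / $partner_ops" | bc
    # prints .500 -> 0.5 KB (~512 bytes) per op, which looks like
    # multipathd path checking rather than real host I/O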
*From:* Fred Grieco [mailto:fredgrieco@yahoo.com] *Sent:* Wednesday, March 04, 2009 2:21 PM
*To:* Linux Admin; NetApp Toasters List *Subject:* Re: FCP Partner Path Misconfigured - Host I/O access through a non-primary and non-optimal path was detected.
I'm a little foggy on the details... it's been a while since I read them or thought about it. But, assuming you mean single_image mode, the host can see the LUNs on all four ports, and data can be accessed through all four ports. Now, N1 may own the LUN, but it can still be accessed via N2 "through the interconnect".
Multipathing on the host determines which path will be used to access data, and whether to use an active/passive or round-robin policy (or whatever). If you are using NetApp multipathing, it knows to use only the N1 path. But other multipathing doesn't understand this, and will pick the primary path at random.
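For what it's worth, on a Linux host the NetApp-aware setup is roughly this kind of multipath.conf device stanza (a sketch only; the prio callout name and defaults vary by multipath-tools version):

    device {
        vendor                "NETAPP"
        product               "LUN"
        # group paths by priority so primary paths are preferred and
        # partner paths are only used on failover
        path_grouping_policy  group_by_prio
        prio_callout          "/sbin/mpath_prio_ontap /dev/%n"
        failback              immediate
    }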
I have tons of VMware hosts that have seamless ESX multipathing, but they pick the wrong paths by themselves. Then you get the error. I get this error all the time and really haven't had a problem with it... but it's the first thing support will bring up when you call about anything. I have noticed that it seems to run the CPU a little high (since the filer has to shuttle data in and out through the interconnect).
Fred
*From:* Linux Admin <sysadmin.linux@gmail.com> *To:* NetApp Toasters List <toasters@mathworks.com> *Sent:* Wednesday, March 4, 2009 2:38:19 PM *Subject:* FCP Partner Path Misconfigured - Host I/O access through a non-primary and non-optimal path was detected.
Hello Toasters.
I am receiving the following message from AutoSupport: FCP Partner Path Misconfigured - Host I/O access through a non-primary and non-optimal path was detected.
I have a single cluster acting in single-fabric mode, wired up as follows. Each host has an igroup on each node of the cluster. Each NetApp node is connected to sw1 and sw2, and each host also has a path to sw1 and sw2. Why would the host access LUNs on N2 via partner node N1? (Diagram below.)
 ---------------     ---------------
 {     n1      }     {     n2      }
 ---------------     ---------------
    |      \             /      |
    |       \           /       |
    |        \         /        |
    |         \       /         |
 ---------------     ---------------
 {     s1      }     {     s2      }
 ---------------     ---------------
         \               /
          \             /
         ---------------
         {    host     }
         ---------------
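(A sketch of how one might confirm which paths are actually in use, assuming the NetApp Host Utilities are installed; output formats vary by version.)

    # On the filer: confirm the cluster's cfmode (e.g. single_image)
    fcp show cfmode

    # On the host: list every path to each LUN; primary paths go straight
    # to the owning node, partner paths cross the interconnect
    sanlun lun show -p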