I'm a little foggy on the details... it's been a while since I read up on this or thought about it. But, assuming you mean single_image mode, the host can see the LUNs on all four ports, and data can be accessed through all four ports. N1 may own the LUN, but it can still be accessed "through the interconnect" via N2.
Multipathing on the host determines which path will be used to access data, and whether it uses an active/passive or round-robin policy (or whatever). If you are using NetApp multipathing, it knows to use only the N1 path. But other multipathing software doesn't understand this, and will pick the primary path at random.
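On a Linux host, for example, you can usually steer device-mapper-multipath toward the owning controller's paths by grouping paths by priority. A minimal sketch of /etc/multipath.conf follows; the vendor/product strings and prioritizer name here are assumptions from memory, so check the NetApp Host Utilities documentation for the settings that match your ONTAP and multipath-tools versions:

```
# /etc/multipath.conf -- illustrative sketch only, NOT a tested config.
# Vendor/product strings and the prioritizer are assumptions; verify
# against the Host Utilities docs for your release.
devices {
    device {
        vendor               "NETAPP"
        product              "LUN"
        path_grouping_policy group_by_prio   # keep primary paths in the preferred group
        failback             immediate       # fall back to primary paths when they return
        path_checker         tur             # probe path health with TEST UNIT READY
    }
}
```

With paths grouped by priority, I/O stays on the owning node's ports and only falls over to the partner (proxy) paths when the primaries are down, which is what makes the autosupport warning go away.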
I have tons of VMware hosts that have seamless ESX multipathing, but they pick the wrong paths by themselves. Then you get the error. I get this error all the time and really haven't had a problem with it... but it's the first thing support will bring up when you call about anything. I have noticed that it seems to run the CPU a little high (since it has to swap things in and out of memory through the interconnect).
Fred
________________________________
From: Linux Admin <sysadmin.linux@gmail.com>
To: NetApp Toasters List <toasters@mathworks.com>
Sent: Wednesday, March 4, 2009 2:38:19 PM
Subject: FCP Partner Path Misconfigured - Host I/O access through a non-primary and non-optimal path was detected.
Hello Toasters.
I am receiving the following message from auto support: FCP Partner Path Misconfigured - Host I/O access through a non-primary and non-optimal path was detected.
I have a single cluster acting in single-fabric mode, and it is wired up in the following way. Each host has an igroup on each node of the cluster. Each NetApp node is connected to sw1 and sw2, and each host also has a path to sw1 and sw2. Why would the host access LUNs on N2 via the partner node N1?
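One way to see which paths are actually carrying the proxy I/O: NetApp's Host Utilities and the filer itself can both report it. The commands below are from memory and illustrative only (output format varies by Host Utilities and Data ONTAP release), so verify them against the docs for your versions:

```
# Illustrative sketch -- verify these commands against your release's docs.

# On the host (requires NetApp Host Utilities): list each LUN's paths
# and which are primary vs. partner (proxy) paths.
sanlun lun show -p

# On the filer (7-mode): per-LUN stats include partner ops/KB columns;
# non-zero partner traffic means I/O is crossing the interconnect.
lun stats -o
```

If the partner counters keep climbing while the host is idle on the primary paths, the host-side multipathing has picked a proxy path as its preferred one.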
 ---------------     ---------------
 {     n1      }     {     n2      }
 ---------------     ---------------
      |      \         /      |
      |       \       /       |
      |        \     /        |
 ---------------     ---------------
 {     s1      }     {     s2      }
 ---------------     ---------------
            \           /
             \         /
           ---------------
           {    host    }
           ---------------