Hi
I've got a problem with qtree exports after a Netapp cluster failover.
I'm running VCS (Veritas Cluster Server) on 6 Solaris hosts, and I'm using the NetApp agent to manage mounting and unmounting qtrees from a clustered NetApp.
Instead of listing your exports in the exports file, you set them in your service group: you allow the group to rsh to the filer and export the qtree to the host it is running on.
So if your group fails over from host A, host B will rsh in and export the qtree to itself. This way the qtree cannot be mounted on two hosts at the same time.
The problem is that after a NetApp cluster failover the filer reads the exports file off disk, so all the qtrees that were exported before (in memory only) are no longer exported, and VCS just sits there waiting for them to come back.
If we stop and start the service group under VCS the exports come back, but this is not a good solution.
What are people doing to get around this problem?
How do the people who run VCS and Oracle handle this?
Thoughts? Ideas?
Here is an example; mbe is a netgroup containing the 6 hosts.
The filer's /etc/exports file:
/vol/vol0/vmb01 -sec=sys,ro,root=mbe
/vol/vol0/vmb03 -sec=sys,ro,root=mbe
/vol/vol0/vmb05 -sec=sys,ro,root=mbe
/vol/vol0/vmb07 -sec=sys,ro,root=mbe
/vol/vol0/vmb09 -sec=sys,ro,root=mbe
/vol/vol0/vmb11 -sec=sys,ro,root=mbe
Before failover, these exports are set up by VCS rsh'ing into the filer and running a command like this:
Thu Mar 2 17:35:27 EST [cust-filer3: rshd_0:debug]: :IN:rsh shell:RSH INPUT COMMAND is exportfs -i -o sec=sys,root=mbe,rw=mcn05.msn,ro=mcn01.msn:mcn02.msn:mcn03.msn:mcn04.msn:mcn06.msn:mcn07.msn /vol/vol0/vmb11
/vol/vol0/vmb01 -sec=sys,ro,root=mbe
/vol/vol0/vmb03 -sec=sys,ro,root=mbe
/vol/vol0/vmb05 -sec=sys,ro,root=mbe
/vol/vol0/vmb07 -sec=sys,ro=mcn02.msn:mcn03.msn:mcn04.msn:mcn05.msn:mcn06.msn:mcn07.msn,rw=mcn01.msn,root=mbe
/vol/vol0/vmb09 -sec=sys,ro=mcn01.msn:mcn02.msn:mcn04.msn:mcn05.msn:mcn06.msn:mcn07.msn,rw=mcn03.msn,root=mbe
/vol/vol0/vmb11 -sec=sys,ro=mcn01.msn:mcn02.msn:mcn03.msn:mcn04.msn:mcn06.msn:mcn07.msn,rw=mcn05.msn,root=mbe
After failover (note the host-specific exports are gone):
/vol/vol0/vmb01 -sec=sys,ro,root=mbe
/vol/vol0/vmb03 -sec=sys,ro,root=mbe
/vol/vol0/vmb05 -sec=sys,ro,root=mbe
/vol/vol0/vmb07 -sec=sys,ro,root=mbe
/vol/vol0/vmb09 -sec=sys,ro,root=mbe
/vol/vol0/vmb11 -sec=sys,ro,root=mbe
In other words: your Solaris cluster is not editing the filer's /etc/exports file during a Veritas takeover.
Change the rsh command from
"exportfs -i -o" to "exportfs -p"
and the /etc/exports file will be updated automatically, so the filer taking over will find the right, current export settings. This should solve your problem.
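Applied to the logged VCS command above, the change would look something like this. This is only a sketch: it assumes the -p form of exportfs accepts the same option string that the -i -o form does.

```shell
# Hypothetical persistent version of the logged VCS command.
# With -p, exportfs applies the export AND writes the rule to
# /etc/exports, so a filer taking over after a cluster failover
# re-reads the same rule from disk instead of losing it.
rsh cust-filer3 exportfs -p \
    sec=sys,root=mbe,rw=mcn05.msn,ro=mcn01.msn:mcn02.msn:mcn03.msn:mcn04.msn:mcn06.msn:mcn07.msn \
    /vol/vol0/vmb11
```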
Best regards! Dirk
I think the use of -i is hurting you. Why not just edit the exports file in vol0, then rsh in and run "exportfs /some/path"? The export will then be both in memory and on disk, and if the filer fails over it will read the correct exports file.
--- Greg Wilson gwilson@connect.com.au wrote:
-- Greg Wilson Senior System Administrator greg.wilson@aapt.com.au
I would suggest editing the filer's /etc/exports file to change the line for /vol/vol0/vmbXX and then run this command:
exportfs /vol/vol0/vmbXX
That way, if you fail over, your exports file is correct. Assuming your clients are not NFS-mounting the filer's /vol/vol0, you can use the NetApp "rdfile" and "wrfile" commands to read the exports file and then write back the new one, like this:
# make local copy of filer's exports file
rsh filer rdfile /vol/vol0/etc/exports > /tmp/exports.old
# put the new exports line in a shell variable
NEWLINE='/vol/vol0/vmbXX -sec=sys, ...'
# use sed to create a new local exports file from the old one
sed "s?^/vol/vol0/vmbXX.*?$NEWLINE?" /tmp/exports.old > /tmp/exports.new
# put the new local exports file back on the filer
rsh filer wrfile /vol/vol0/etc/exports < /tmp/exports.new
# re-export /vol/vol0/vmbXX
rsh filer exportfs /vol/vol0/vmbXX
Watch out for "wrfile filename" because it always destroys any old data in "filename". I suggest testing the file editing script carefully using a file other than /vol/vol0/etc/exports!
Of course, if your NFS client mounts /vol/vol0 then you don't need to use rdfile and wrfile because you have NFS access to the filer's exports file.
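In the spirit of that testing advice, here is a hypothetical local dry run of the sed step above. No filer is involved: it builds a scratch copy of an exports file (the qtree and host names are made-up examples) and verifies that only the intended line is rewritten.

```shell
# Build a scratch file standing in for the filer's /etc/exports
cat > /tmp/exports.old <<'EOF'
/vol/vol0/vmb01 -sec=sys,ro,root=mbe
/vol/vol0/vmb11 -sec=sys,ro,root=mbe
EOF

# The replacement line for /vol/vol0/vmb11 (example options)
NEWLINE='/vol/vol0/vmb11 -sec=sys,ro=mcn01.msn,rw=mcn05.msn,root=mbe'

# Using '?' as the sed delimiter avoids escaping the slashes in the path
sed "s?^/vol/vol0/vmb11.*?$NEWLINE?" /tmp/exports.old > /tmp/exports.new

cat /tmp/exports.new
# /vol/vol0/vmb01 -sec=sys,ro,root=mbe
# /vol/vol0/vmb11 -sec=sys,ro=mcn01.msn,rw=mcn05.msn,root=mbe
```

Only after a dry run like this succeeds would you point the script at the real file with rdfile/wrfile.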
Steve Losen scl@virginia.edu phone: 434-924-0640
University of Virginia ITC Unix Support