On Fri, 12 Feb 1999, Gordon Keegan wrote:
Speaking of NFS over TCP, we'd also like to try turning nfs.tcp.enable on to do some testing. Does anyone know whether enabling it at the filer command line with "options nfs.tcp.enable on" would affect current mounts? Can it even be done that way?
Your clients aren't currently mounting with TCP, so you'll have to have them unmount from the server and then remount after you've enabled it on the server side. So it is kind of a chore.
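A minimal test sequence might look like the following; the filer name, export path, and mount point are made up, and `proto=tcp` is the Solaris-style mount option for requesting TCP explicitly:

```shell
# On the filer console: enable NFS over TCP (existing UDP mounts keep working).
options nfs.tcp.enable on

# On a single test client (hypothetical names; Solaris-style mount options):
umount /mnt/netapp
mount -o proto=tcp filer1:/vol/vol0/home /mnt/netapp
```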
I probably didn't phrase the question as well as I should have. If I enable NFS over TCP, then mount a single host using TCP and do testing, will that stomp on the dozens of other clients that still have their UDP mounts? (there; hopefully that was better :)
"Stomp" in what sense?
Enabling NFS-over-TCP doesn't disable NFS-over-UDP, so it won't stomp on them in that sense.
If those clients reboot, though, they probably will start using NFS-over-TCP to the machine that has it enabled, provided their NFS client code supports NFS-over-TCP and they haven't been configured to force NFS-over-UDP access.
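On Solaris, one way to check which transport a given mount actually picked up (for instance after a reboot) is `nfsstat -m`; the mount point below is hypothetical:

```shell
# Show per-mount NFS options on a Solaris client; the Flags line
# reports the negotiated version and transport, e.g. "vers=3,proto=tcp".
nfsstat -m /mnt/netapp
```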
Guy Harris wrote:
We've seen a situation where someone "tested" NFS-over-TCP and, by the time they were done, many, many, many hosts (all running very active automounters) had NFS-over-TCP connections to the NetApp.
That had to be followed by a day of rebooting the UNIX boxes. Not fun.
(OK, OK, the rebooting can be avoided if you can manually umount from the server... but that isn't often possible around here.)
I wish there were a "do nfs-over-tcp on current mounts but not on new mount requests" option.
--tal
Basically, you have pegged the problem.
If you turn nfs.tcp.enable ON you have to leave it on as clients idly (or wildly :-) automount. Disabling it (explicitly, or implicitly by rebooting if the option isn't set in the /etc/rc file) will cause your clients to hang, since TCP is no longer an available transport.
Same thing happens if you start playing with enabling and disabling Version 3.
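If you do decide to leave the option on, it has to be made persistent: options typed at the filer console are lost on reboot unless they also appear in the filer's /etc/rc, which you can edit from an admin host that mounts the filer's root volume. A sketch (the mount path is made up, and `nfs.v3.enable` is my guess at the name of the v3 switch):

```shell
# On an admin host with the filer's root volume mounted (path assumed):
echo "options nfs.tcp.enable on" >> /mnt/filer-root/etc/rc

# If you also change the v3 setting, record it the same way
# (option name assumed):
echo "options nfs.v3.enable on" >> /mnt/filer-root/etc/rc
```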
Anyway, I have a question:
- Has anyone enabled NFS/TCP on a NetApp box permanently? If so, why?
- Have any of you customers disabled Version 3 NFS for any reason? If so, why?
I'm curious.
beepy
Brian Pawlowski beepy@netapp.com writes:
Anyway, I have a question:
- Has anyone enabled NFS/TCP on a NetApp box permanently? If so, why?
(All of the below info only applies to Solaris here.)
Yes. Because when the option was added to the NetApp OS, our Sun clients needed it. I've kept the option because it's never caused any problems (well, until recently). One way that I look at it is that NFS/TCP is tested in our environment, NFS/UDP is not.
(The recent problem is a Solaris bug where Solaris boxes can't reestablish NFS/TCP connections if the TCP route is ever dropped. It only affects Sun-NetApp connections that cross a router. Using a default route or UDP, we couldn't reproduce the problem.)
We considered disabling NFS/TCP because of the above problem, but since we were planning on changing our Suns to use router discovery (creating a default route), it wasn't necessary. Previously, we were running "routed -q" on all Suns (a historical artifact).
Note that we wouldn't have disabled the NetApp option until after all Suns had stopped using it. We would force udp mounting on each Sun client via the udp mount option and then wait a very long time.
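Forcing UDP on a Sun client, as described above, would be done per mount or per automounter map entry; names and paths here are hypothetical:

```shell
# One-off mount forced to UDP on a Solaris client:
mount -o proto=udp filer1:/vol/vol0/home /mnt/netapp

# Or in an automounter map (e.g. /etc/auto_home), add the option
# to each entry:
#   alice  -proto=udp  filer1:/vol/vol0/home/&
```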
Actually, the default route *can* be dropped if the router is down for longer than the router discovery protocol timeout, so we're not 100% clear of the problem. Maybe we'll switch to UDP...
- Has any of you customers disabled Version 3 NFS for any reason, if so why?
No.
- Dan
There was a problem between NetApp and DEC: the DEC NFS v3 client was not really capable of handling the readdir response from the NetApp. So NetApp made their OS more forgiving toward DEC (so the clients would no longer hang), and eventually DEC made their NFS v3 able to handle real v3 responses. We had turned off v3 mounting at the client end, not at the filer. I think 4.3R4 was the fixed NetApp version and 4.0D with kit #2 was the fixed Compaq (Digital UNIX) version.
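Turning off v3 "at the client end" in this way usually just means forcing version 2 in the mount options; a hypothetical example in the Solaris style (server name and paths are made up):

```shell
# Force NFS version 2 (and thus avoid the v3 readdir interaction)
# for a single mount:
mount -o vers=2 filer1:/vol/vol0/data /mnt/data
```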
At 10:29 PM -0800 2/14/99, Brian Pawlowski wrote:
Anyway, I have a question:
- Has anyone enabled NFS/TCP on a NetApp box permanently? If so, why?
** Yes!
- Have any of you customers disabled Version 3 NFS for any reason? If so, why?
}}}===============>> LLNL James E. Harm (Jim); jharm@llnl.gov (925) 422-4018 Page: 423-7705x57152