Basically, you have pegged the problem.
If you turn nfs.tcp.enable ON, you have to leave it on, because clients idly (or wildly :-) automount over TCP. Disabling it (explicitly, or implicitly on reboot if the option isn't set in the /etc/rc file) will cause those clients to hang, since TCP is no longer an available transport.
Same thing happens if you start playing with enabling and disabling Version 3.
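For reference, both switches live in the filer's `options` namespace and are toggled at the console. A sketch of the commands involved (option names as I remember them on Data ONTAP; check your release's docs before relying on them):

```shell
filer> options nfs.tcp.enable on    # allow clients to mount over TCP
filer> options nfs.tcp.enable off   # existing TCP mounts will hang!
filer> options nfs.v3.enable off    # same hang caveat applies to v3 clients
```

The asymmetry is the whole problem: turning an option on is harmless, but turning it back off strands every client that quietly started using it in the meantime.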
Anyway, I have a question:
- Has anyone enabled NFS/TCP on a NetApp box permanently? If so, why?
- Have any of your customers disabled Version 3 NFS for any reason? If so, why?
I'm curious.
beepy
Guy Harris wrote:
I probably didn't phrase the question as well as I should have. If I enable NFS over TCP, then mount a single host using TCP and do testing, will that stomp on the dozens of other clients that still have their UDP mounts?
"Stomp" in what sense?
Enabling NFS-over-TCP doesn't disable NFS-over-UDP, so it won't stomp on them in that sense.
If those clients reboot, though, they will probably start using NFS-over-TCP to the machine that has it enabled, provided their NFS client code supports NFS-over-TCP and they haven't been configured to force NFS-over-UDP access.
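One way to keep a test confined is to pin the transport explicitly in the mount options on each client, so only the one test host negotiates TCP. A sketch using Solaris-style NFS mount flags (`proto=`/`vers=`; the exact flag names vary by client OS, so treat these as illustrative):

```shell
# Force UDP -- the status quo for the rest of the fleet:
mount -o vers=3,proto=udp toaster:/vol/vol0 /mnt/test

# Force TCP on just the one host doing the testing:
mount -o vers=3,proto=tcp toaster:/vol/vol0 /mnt/test
```

Automounter maps can carry the same options, which is exactly why un-pinned automount clients drift onto TCP as soon as the server offers it.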
We've seen a situation where someone "tested" NFS-over-TCP, and by the time they were done, many, many, many hosts (all running very active automounters) had NFS-over-TCP connections to the NetApp.
This has to be followed by a day of rebooting the UNIX boxes. Not fun.
(Ok, ok, the rebooting can be avoided if you can manually umount the server's filesystems on each client... but that isn't an option very often around here)
I wish there were a "do NFS-over-TCP on current mounts but not on new mount requests" option.
--tal