Whenever we see "NFS server not responding" we immediately check 'systat' on the F330s and find that they're operating near peak; our suspicion (not being NetApp designers) is that response time is degraded enough that the NFS timers on the client have timed out waiting for a response.
One doesn't need to be a NetApp designer to guess that one - an NFS client will time out if the server doesn't respond fast enough, regardless of whether the server is a filer or not.
(Basically, there are two levels of timeout-and-retry with NFS.
NFS runs atop ONC RPC. When ONC RPC runs atop "unreliable" transports such as UDP, it will retransmit a request if it doesn't get a reply quickly enough. It does that a small number of times, and then returns a "timed out" error to its caller. When it runs atop "reliable" transports such as TCP, it leaves that retransmission up to the transport layer - but if it doesn't get a response back quickly enough, for a value of "quickly enough" larger than for the unreliable-transport retransmission, it gives up and returns a "timed out" error to its caller, on the theory that the server presumably got the request - as the transport didn't return a "connection timed out" error - but somehow didn't manage to handle it or get a reply back.
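To make that concrete, here's a rough sketch in Python of the two behaviors. It's purely illustrative - the timeouts, retry count, and function names are made up, and real ONC RPC also does XID matching and record marking that this skips:

    import socket

    class RpcTimedOut(Exception):
        """Stand-in for the "timed out" error RPC hands back to its caller."""

    def udp_rpc_call(server, request, per_try_timeout=1.0, max_tries=5):
        # Over an "unreliable" transport the RPC layer retransmits on its
        # own: send the request, wait briefly for a reply, resend, and
        # after a small number of tries give up and report "timed out".
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.settimeout(per_try_timeout)
        try:
            for _ in range(max_tries):
                sock.sendto(request, server)
                try:
                    reply, _addr = sock.recvfrom(8192)
                    return reply              # server answered in time
                except socket.timeout:
                    continue                  # no reply yet; retransmit
            raise RpcTimedOut("no reply after %d tries" % max_tries)
        finally:
            sock.close()

    def tcp_rpc_call(server, request, total_timeout=25.0):
        # Over a "reliable" transport the kernel does the retransmitting;
        # the RPC layer just waits longer for a single reply.  If the
        # connection itself didn't fail but no reply shows up, it assumes
        # the server got the request and somehow never answered it.
        sock = socket.create_connection(server, timeout=total_timeout)
        try:
            sock.sendall(request)
            reply = sock.recv(8192)           # simplified: ignores record marking
            if not reply:
                raise RpcTimedOut("connection closed without a reply")
            return reply
        except socket.timeout:
            raise RpcTimedOut("no reply within %g seconds" % total_timeout)
        finally:
            sock.close()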
Most callers probably give up if RPC returns a "timed out" error. That's what NFS does with a soft mount. However, with a hard mount, NFS will log an "NFS server not responding" error, and make another RPC call, and if that times out, it'll make another call, until it gets one that succeeds or, if the mount was with "intr", somebody interrupts the loop with a signal.)
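The soft/hard/intr distinction, sketched the same way (again illustrative only, reusing the hypothetical udp_rpc_call and RpcTimedOut names from the sketch above):

    def nfs_call_soft(server, request):
        # Soft mount: one RPC-level timeout and the "timed out" error goes
        # straight back to the application.
        return udp_rpc_call(server, request)

    def nfs_call_hard(server, request, intr=False):
        # Hard mount: log "NFS server not responding" and keep retrying
        # until the server answers.  With "intr", a signal (modeled here
        # as KeyboardInterrupt) is allowed to break the loop; without it,
        # only a successful reply gets you out.
        while True:
            try:
                return udp_rpc_call(server, request)
            except RpcTimedOut:
                print("NFS server %s not responding, still trying" % (server,))
            except KeyboardInterrupt:
                if intr:
                    raise
                # not mounted "intr": ignore the interrupt and keep going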
So if you peg a server, you could get "NFS server not responding". I think there may be, somewhere around here, a set of guidelines for running on a filer some of the undocumented commands mentioned in another thread, to gather enough information to see what the bottleneck is (main memory? NVRAM? disks? CPU?) and what needs to be done to remove it; Tech Support might have that (Beepy?).