Mount settings

Flags: rw,vers=3,rsize=65536,wsize=65536,hard,proto=tcp,timeo=600,retrans=2,sec=sys

AFAIK these are the defaults.

I have not seen any delays that are on the order of seconds.  What I see is a steady stream of one request, one response.  What I was hoping to see is multiple requests followed by multiple responses (to deal with the latency).
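As a rough back-of-envelope for why that lockstep pattern is so slow (the 5-RPCs-per-file figure is my own illustrative assumption — the real mix of LOOKUP/CREATE/SETATTR/WRITE/COMMIT per file will vary):

```python
# Estimate wall-clock time for a strictly serialized (one request,
# one response) NFS copy over a high-latency link.
# Assumption: ~5 round trips per file; purely illustrative.
RTT_SECONDS = 0.040    # 40 ms round trip, per the original post
RPCS_PER_FILE = 5
FILES = 600

total_seconds = FILES * RPCS_PER_FILE * RTT_SECONDS
print(f"Estimated copy time: {total_seconds:.0f} s")  # ~2 minutes for <12 MB
```

At that rate the latency, not the bandwidth, completely dominates the transfer.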

  arnold



On Thu, Oct 17, 2013 at 11:29 AM, Jordan Slingerland <Jordan.Slingerland@independenthealth.com> wrote:

 

Generally, the more files there are the slower things are going to be, but 600 files doesn’t really seem like that many. 

 

Are there any delays in excess of several seconds in your packet capture? 

How long is the transfer actually taking, compared to a transfer of the same data in a tar? 

 

From: toasters-bounces@teaparty.net [mailto:toasters-bounces@teaparty.net] On Behalf Of Arnold de Leon
Sent: Thursday, October 17, 2013 2:17 PM
To: toasters
Subject: Slow copy of a directory full of files via an NFS client across a WAN

 

I have an NFS client (CentOS 5) that is mounting a volume from a filer across a WAN with about 20 ms of latency (40 ms round trip).  We have an application that makes a "backup" copy of a directory of data before modifying it.  The directory can contain a lot of tiny files (for example 600 files, taking up less than 12 MB).  I'm using NFS v3 over TCP (also tried v3 over UDP).

 

The NFS copy (using cp -a) is glacial.  Looking at the Wireshark packet captures shows the client and the server in a deadly lockstep request/reply situation (typically GETATTR and SETATTR).  As far as I can tell the client/server are not allowing any windowing/pipelining of requests.  Is there a setting I am missing to enable more outstanding requests (on the client or the server)?  All the searches I've done for tuning seem to be about fixing issues for large bulk transfers.  Is the NFS protocol inherently limited in this way?  I expected this kind of behavior in SMB 1 but not with NFS.
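One user-space workaround (a sketch, not a client or filer setting — the function name, paths, and 16-worker default are my own) is to keep multiple requests in flight by copying files concurrently, so the per-file round trips overlap instead of serializing the way cp -a does:

```python
# Hypothetical sketch: overlap per-file NFS round trips by copying a
# flat directory of files with a thread pool instead of one at a time.
import shutil
from concurrent.futures import ThreadPoolExecutor
from pathlib import Path

def parallel_copy(src_dir: str, dst_dir: str, workers: int = 16) -> int:
    """Copy every regular file in src_dir to dst_dir concurrently."""
    src, dst = Path(src_dir), Path(dst_dir)
    dst.mkdir(parents=True, exist_ok=True)
    files = [p for p in src.iterdir() if p.is_file()]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        # copy2 preserves timestamps and permissions, like cp -p
        list(pool.map(lambda p: shutil.copy2(p, dst / p.name), files))
    return len(files)
```

With the round-trip time dominating, wall-clock time should drop roughly in proportion to the number of workers, since each thread's request/reply lockstep proceeds independently.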

 

Thanks.

 

  arnold