I have seen this before. It has something to do with the mmap routine called by the cp program in Solaris 2.6. I believe that it is fixed in Solaris 2.8 build 33.
Using dd, everything goes OK. It may also be possible to "tar" the directory.
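The tar workaround can be sketched as a pipe between two tar processes. The paths below are stand-ins created with mktemp for illustration; in practice they would be the two NFS mount points:

```shell
#!/bin/sh
# Copy a directory tree with a tar pipe instead of cp (which goes
# through the problematic mmap path on Solaris 2.6). SRC and DST are
# placeholder directories standing in for the real NFS mounts.
SRC=$(mktemp -d)
DST=$(mktemp -d)
echo "payload" > "$SRC/fatfile"

# tar to stdout in SRC, untar from stdin in DST.
(cd "$SRC" && tar cf - .) | (cd "$DST" && tar xf -)
```

This preserves the directory layout and avoids cp entirely, at the cost of an extra process per side.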
CIAO

===============================================================
Federico VENIER                    Phone : +39 039 6858483
System Engineer                    Fax   : +39 039 6858485
Network Appliance Srl              Mob.  : +39 0348 4719025
Centro Direzionale Torri Bianche   mail  : fvenier@netapp.com
Palazzo Larice                     http://www.netapp.com
20059 Vimercate (MI) - Italy
===============================================================
-----Original Message-----
From: Toal, Dave [mailto:dave.toal@t-t.com]
Sent: Monday, January 15, 2001 21:00
To: 'toasters@mathworks.com'
Cc: Toal, Dave
Subject: solaris 2.6 nfs client tuning?
Hey, all.
This lots-of-nfs-mounts topic inspires me to ask yet another sanity-check question:
Currently I'm fixing a process which moves lots of data from one vol to another. I can't simply vol copy because there are things I can't overwrite on the destination. This data is in about 40 directories, with maybe 2G in each.
The method I'm using is to set up many pairs of mounts, one for each directory, all from the same two vols. This is because the nfs client, a sun 450 running solaris 2.6, shows huge lag for ls, cp, whatever, in a mounted directory where a cp is already running. iostat shows the nfsd devices at 100% busy -- for destination and source -- when a cp runs between a mount-point pair.
So:
filer1:/vol/A/d1 /mnt/A_01 filer2:/vol/B/d1 /mnt/B_01
filer1:/vol/A/d2 /mnt/A_02 filer2:/vol/B/d2 /mnt/B_02
filer1:/vol/A/d3 /mnt/A_03 filer2:/vol/B/d3 /mnt/B_03
ad nauseam... and if I cp /mnt/A_01/fatfile to /mnt/B_01 then there's no i/o lag whatsoever in /mnt/A_02.
These filers are 760's, by the way.
Then I discovered a serious "sweet spot" at a very low number. "max" is the number of simultaneous copies between mount point pairs. "k/sec" are my visual estimates of _each_ cp rate, from watching iostat for about a minute; rates jump around by +- 100 k/sec. Column on the right is total throughput for all cp.
max=3  ~ 2300 k/sec  |= 6900
max=4  ~ 1800 k/sec  |= 7200
max=5  ~ 1500 k/sec  |= 7500
max=6  ~ 1200 k/sec  |= 7200
max=8  ~  800 k/sec  |= 6400
max=9  ~  800 k/sec  |= 7200
max=10 ~  600 k/sec  |= 6000
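A driver for this kind of bounded parallelism can be sketched as below. The MAX value and the src:dst pair arguments are assumptions based on the numbers above; this is an illustration, not a tested production script:

```shell
#!/bin/bash
# Run at most MAX simultaneous cp jobs over a list of "src:dst" pairs.
# MAX=3 reflects the apparent sweet spot in the table above; the real
# /mnt/A_xx -> /mnt/B_xx pairs would be passed as arguments.
MAX=3

copy_pairs() {
    local pair src dst
    for pair in "$@"; do
        src=${pair%%:*}
        dst=${pair#*:}
        # Throttle: wait until a copy slot frees up.
        while [ "$(jobs -pr | wc -l)" -ge "$MAX" ]; do
            sleep 1
        done
        cp -pr "$src/." "$dst/" &
    done
    wait    # let the last background copies drain
}
```

Called as `copy_pairs /mnt/A_01:/mnt/B_01 /mnt/A_02:/mnt/B_02 ...`, it keeps the concurrency pinned near the measured sweet spot instead of launching all 40 copies at once.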
I've seen much higher data rates from the 760's.
So I'm thinking this means I need to learn nfs client tuning for solaris. Yes?
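One common first experiment on a Solaris 2.6 client, sketched below with option values that are guesses to try rather than measured settings for this workload: force NFSv3 over TCP with larger read/write transfer sizes than the defaults.

```shell
# Illustrative only -- these option values are assumptions to
# experiment with, not verified tuning for this workload.
mount -F nfs -o vers=3,proto=tcp,rsize=32768,wsize=32768 \
    filer1:/vol/A/d1 /mnt/A_01
```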
Dave
-- dave toal systems optimist Thomson & Thomson N. Quincy, MA
-----Original Message-----
From: Jeffrey Krueger
To: Rainchik, Aleksandr (MED, Non GE)
Cc: 'Jeffrey Krueger'; 'toasters@mathworks.com'
Sent: 1/15/01 1:37 PM
Subject: Re: /home layout with many filers and NIS automount
On Sat, Jan 13, 2001 at 03:16:20PM -0600, Rainchik, Aleksandr (MED, Non GE) wrote:
There is another thing against
#auto_home NIS map user1 filer2:/vol/vol1/&
design. We have a lot of development servers, so any time you add/delete/modify a single user you have to go through _all_ the NIS/automount clients and run "automount" to let automountd know that the passwd map has just been updated. Am I wrong again?
This is correct, but a simple cron job which runs periodically on all NIS clients should make short work of this task. =)
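Such a cron job could be as small as the fragment below; the hourly schedule is an assumption, and /usr/sbin/automount is the usual Solaris path:

```shell
# Example crontab entry (assumed schedule): re-run automount hourly so
# automountd picks up changed auto_home entries without intervention.
0 * * * * /usr/sbin/automount
```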
-- Jeff