Dear All,
I have an F740 running ONTAP 6.1.1R2 with 1 Gigabit and 2 100 Mbps Ethernet ports.
I am seeing very poor performance against this filer from Sun and Linux clients running Solaris 8 and RH 7.3 (2.4.20-ac2).
I am trying to dd a 200 MB file, and I would like to check whether the bottleneck is caused by poor disk or poor network performance.
Is there an easy way to check disk performance while I dd the file to a particular volume? (The NetApp has 3 volumes, and all 3 give equally bad results.)
I don't think the Solaris and Linux clients are misconfigured; I have an F810 that, for now, seems to be performing pretty close to my expectations.
NFS over TCP is enabled on both filers.
Is there anything unusual to look out for when doing a snoop, etc.?
Can anyone also confirm whether the default mount on Solaris clients is TCP or UDP? I want to say it is UDP if not specified in either the NIS maps or the mount options.
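A quick way to confirm on a Solaris client is nfsstat -m, which lists what each NFS mount actually negotiated (standard bundled tool; nothing below is specific to the filer):

    # show negotiated options for every NFS mount on this client;
    # the Flags: line for each mount shows vers=, proto=tcp|udp,
    # and the rsize/wsize actually in use
    nfsstat -m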
Thanks,
Have you tried to look at sysstat and see if you are CPU bound?
Joe
--
----------------------------------------------------------------
Joseph C King                              410-455-3929 (O)
Coordinator of Business Systems            410-455-1065 (F)
Delta Initiative
University of Maryland, Baltimore County   jking@umbc.edu
Have you tried to look at sysstat and see if you are CPU bound?
  1%    80     0     0    33    32      0      0     0     0    31
  3%   142     0     0    64    27      0      0     0     0    31
  2%   114     0     0    25    22      0      0     0     0    31
  5%   358     0     0   130    59      4      0     0     0    32
  3%   164     0     0    97   324      0      0     0     0    32
 10%   386     0     0    39    30      0      0     0     0    32
 10%   194     0     0   260   969    499   1366     0     0    32
 CPU   NFS  CIFS  HTTP    Net kB/s    Disk kB/s    Tape kB/s  Cache
                           in   out    read  write  read write   age
  3%   147     0     0    46    19      0      0     0     0    32
  6%   538     0     0   127    77      0      0     0     0    32
  3%   233     0     0    91    70     44      0     0     0    32
  1%    55     0     0    47    87      0      0     0     0    32
  1%    78     0     0    14    12      0      0     0     0    32
  2%    54     0     0    11     9      0      0     0     0    32
  2%    73     0     0    82    13      0      0     0     0    32
  2%    65     0     0    16    14      0      0     0     0    32
  2%    88     0     0    18    10      0      0     0     0    32
  4%   251     0     0   132   548     24      0     0     0    32
 13%   255     0     0    33   499   1149   1279     0     0    32
  4%   383     0     0   173   265      0      0     0     0    32
THIS IS WHERE I START MY "dd" command from one client.
 34%   641     0     0   3475   158      0      0     0     0    32
 67%  1186     0     0  12199   429    447   1076     0     0    31
 92%   465     0     0  10658   345   4260  12384     0     0    31
 59%   366     0     0   7916   569   5942   7996     0     0    30
 65%   385     0     0   2483    88   3796   9448     0     0    30
 86%   532     0     0  10905   298   4822   6962     0     0    30
 29%   198     0     0   3677   124   4248   6216     0     0    30
 39%   185     0     0     43    33   3279   8821     0     0    30
 CPU   NFS  CIFS  HTTP    Net kB/s    Disk kB/s    Tape kB/s  Cache
                           in   out    read  write  read write   age
 81%   567     0     0   9633   289   3596   4324     0     0    30
 56%   492     0     0   7431   248   4581  10072     0     0    30
 58%   309     0     0   1559    80   3784   7996     0     0    30
 89%   431     0     0  11055   289   4216  11152     0     0    30
 54%   232     0     0   4395   120   5193  10486     0     0    30
 95%   638     0     0   9654  1763   5041   9093     0     0    30
 54%   235     0     0   7448   212   4836   9484     0     0    30
 61%   451     0     0   2358   108   3973   6904     0     0    30
 86%   403     0     0  11381   287   4020   8644     0     0    30
 30%   220     0     0   3283   148   3900   8569     0     0    30
 61%   266     0     0   2332   201   4016   7169     0     0    30
 87%   393     0     0  11756   290   5169   8016     0     0    30
 32%    85     0     0   2916   220   4280  10476     0     0    30
 88%   373     0     0   6730   172   3770  10337     0     0    30
 63%   365     0     0  10224  1272   4268   7307     0     0    30
 54%   201     0     0    670    33   2304  10253     0     0    30
 93%   399     0     0  10572   270   4432  14344     0     0    30
 69%   388     0     0   7534   265   3984   7152     0     0    30
 96%   786     0     0  10458  2983   5344  10208     0     0    30
 43%   152     0     0   4733   140   3906   7796     0     0    30
 CPU   NFS  CIFS  HTTP    Net kB/s    Disk kB/s    Tape kB/s  Cache
                           in   out    read  write  read write   age
 94%   596     0     0   8758   309   5590  10625     0     0    30
 62%   519     0     0   8361   654   5178   9170     0     0    30
 46%   475     0     0     96    91   3949   7162     0     0    30
 95%   714     0     0   9087   285   3359  17425     0     0    30
THIS IS WHERE "dd" completes (takes about 30 sec).
  5%   267     0     0   2020   114     36    941     0     0    30
  4%   316     0     0     85    90     48      0     0     0    30
I don't think one client should be able to saturate the filer.
devnull@adc.idt.com wrote:
... I don't think one client should be able to saturate the filer.
It's a 740. One client can easily write-saturate it.
-skottie
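For rough scale, using the trace above: 200 MB in about 30 seconds is roughly 200 / 30 ≈ 6.7 MB/s (about 54 Mbit/s) coming in over the wire, while the sysstat rows during the run show disk writes mostly in the 7,000-12,000 kB/s range with CPU repeatedly above 85%. The filer is writing noticeably more to disk than the client is sending (parity and metadata on top of the data), which is consistent with a single streaming writer pushing an F740 toward its write ceiling.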
On Thu, Oct 16, 2003 at 05:36:10PM -0400, devnull@adc.idt.com wrote:
Have you tried to look at sysstat and see if you are CPU bound?
Look at sysstat -x 1; in particular, look at disk utilization and the column that shows ':', 'F', 'T', 'B', etc. If you see lots of 'B' (back-to-back flushes), then you are simply maxing out your filer's ability to flush NVRAM to disk. You may be able to speed it up by adding spindles (easiest) and possibly some other tweaks.
It's also possible that you've got packet loss on the network and are getting lots of retrans.
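A couple of quick client-side checks for that (stock Solaris tools; the interface name and hostname are just examples):

    # RPC client statistics -- a climbing retrans or badxid count means
    # calls are being lost or the server is slow to reply
    nfsstat -rc

    # per-interface error and collision counters -- non-zero errors or
    # collisions on a supposedly full-duplex link usually point at a
    # duplex mismatch somewhere in the path
    netstat -i

    # since snoop was mentioned: capture traffic to the filer and look
    # for retransmitted NFS calls and long gaps between call and reply
    snoop -d hme0 host filer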
If this is a new installation, then you should check your network connection for a port conflict. I recommend that you turn off the auto-negotiate feature and hard-set the network port and the filer to 100 Mbps full duplex.
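For reference, hard-setting both ends looks roughly like this; the interface names are examples, the ONTAP mediatype keyword depends on the NIC, and the ndd parameters below apply to hme/eri-style Solaris interfaces:

    # on the filer, for a 100BASE-TX port (put the same line in /etc/rc
    # so it survives a reboot)
    ifconfig e0 mediatype 100tx-fd

    # on a Solaris client with an hme interface: disable autonegotiation
    # and advertise only 100 full duplex
    ndd -set /dev/hme instance 0
    ndd -set /dev/hme adv_autoneg_cap 0
    ndd -set /dev/hme adv_100hdx_cap 0
    ndd -set /dev/hme adv_100fdx_cap 1

    # and pin the matching switch port to 100/full as well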
-Ed
Ed Marsh edmarsh@ti.com wrote:
If this is a new installation, then you should check your network connection for a port conflict. I recommend that you turn off the auto-negotiate feature and hard-set the network port and the filer to 100 Mbps full duplex.

My interface is set up as: ifconfig e8 `hostname`-e8 mediatype auto-1000sx netmask 255.255.255.0, with flowcontrol set to full.
I don't have flow control set on my switch... do you think that might cause these problems?
I have had this same setup for a while now, so I am not sure what is causing this sudden drop in performance.
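One way to see whether flow control or link errors are actually biting, on the filer side (interface name as in the ifconfig line above; the exact counters shown vary a little between ONTAP releases):

    # per-interface statistics: look for CRC/length errors, discards,
    # and (on NICs that report them) flow-control frame counts
    ifstat e8

    # confirm what the link actually negotiated
    ifconfig e8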
Interestingly enough, I had a similar situation a few weeks ago on my F760. Here is what I did to solve the problem. It turns out that if your filer has a gig-e card and your clients have 100 Mb cards (or certain gig-e cards, particularly on Solaris), the filer blasts data at the client card faster than it can keep up. This seems odd to me; however, I was curious about duplex settings as well, so I ruled out network issues by transferring the same file (made with mkfile at the desired size) via FTP. FTP was more than 30x faster than NFS, so obviously something was wrong there.
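Roughly what that comparison looks like, with placeholder paths and hostnames:

    # make a test file of known size on the Solaris client
    mkfile 200m /var/tmp/testfile

    # time it over the NFS mount
    time cp /var/tmp/testfile /mnt/netapp/testfile

    # then put the same file to the filer with ftp and compare the rate
    # ftp reports; if ftp is dramatically faster, the raw network path is
    # fine and the problem is in the NFS layer (transfer sizes, transport)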
I mounted the same directory in a different spot, using UDP, but dropped the rsize and wsize down to 8k. In NFSv2 the default was 8k, and in certain filer versions I believe it changed, but I am not sure; NFSv3 now uses 32k instead of 8k. The difference was blazing: a 35-minute file transfer went to under 2 minutes in some cases.
I have similar machines with 100 Mb cards that don't have the problem, which makes it more confusing. Under normal circumstances you *do* want the larger chunk sizes, and indeed testing showed that dropping most gig-e machines down to 8192 cost some performance. If you can deal with dropping it on every host, set it to 8192 on the filer and mounts will never be anything but 8k or smaller; clients cannot negotiate higher than the filer is set to. I ended up putting local automount maps on the specific boxes that had the problem, because the global automount map is used by varying systems.
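A sketch of both approaches; the filer option name is from memory, so verify it with 'options nfs' on your ONTAP release, and the map entry paths are made up:

    # on the filer: cap the UDP transfer size so no client can negotiate
    # more than 8k (option name from memory -- check your release)
    options nfs.udp.xfersize 8192

    # or per host: an automounter map entry that pins 8k I/O for one box
    data  -rw,proto=udp,vers=3,rsize=8192,wsize=8192  filer:/vol/vol1/data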
The best approach is to try a manual mount first with -o proto=udp,rsize=8192,wsize=8192 and toy with the settings from there. But first rule out any network issues (like duplex mismatches) with FTPs/SCPs.
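A minimal manual test, assuming a scratch mount point and a made-up volume path (Solaris first, then the Linux 2.4 equivalent), with nfsstat -m afterwards to confirm what was actually negotiated:

    mkdir -p /mnt/nfstest

    # Solaris 8
    mount -F nfs -o vers=3,proto=udp,rsize=8192,wsize=8192 filer:/vol/vol1 /mnt/nfstest

    # Linux 2.4 (RH 7.3)
    mount -t nfs -o nfsvers=3,udp,rsize=8192,wsize=8192 filer:/vol/vol1 /mnt/nfstest

    # rerun the write test against the new mount and compare
    time dd if=/dev/zero of=/mnt/nfstest/ddtest bs=1024k count=200
    nfsstat -m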
Jerry