Have you tried looking at sysstat to see if you are CPU-bound?
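For reference, output like the below comes from the filer console with something like the following (the interval argument is seconds between samples; -s just prints a summary when you stop it with Ctrl-C). A one-second interval makes a short burst like a single dd easy to see:

```
filer> sysstat -s 1
```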
 CPU    NFS   CIFS   HTTP     Net kB/s      Disk kB/s     Tape kB/s  Cache
                             in    out    read  write   read  write    age
  1%     80      0      0     33     32      0      0      0      0     31
  3%    142      0      0     64     27      0      0      0      0     31
  2%    114      0      0     25     22      0      0      0      0     31
  5%    358      0      0    130     59      4      0      0      0     32
  3%    164      0      0     97    324      0      0      0      0     32
 10%    386      0      0     39     30      0      0      0      0     32
 10%    194      0      0    260    969    499   1366      0      0     32
  3%    147      0      0     46     19      0      0      0      0     32
  6%    538      0      0    127     77      0      0      0      0     32
  3%    233      0      0     91     70     44      0      0      0     32
  1%     55      0      0     47     87      0      0      0      0     32
  1%     78      0      0     14     12      0      0      0      0     32
  2%     54      0      0     11      9      0      0      0      0     32
  2%     73      0      0     82     13      0      0      0      0     32
  2%     65      0      0     16     14      0      0      0      0     32
  2%     88      0      0     18     10      0      0      0      0     32
  4%    251      0      0    132    548     24      0      0      0     32
 13%    255      0      0     33    499   1149   1279      0      0     32
  4%    383      0      0    173    265      0      0      0      0     32
This is where I start the "dd" command from one client:
 CPU    NFS   CIFS   HTTP     Net kB/s      Disk kB/s     Tape kB/s  Cache
                             in    out    read  write   read  write    age
 34%    641      0      0   3475    158      0      0      0      0     32
 67%   1186      0      0  12199    429    447   1076      0      0     31
 92%    465      0      0  10658    345   4260  12384      0      0     31
 59%    366      0      0   7916    569   5942   7996      0      0     30
 65%    385      0      0   2483     88   3796   9448      0      0     30
 86%    532      0      0  10905    298   4822   6962      0      0     30
 29%    198      0      0   3677    124   4248   6216      0      0     30
 39%    185      0      0     43     33   3279   8821      0      0     30
 81%    567      0      0   9633    289   3596   4324      0      0     30
 56%    492      0      0   7431    248   4581  10072      0      0     30
 58%    309      0      0   1559     80   3784   7996      0      0     30
 89%    431      0      0  11055    289   4216  11152      0      0     30
 54%    232      0      0   4395    120   5193  10486      0      0     30
 95%    638      0      0   9654   1763   5041   9093      0      0     30
 54%    235      0      0   7448    212   4836   9484      0      0     30
 61%    451      0      0   2358    108   3973   6904      0      0     30
 86%    403      0      0  11381    287   4020   8644      0      0     30
 30%    220      0      0   3283    148   3900   8569      0      0     30
 61%    266      0      0   2332    201   4016   7169      0      0     30
 87%    393      0      0  11756    290   5169   8016      0      0     30
 32%     85      0      0   2916    220   4280  10476      0      0     30
 88%    373      0      0   6730    172   3770  10337      0      0     30
 63%    365      0      0  10224   1272   4268   7307      0      0     30
 54%    201      0      0    670     33   2304  10253      0      0     30
 93%    399      0      0  10572    270   4432  14344      0      0     30
 69%    388      0      0   7534    265   3984   7152      0      0     30
 96%    786      0      0  10458   2983   5344  10208      0      0     30
 43%    152      0      0   4733    140   3906   7796      0      0     30
 94%    596      0      0   8758    309   5590  10625      0      0     30
 62%    519      0      0   8361    654   5178   9170      0      0     30
 46%    475      0      0     96     91   3949   7162      0      0     30
 95%    714      0      0   9087    285   3359  17425      0      0     30
This is where the "dd" completes (it takes about 30 seconds):
  5%    267      0      0   2020    114     36    941      0      0     30
  4%    316      0      0     85     90     48      0      0      0     30
I don't think one client should be able to saturate the filer.
Joe
At 05:05 PM 10/16/2003, devnull@adc.idt.com wrote:
Dear All,
I have an F740 running Data ONTAP 6.1.1R2 with one Gigabit and two 100 Mbps Ethernet ports.
I am seeing very poor performance on this filer from Sun and Linux clients running Solaris 8 and Red Hat 7.3 (kernel 2.4.20-ac2).
I am trying to dd a 200 MB file, and I would like to check whether the bottleneck is poor disk or poor network performance.
Is there an easy way to check disk performance while I dd the file to a particular volume? (The NetApp has 3 volumes, and all 3 give equally bad results.)
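For concreteness, the kind of test I am running looks roughly like this. The mount point is an example (MNT defaults to /tmp here only so the snippet runs anywhere); in practice you would point it at the filer's NFS mount and compare the timed MB/s against what sysstat reports on the filer:

```shell
# Rough NFS write/read throughput test from one client.
# MNT would normally be the filer's NFS mount point, e.g. /mnt/filer
# (a hypothetical path); /tmp is only a stand-in default.
MNT=${MNT:-/tmp}
F="$MNT/ddtest.$$"

# Write a 200 MiB file (3200 * 64 KiB) and time it.
time dd if=/dev/zero of="$F" bs=64k count=3200

# Read it back; note the client's page cache can inflate this number
# unless the file is larger than client RAM.
time dd if="$F" of=/dev/null bs=64k

rm -f "$F"
```

Dividing 200 MB by the elapsed write time gives a throughput figure to compare across volumes and clients.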
I don't think the Solaris and Linux clients are configured incorrectly; I have an F810 that, for now, seems to be performing close to my expectations.
NFS over TCP is enabled on both filers.
Is there anything unusual to look out for when doing a snoop, etc.?
Can anyone also confirm whether the default NFS mount on Solaris clients is TCP or UDP? I am inclined to think it is UDP if nothing is specified in either the NIS maps or the mount options.
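(One way to check what a given client actually negotiated, if helpful; the filer name, volume, and mount point below are just examples. On Solaris, nfsstat -m shows the options in effect for each mount, and the transport can be forced explicitly:)

```
# Show negotiated options for an existing NFS mount;
# look for proto=tcp or proto=udp in the Flags line
nfsstat -m /mnt/filer

# Mount forcing NFSv3 over TCP (as root, Solaris syntax)
mount -F nfs -o vers=3,proto=tcp filer:/vol/vol1 /mnt/filer
```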
Thanks,
-- /dev/null
devnull@adc.idt.com
--
Joseph C King                             410-455-3929 (O)
Coordinator of Business Systems           410-455-1065 (F)
Delta Initiative
University of Maryland, Baltimore County
jking@umbc.edu