Hi Folks - we have some admittedly underpowered 270c filers in a mixed CIFS/NFS environment. We are seeing a random performance issue whereby CP time increases and the NFS ops they serve drop considerably, resulting in an effective outage.
The question is: when I look at the sysstat output collected during these periods, a lot of time is spent checkpointing 'special files'. It's been a while since I covered ONTAP theory, so if someone could explain this in further detail it would be helpful. The NOW docs omit extended explanations.
Below is a sample of normal vs. checkpointing sysstat output.
Thanks,
Daryl
Normal
CPU    NFS  CIFS  HTTP  Total    Net kB/s    Disk kB/s    Tape kB/s  Cache  Cache    CP   CP  Disk   FCP  iSCSI   FCP kB/s
                                  in    out   read  write  read write   age    hit  time   ty  util                in   out
92%   1758     2     0   1760    495   2621    6404      0     0     0   41s    75%    0%   -    55%     0      0     0     0
89%   2759     3     0   2762    781   6441   11140      6     0     0   22s    83%    0%   -    68%     0      0     0     0
96%   2605    20     0   2625   1566   7347   12961     24     0     0   41s    78%    0%   -    70%     0      0     0     0
97%   2224     3     0   2227    688   4856    8198      0     0     0   41s    84%    0%   -    49%     0      0     0     0
92%   4027     2     0   4029   1860   5267    9829     12     0     0   14s    89%    0%   -    72%     0      0     0     0
96%   2136     0     0   2136    535   1809    6405     17     0     0   42s    85%    0%   -    64%     0      0     0     0
95%   2383     0     0   2383    642   3245    8769      0     0     0   42s    82%    0%   -    67%     0      0     0     0
97%   2210     0     0   2210    595   3823    8309   1578     0     0   14s    89%   66%   Tn   61%     0      0     0     0
85%   4889     0     0   4889   1881   5163   15287   7252     0     0   23s    85%  100%   :s  100%     0      0     0     0
90%   2393     3     0   2396    817  10048   18018   2474     0     0   39s    89%  100%   :s   70%     0      0     0     0
95%   1603    18     0   1621    913   4532    8437   4025     0     0   40s    88%  100%   :v   54%     0      0     0     0
94%   2260     0     0   2260   1109   4061    8470     51     0     0   40s    82%   12%   :    61%     0      0     0     0
88%   3532     3     1   3536    890   5716   11892      0     0     0   24s    83%    0%   -    71%     0      0     0     0
92%   3347     2     0   3349   2197   6462   11603     20     0     0   23s    87%    0%   -    68%     0      0     0     0
94%   3607    17     0   3624   1013   6285   11300      7     0     0   15s    85%    0%   -    69%     0      0     0     0
95%   2359     7     0   2366    598   4574    8985      0     0     0   41s    84%    0%   -    69%     0      0     0     0
94%   3981     0     0   3981   1098   4065   10324     23     0     0   39s    86%    0%   -    74%     0      0     0     0
83%   3965     0     0   3965    946   4709   11296      0     0     0   39s    82%    0%   -    83%     0      0     0     0
88%   3318     0     0   3318   1009   5220   11772     24     0     0   23s    83%    0%   -    86%     0      0     0     0
88%   2821     0     0   2821    750   3974    9762      8     0     0   14s    79%    0%   -    69%     0      0     0     0
Checkpointing
CPU    NFS  CIFS  HTTP  Total    Net kB/s    Disk kB/s    Tape kB/s  Cache  Cache    CP   CP  Disk   FCP  iSCSI   FCP kB/s
                                  in    out   read  write  read write   age    hit  time   ty  util                in   out
98%   1651     7     0   1658    669   2196    8671     15     0     0   49s    88%    0%   -    44%     0      0     0     0
99%    567     0     0    567    176   1022    2601      3     0     0   46s    87%    0%   -    41%     0      0     0     0
99%    614     0     0    614    205   1632    3408      8     0     0     1    89%   22%   :n   27%     0      0     0     0
99%    929     1     0    930    252   1803    3865    237     0     0   44s    89%  100%   Tn   32%     0      0     0     0
98%    463     0     0    463    385   1338    4066   1224     0     0     1    91%  100%   :n   32%     0      0     0     0
98%    398     6     0    404    145   1894    5134   1787     0     0     1    81%  100%   :s   34%     0      0     0     0
99%    558     0     0    558    186   1716    3675    912     0     0   50s    89%  100%   :s   34%     0      0     0     0
99%    677     1     0    678    919    902    2892    587     0     0     1    87%  100%   :s   33%     0      0     0     0
99%    703     1     0    704    293   1179    3173    731     0     0   52s    89%  100%   :s   26%     0      0     0     0
97%   2250    17     0   2267    717   3024   10358   3462     0     0   52s    94%  100%   :v   63%     0      0     0     0
98%   1032     0     0   1032    619    933    2482     81     0     0   49s    92%   11%   :    33%     0      0     0     0
99%    875     1     0    876    293   2328    3818      6     0     0   44s    92%    0%   -    33%     0      0     0     0
99%    325     0     0    325    116    579    1595      5     0     0   42s    91%    0%   -    23%     0      0     0     0
99%    884     1     0    885    382   1613    3723     10     0     0     1    93%   38%   :n   24%     0      0     0     0
98%    917     0     0    917    586   1193    3136    415     0     0   42s    88%  100%   Tn   31%     0      0     0     0
99%    675     0     0    675    462    554    4040   2610     0     0     1    90%  100%   :s   28%     0      0     0     0
98%    718     0     0    718    396    990    2469    355     0     0   44s    92%  100%   :s   30%     0      0     0     0
99%    555     1     0    556    181    965    2101    257     0     0     1    89%  100%   :s   33%     0      0     0     0
99%    824     0     0    824    391   1268    3484    877     0     0   48s    97%  100%   :s   29%     0      0     0     0
97%   2119    14     0   2133   2154   2389    5898    347     0     0   45s    93%  100%   :f   50%     0      0     0     0
Is anyone using symbolic links/widelinks with their filer in a cifs environment?
Have you experienced any problems? Any suggestions?
Tried this once - it seemed to work well. Use the tool downloadable from NOW (now.netapp.com) to create the link. It took a while to figure out how to get rid of the link when it was no longer required (I used priv set advanced mode). Currently not using any symlinks on CIFS anymore.
Generally speaking, it's Snapshot or WAFL doing work on metadata in the background. I don't think we have enough data yet. Can you capture a statit during the normal period and during the checkpointed period? Just run 'priv set -q advanced ; statit -b', wait 30 seconds, then run 'priv set -q advanced ; statit -e'. You'll have to redirect (>) the output into a file (or copy and paste from the console), as there's a lot of output. I usually run this via rsh, or ssh with key authentication, from my admin host.
Capturing it during the problem and during normal ops will help a lot.
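If it helps, here's that capture as a one-shot sketch you can fire off by hand when the slowdown hits; the filer name is a placeholder and it assumes ssh key authentication from the admin host:

#!/bin/sh
# Start statit, let it run for 30 seconds, then dump the report to a file.
FILER=filer01
OUT=statit_$(date +%Y%m%d_%H%M%S).out
ssh "$FILER" 'priv set -q advanced ; statit -b'
sleep 30
ssh "$FILER" 'priv set -q advanced ; statit -e' > "$OUT"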
You can also track latency on your volumes just to get an idea of this problem's impact: stats show -i 1 -n 30 volume:$volname:read_latency volume:$volname:write_latency
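For reference, a minimal wrapper around that latency command (filer and volume names are placeholders; assumes ssh key authentication as above):

#!/bin/sh
# Append a timestamped 30-sample read/write latency capture for one volume.
FILER=filer01
VOL=vol0
date >> latency_${VOL}.log
ssh "$FILER" "stats show -i 1 -n 30 volume:${VOL}:read_latency volume:${VOL}:write_latency" >> latency_${VOL}.log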
-Blake
Thanks for the response - actually being there when this happens is the problem; it appears to be quite irregular. However, if I catch it I will update this thread with the information.
Daryl
If you don't mind collecting a fair amount of information, you could just run a script that collects statit output for a while. Here's a quick script I use for this.
#!/bin/bash

# Usage: <script> <filer hostname>; needs rsh and ssh access to the filer.
RSH=/usr/bin/rsh
LOGFILE=`date | awk '{print $1"_"$2"_"$3"_"$4}'`
i=1

while [ "$i" -lt 10000 ]
do
    date | awk '{print $1"_"$2"_"$3"_"$4}' >> $LOGFILE
    ssh $1 'priv set -q diag ; statit -b' >> $LOGFILE
    ssh $1 'priv set -q diag ; nfs_hist -z' >> $LOGFILE

    ## generic
    ssh $1 sysstat -c 30 -x -s 1 >> $LOGFILE

    ## output after the 30-second sysstat run
    $RSH $1 'priv set -q diag ; statit -e' >> $LOGFILE
    $RSH $1 'priv set -q diag ; nfs_hist' >> $LOGFILE

    # bump the counter so the loop stops after 10000 passes
    i=$(( i + 1 ))
done
That'll go for 10,000 iterations of the while loop. It needs rsh and ssh access to the filer. You could run it until you capture the problem, then Ctrl-C it. It writes everything out as plain text, so it compresses really well once it's done.
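For example, assuming the script above is saved as statit_loop.sh and the filer answers to filer01 (both names are placeholders):

chmod +x statit_loop.sh
./statit_loop.sh filer01    # run until the slowdown shows up, or until the loop ends
# afterwards, gzip the date-stamped logfile the script leaves behind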
-Blake
Is there a way to change the command-line output display format in Data ONTAP? I plan to write some scripts to collect data from certain commands, but the way things are output (i.e., values split across multiple lines) makes parsing somewhat difficult. For example:
ARRAY*> lun show -m
LUN path                        Mapped to       LUN ID  Protocol
-----------------------------------------------------------------------
/vol/LUNS/11                    servera             11     FCP
                                serverb             11     FCP
/vol/LUNS/12                    serverc             12     FCP
                                serverd             12     FCP
Is there a way to make this simpler so that each line has all the details? It wouldn't be as nice to look at, but scripting would definitely be easier.
Brian Beaird
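Until an API-based approach is in place, one stopgap is to flatten the wrapped output in a post-processing step. A rough sketch, assuming the column layout shown above and an admin host with ssh access (the filer name is a placeholder):

#!/bin/sh
# Flatten 'lun show -m' so every mapping lands on one line:
#   <lun path> <mapped-to host/igroup> <lun id> <protocol>
FILER=filer01
ssh "$FILER" lun show -m | awk '
    /^\//   { path = $1; print path, $2, $3, $4; next }  # row that starts a new LUN path
    NF == 3 { print path, $1, $2, $3 }                   # continuation row: reuse the last path
'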
Hi Brian - you want our SDK, Manage ONTAP.
It provides API calls for a couple of OO languages, including Perl, and the output is returned in XML. There's even a little Windows applet called zexplorer that lets you browse the API calls and look at the XML output.
Best bet is to contact your NetApp account team and have them hook up your NOW account to get access (no charge).
Enjoy!
Peter
You can try using the APIs: http://www.netapp.com/solutions/dfms/advantage.html. I think anyone can sign up for them now.
Or you can mine the information from SNMP. Look at /etc/mib/netapp.mib and /etc/mib/iscsi.mib.
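If you go the SNMP route, a minimal net-snmp sketch, assuming SNMP is enabled on the filer and the community string is 'public' (both assumptions); 1.3.6.1.4.1.789 is NetApp's enterprise subtree:

# Walk the whole NetApp enterprise subtree; copy /etc/mib/netapp.mib to your
# admin host's MIB directory if you want symbolic names instead of raw OIDs.
snmpwalk -v 2c -c public filer01 1.3.6.1.4.1.789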
- Rick -
Have you tried 'stats show lun'? I don't have LUNs so it's hard for me to tell, but I've found this format useful for parsing things (not just getting stats).