Grab the output of statit for 30sec during this write test.
I have an idea, but need statit data to nail it down.
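If it helps, roughly this sequence should do it on 7-mode (statit lives under advanced privilege; "filer>" is just the prompt):

    filer> priv set advanced
    filer*> statit -b
    ... let the write test run for about 30 seconds ...
    filer*> statit -e
    filer*> priv set admin

statit -b starts collection; statit -e stops it and prints the report, including the per-disk read/write statistics.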
On Mon, Oct 21, 2013 at 7:44 AM, Jordan Slingerland <Jordan.Slingerland@independenthealth.com> wrote:
Hmm, interesting question. I am curious to see what the results are.
It sounds like you have ruled out reads coming from some other operation, such as the following (a few quick status checks are listed right after this list):
SnapMirror/SnapVault
Sync/semi-sync SnapMirror
RAID scrub
WAFL scan
Reallocation
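Roughly these commands should show whether any of those are active (7-mode syntax; wafl scan status needs advanced privilege):

    filer> snapmirror status
    filer> snapvault status
    filer> aggr scrub status
    filer> reallocate status -v
    filer> priv set advanced
    filer*> wafl scan status
    filer*> priv set admin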
Another thought: I am not sure whether fragmentation could cause that behavior, but do you have a very full aggregate?
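For example, something like this should show how full the aggregate is and where the space is going (aggr_name is a placeholder):

    filer> df -A -h
    filer> aggr show_space -h aggr_name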
-JMS
From: toasters-bounces@teaparty.net [mailto:toasters-bounces@teaparty.net] On Behalf Of Alon Zeltser
Sent: Monday, October 21, 2013 10:28 AM
To: toasters@teaparty.net
Subject: unexplained read operation
Hello toasters,
I have run into a rather strange situation.
I have a FAS2040 system with one aggregate built from a 23 x 450 GB RAID group, running Data ONTAP 8.1.3P2.
Every write operation generated on this aggregate automatically causes a read operation.
When I try to generate a 100% sequential write workload using tools such as sio_ntap or filersio, instead of the expected 150 MB/s of throughput I get 100 MB/s of writes and 50 MB/s of reads.
On another FAS2040 system, using the same commands, I get 100% writes and no reads.
As you can see:
filersio asyncio_active 0 -r 0 64k 0 15g 60 1 /vol/test/testfile

CPU NFS CIFS HTTP Total Net kB/s  Disk kB/s  Tape kB/s Cache Cache  CP CP Disk OTHER FCP iSCSI FCP kB/s iSCSI kB/s
                         in  out  read write read write  age   hit time ty util                 in out    in  out
93% 0 0 0 5 26 1 24242 118671 0 0 0s 99% 72% Fn 43% 0 0 5 0 0 24 0
90% 0 0 0 4 4 1 40959 102501 0 0 19 100% 89% Fn 43% 1 0 3 0 0 2 0
86% 0 0 0 3 3 1 23126 125022 0 0 0s 99% 69% : 59% 0 0 3 0 0 2 0
93% 0 0 0 7 1 1 28200 135740 0 0 0s 99% 97% Ff 48% 7 0 0 0 0 0 0
92% 0 0 0 6 15 1 48764 115324 0 0 0s 99% 91% Ff 52% 0 0 6 0 0 14 0
90% 0 0 0 6 16 1 36148 156492 0 0 0s 99% 83% Ff 56% 0 0 6 0 0 14 0
95% 0 0 0 0 4 4 32784 114555 0 0 0s 99% 77% Fs 48% 0 0 0 0 0 0 0
92% 0 0 0 18 90 12 34931 104059 0 0 0s 99% 88% Fn 42% 0 0 18 0 0 79 0
88% 0 0 0 8 3 1 20463 124800 0 0 0s 100% 83% Fn 45% 5 0 3 0 0 2 0
89% 0 0 0 2 2 1 34050 112166 0 0 0s 99% 87% Fn 44% 0 0 2 0 0 1 0
92% 0 0 0 3 7 1 33025 112816 0 0 0s 99% 88% Fn 46% 0 0 3 0 0 5 0
91% 0 0 0 3 3 1 30929 130833 0 0 0s 100% 73% : 50% 0 0 3 0 0 2 0
92% 0 0 0 7 24 2 36484 134260 0 0 0s 100% 98% Ff 50% 0 0 7 0 0 22 0
86% 0 0 0 6 1 1 38608 125140 0 0 0s 99% 90% Ff 44% 5 0 1 0 0 1 0
93% 0 0 0 4 3 1 28748 104544 0 0 0s 100% 75% Ff 43% 0 0 4 0 0 2 0
87% 0 0 0 5 18 1 28128 158608 0 0 0s 100% 90% Fs 55% 0 0 5 0 0 17 0
93% 0 0 0 2 3 34 33115 104980 0 0 0s 99% 85% Fn 41% 0 0 2 0 0 1 32
84% 0 0 0 5 28 7 37185 109425 0 0 0s 100% 78% Fn 48% 0 0 5 0 0 21 0
91% 0 0 0 9 3 1 30224 118322 0 0 0s 99% 97% Fn 47% 7 0 2 0 0 1 0
89% 0 0 0 2 3 4 27649 118335 0 0 0s 99% 88% : 45% 0 0 2 0 0 1 0
CPU NFS CIFS HTTP Total Net kB/s  Disk kB/s  Tape kB/s Cache Cache  CP CP Disk OTHER FCP iSCSI FCP kB/s iSCSI kB/s
                         in  out  read write read write  age   hit time ty util                 in out    in  out
88% 0 0 0 14 57 39 43284 133480 0 0 0s 100% 98% Ff 50% 0 0 14 0 0 52 0
92% 0 0 0 2 6 16 47936 109432 0 0 0s 99% 86% Ff 46% 0 0 2 0 0 1 0
88% 0 0 0 23 86 3 45340 141014 0 0 0s 99% 73% Ff 55% 3 0 20 0 0 82 0
92% 0 0 0 10 5 4 21787 123980 0 0 0s 99% 80% Fn 44% 7 0 3 0 0 1 0
95% 0 0 0 2 4 3 34751 98158 0 0 0s 99% 82% Fn 44% 0 0 2 0 0 1 0
91% 0 0 0 1 3 1 35022 118549 0 0 0s 100% 60% Fn 48% 0 0 1 0 0 0 0
92% 0 0 0 87 3 1 25455 123066 0 0 0s 100% 80% Fn 47% 84 0 3 0 0 1 0
83% 0 0 0 8 32 2 36956 121303 0 0 0s 100% 88% : 46% 0 0 8 0 0 30 0
85% 0 0 0 5 2 1 62064 140088 0 0 0s 100% 92% Ff 59% 5 0 0 0 0 0 0
95% 0 0 0 3 3 1 39352 117364 0 0 0s 99% 71% Ff 48% 0 0 3 0 0 2 0
94% 0 0 0 3 3 1 42008 136943 0 0 0s 99% 94% Fs 53% 0 0 3 0 0 2 0
92% 0 0 0 0 2 7 26147 111286 0 0 0s 99% 58% Fn 44% 0 0 0 0 0 0 0
90% 0 0 0 3 3 1 29567 110665 0 0 20 99% 87% Fn 44% 0 0 3 0 0 1 0
90% 0 0 0 9 4 1 40031 111273 0 0 16 99% 86% Fn 43% 5 0 4 0 0 2 0
84% 0 0 0 0 1 1 36760 105709 0 0 0s 100% 84% Fn 43% 0 0 0 0 0 0 0
92% 0 0 0 3 3 1 31033 137891 0 0 0s 100% 99% : 53% 0 0 3 0 0 2 0
83% 0 0 0 4 6 1 32991 140148 0 0 0s 100% 89% F 51% 0 0 4 0 0 5 0
Any thoughts on why this could happen?
Nothing else is running on this NetApp except for this test.
Thanks,
Alon Z
Toasters mailing list
Toasters@teaparty.net
http://www.teaparty.net/mailman/listinfo/toasters