Hi,
over the last month we have had the following average values for the 16 disks (DFM report):
Read Ops/sec: 90
write Ops/sec: 4
Throughput Blocks/sec: 410
Busy: 30%
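As a quick sanity check (not part of the DFM report itself), the read share implied by those per-disk averages can be computed directly; note it works out to roughly 96% rather than a flat 90%:

```python
# Average per-disk ops from the DFM report quoted above
read_ops = 90   # read ops/sec
write_ops = 4   # write ops/sec

read_share = read_ops / (read_ops + write_ops)
print(f"read share: {read_share:.1%}")  # prints "read share: 95.7%"
```

This only reflects the disk-level averages; front-end NFS ops served from cache never hit the disks, so the host-visible mix can differ.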
We don't have any I/O-intensive VMs, but we'll buy a second shelf, too. After starting the backup process (via NFS), it looks like this:
chip4> sysstat -c 10 -u 1
 CPU   Total     Net   kB/s    Disk   kB/s    Tape   kB/s  Cache  Cache    CP  CP  Disk
       ops/s      in    out    read  write    read  write    age    hit  time  ty  util
 13%    2069   10553  54047   54364      0       0      0     3s    89%    0%  -    73%
 15%    2153   11929  57947   59072      0       0      0     3s    90%    0%  -    78%
 30%    3717   55428  53787   54260     32       0      0     7s    89%    0%  -    80%
 14%    1958    8163  54683   54284      0       0      0     6s    92%    0%  -    62%
 10%    1748    4542  59106   56584      0       0      0     6s    92%    0%  -    58%
 20%    1120    2840  39477   57432  61508       0      0     2     98%   61%  T    87%
 18%    1367    4342  51629   52008  57672       0      0     2     98%  100%  :    77%
 14%    1686    6793  57079   55528  29748       0      0     2     92%   64%  :    73%
 10%    1390    2445  45892   44252     24       0      0     2     91%    0%  -    60%
  9%    1522    3233  57545   54080      0       0      0     2     92%    0%  -    52%
chip4> sysstat -c 10 1
 CPU     NFS    CIFS    HTTP     Net   kB/s    Disk   kB/s    Tape   kB/s  Cache
                                  in    out    read  write    read  write    age
 11%    2296       0       0    2973  76892   54128     32       0      0     6s
 18%    4346       0       0    4197  71055   77452      0       0      0     6s
 15%    4373       0       0    2398  77533   65772      0       0      0     6s
 20%    2104       0       0    2507  46679   47012  35220       0      0     4
 12%    2464       0       0    2348  57280   46576  20636       0      0     4
 11%    2699       0       0    3298  60688   43872      8       0      0     4
 11%    2317       0       0    3760  55468   48212     24       0      0     4
 13%    2940       0       0    3645  66395   50112      0       0      0     6s
 12%    3124       0       0    2774  70399   46052      0       0      0     6s
 11%    2602       0       0    2888  60654   37088     24       0      0     6s
greets
Steffen
From: Sto Rage© [mailto:netbacker@gmail.com]
Sent: Wednesday, September 26, 2012 01:34
To: Steffen Knauf
Cc: toasters@teaparty.net
Subject: Re: Raidgroupsize and I/O Performance
Take a look at http://media.netapp.com/documents/tr-3801.pdf first, before you add the PAM card, to see if it will make a difference in your environment.
On Tue, Sep 25, 2012 at 1:19 AM, Steffen Knauf <sknauf@chipxonio.de> wrote:
Hello,
 
I'm trying to improve our I/O performance. We have one raidgroup with 16 disks (SAS), 1 aggregate and 1 volume (dedup enabled). The volume is the storage for 100 VMs on the VMware cluster (access via NFS). Does it make sense to increase the raidgroup size? 90% of the disk I/O is read ops, so I'll also buy a PAM card for our FAS3240.
 
And what's your experience with the raidgroup size for SATA disks (currently 11+1)?
 
Thanks and greets!
 
Steffen
 
 
 
_______________________________________________
Toasters mailing list
Toasters@teaparty.net
http://www.teaparty.net/mailman/listinfo/toasters