That's not just a NetApp thing... that's performance 101!
There will ALWAYS be a bottleneck somewhere - if you want to 'fix' a
performance problem, find the first bottleneck, and fix it. Then the
next, and so on. Keep going until the performance is adequate...
I've seen very few cases where the loop itself gets saturated before the
head gives up, or the memory runs out... but it can happen. I expect
that it will happen even more quickly with the new 6000 series given the
sheer amount of memory/NVRAM/CPU. Then again, 500TB of disks is a LOT
of loop too :)
Glenn
-----Original Message-----
From: Brosseau, Paul [mailto:Paul.Brosseau@netapp.com]
Sent: Wednesday, October 04, 2006 8:17 PM
To: Glenn Walker; Suresh Rajagopalan
Cc: toasters@mathworks.com
Subject: RE: Estimating Aggregate IOPS
Good point Glenn on the number of disks. Once the IOPS on a loop generate
enough throughput to saturate the loop, there's not much point in adding
more disks. Spreading the load across multiple loops helps. How does the
adage go? "Go wide, then go shallow, then go deep." Eventually you
saturate something somewhere, depending on the filer model.
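As a very rough illustration of where a single loop's throughput ceiling
kicks in - the ~200 MB/s figure for a 2 Gb FC-AL loop, the op sizes, and
the fixed per-disk IOPS are all assumptions for the sketch, not numbers
from this thread:

    # Rough point at which a single loop, rather than the spindles, becomes
    # the bottleneck.  All inputs are assumptions for illustration; per-disk
    # IOPS is held fixed across op sizes, which is a simplification.
    LOOP_MBPS = 200          # ~2 Gb/s FC-AL, ignoring protocol overhead
    PER_DISK_IOPS = 110      # 10k rpm figure used elsewhere in this thread

    def disks_to_fill_loop(op_size_kb):
        loop_iops = LOOP_MBPS * 1024 / op_size_kb
        return loop_iops / PER_DISK_IOPS

    for op_kb in (4, 64, 256):
        print(f"{op_kb} KB ops -> ~{disks_to_fill_loop(op_kb):.0f} disks to saturate the loop")

Small random I/O is hard-pressed to fill the loop before something else
gives; big sequential transfers get there with far fewer spindles, which
is where spreading the load across loops pays off.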
Paulb
-----Original Message-----
From: Glenn Walker [mailto:ggwalker@mindspring.com]
Sent: Wednesday, October 04, 2006 6:57 PM
To: Suresh Rajagopalan
Cc: toasters@mathworks.com
Subject: RE: Estimating Aggregate IOPS
There is a ceiling in performance gain after a certain number of disks -
it's well below 112, but I'm pretty sure it's above 56 (though not much
above it). At that point, the 'curve' starts to go back down again.
However, RAID-DP and RAID-4 (its poor, homely cousin) are very fast, and
the filer has always handled the XORs very well for the parity
calculations.
A better question/way to position this might be what type of
workload/performance you expect/require. While the disks are capable of
about 120 IOPS for 10k and 180 IOPS for 15k disks, the latencies begin
to jump at the end of the scale - if your workload requires lower
latencies, it doesn't much matter what the disk is capable of (Exchange
is a very good example of this - 80 IOPS for 10k drives is about the max
you'd want to go).
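To put rough numbers on that, here's a sketch of picking a per-disk IOPS
budget from how latency-sensitive the workload is. The 120/180 ceilings
and the 80-IOPS Exchange-style figure come from the paragraph above; the
15k latency-sensitive value is my own assumption, purely for illustration:

    # Pick a per-disk IOPS budget based on workload latency sensitivity.
    # Rough rules of thumb, not measured values.
    DISK_CEILING = {"10k": 120, "15k": 180}       # near saturation, latency climbs
    LATENCY_SENSITIVE = {"10k": 80, "15k": 120}   # 15k figure is an assumption

    def per_disk_budget(rpm_class, latency_sensitive):
        table = LATENCY_SENSITIVE if latency_sensitive else DISK_CEILING
        return table[rpm_class]

    # e.g. an Exchange-style workload on 10k drives:
    print(per_disk_budget("10k", latency_sensitive=True))   # -> 80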
Glenn
-----Original Message-----
From: owner-toasters@mathworks.com [mailto:owner-toasters@mathworks.com]
On Behalf Of Suresh Rajagopalan
Sent: Wednesday, October 04, 2006 3:56 PM
Cc: toasters@mathworks.com
Subject: RE: Estimating Aggregate IOPS
Let's assume a 4 KB I/O size. Disks are currently 10k rpm.
Are you saying that NTAP's implementation of RAID4 (or RAID-DP) gives a
linear performance increase with spindles (almost like RAID-0), with no
penalty for the RAID4 or RAID-DP?
That is, aggregate IOPS is pretty much (N * disk-iops), with no penalty
for RAID4 or RAID-DP?
Thanks
Suresh
-----Original Message-----
From: Blake Golliher [mailto:thelastman@gmail.com]
Sent: Wednesday, October 04, 2006 12:41 PM
To: Suresh Rajagopalan
Cc: toasters@mathworks.com
Subject: Re: Estimating Aggregate IOPS
How fast are the spindles, and what size is your I/O? Generally, you
can assume 110 IOPS per 10k RPM disk, and 180 per 15k RPM disk. This is
also assuming a 10ms latency for each op. And subtract 2 disks per raid
group for RAID-DP (if doing writes; don't if you are doing pure reads).
So with around 50 disks (after you subtract the RAID-DP
overhead) you can expect around 5500 IOPS from that set of disks
(assuming 10k rpm disks). For the 112-disk aggregate, you can expect 98
spindles and 10780 IOPS. I'm assuming 4 KB I/Os and 10ms latency.
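If it helps, here's a quick back-of-the-envelope script for that math - a
sketch only, assuming 110 IOPS per disk, the raid group size of 16 from
the original question, and two parity disks per group (the 56-disk case
was rounded up to ~50 usable spindles above):

    # Rough RAID-DP spindle-count IOPS estimate -- a sketch, not a guarantee.
    # Assumes 2 parity disks per raid group and ~110 IOPS per 10k rpm disk
    # at a 4 KB op size and ~10ms latency, per the figures in this thread.
    import math

    def estimate_iops(total_disks, raid_group_size=16, per_disk_iops=110):
        raid_groups = math.ceil(total_disks / raid_group_size)
        data_disks = total_disks - 2 * raid_groups   # drop dual parity per group
        return data_disks, data_disks * per_disk_iops

    for disks in (56, 112):
        spindles, iops = estimate_iops(disks)
        print(f"{disks} disks -> {spindles} data spindles, ~{iops} IOPS")

    # 56 disks  -> 48 data spindles, ~5280 IOPS (close to the ~5500 above)
    # 112 disks -> 98 data spindles, ~10780 IOPS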
Hope that helps,
-Blake
On 10/4/06, Suresh Rajagopalan <SRajagopalan@williamoneil.com> wrote:
> Given a disk IOPS of 100, I'd like to estimate total aggregate IOPS for
> the following cases:
>
> 1) 56 disks, 1 aggregate, RAID-DP size 16
> 2) 112 disks, 1 aggregate, RAID-DP size 16
>
> I'm only interested in the total raw disk IOPS available in each case,
> not considering the filer head. For example, we know that
> RAID-0 with 56 disks @100 would yield 5600 IOPS.
>
> I don't know how to do this calculation with Data ONTAP's implementation
> of RAID4 or RAID-DP.
>
> Any assistance would help.
>
> Thanks
> Suresh
>