It's because of two things.
1) Raid group size is not directly tied to performance itself. The total number of data drives per _aggregate_ is. A system with four 4+2 raid groups in an aggregate and a system with a single 16+2 raid group are within a verrrry narrow margin of each other performance-wise, the only difference being slightly more overhead for each additional raid stripe to manage in the 4x(4+2) case. Raid groups do not drive IO; they provide resiliency. (The sketch after point 2 runs those numbers.)
2) What value do you want? (not directed at Peter, just in general) I could lay it out in zebras per railroad car... but that's not your workload, is it?
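To make the arithmetic behind point 1 concrete, here's a minimal Python sketch -- my own illustration, not a NetApp tool, and the per-drive IOPS figure is an assumed placeholder. It just shows that both layouts expose the same 16 data drives, and therefore roughly the same IOPS ceiling:

# Rough comparison of the two layouts above -- a sketch, not a sizing tool.
PER_DRIVE_IOPS = 220   # assumption: ballpark random IOPS for one spindle

def aggregate_capability(raid_groups, data_per_rg, parity_per_rg=2):
    """Data spindles, total spindles, and a crude random-IOPS ceiling."""
    data_drives = raid_groups * data_per_rg
    total_drives = raid_groups * (data_per_rg + parity_per_rg)
    return data_drives, total_drives, data_drives * PER_DRIVE_IOPS

# Four 4+2 raid groups vs. a single 16+2 raid group:
print(aggregate_capability(4, 4))    # (16, 24, 3520) -- 16 data drives
print(aggregate_capability(1, 16))   # (16, 18, 3520) -- same 16 data drives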
Top Down:

1] Aggregates provide physical performance. Size them for either throughput in MB/sec or physical IOs/sec. One is a want, the other is a need; your business planning determines that relationship. (A rough sizing sketch follows this list.)

2] Volumes provide user space to do work in. The workload within them is generally limited by the capability of the aggregate layer beneath them.

3] Raid groups provide firewalls of data protection within the multi-family unit called the aggregate. They don't inhibit the entry or exit of workload as long as the block everyone lives in is constructed and managed responsibly as you add units to it (reallocate, etc., because sometimes a tenant's workload leaves a mess behind).

4] Your unique dataset, with YOUR unique workload applied to it, will produce a unique result in every metric you could possibly measure and want NetApp to provide a blanket answer for.
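And a similarly rough sizing sketch for point 1] above -- again an illustration only, with assumed per-drive numbers you would replace with measurements from your own gear and workload:

# Back-of-the-envelope aggregate sizing -- all per-drive figures are assumptions.
PER_DRIVE_IOPS = 220   # assumed random IOPS per data spindle
PER_DRIVE_MBPS = 45    # assumed sequential MB/sec per data spindle

def drives_for_iops(target_iops):
    """Data drives needed to meet a random-IO requirement (ceiling division)."""
    return -(-target_iops // PER_DRIVE_IOPS)

def drives_for_throughput(target_mbps):
    """Data drives needed to meet a sequential-throughput requirement."""
    return -(-target_mbps // PER_DRIVE_MBPS)

# Size the aggregate for whichever is the actual need:
print(drives_for_iops(8000))         # -> 37 data drives
print(drives_for_throughput(1200))   # -> 27 data drives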
...see the problem with "we can't really say..."? I mean, other than more = mo-better.
Sent from my iThingie
On Oct 30, 2011, at 18:30, "Peter D. Gray" <pdg@uow.edu.au> wrote:
Surely there is no sensible answer to this question, which is why netapp refuse to quantify.