You can't know if you got it right unless you have an exact simulation of your particular ops mix and traffic patterns. Since you don't have that, you basically have to use rules of thumb, guess, and adjust when needed. Your environment changes over time, too, so what works one day may not work another.
Mmm, all of which makes me lean toward an analytical model, as it should be more powerful in dealing with these imponderables: plug in the numbers and see how it comes out. Real testing and simulation would make useful checks against such a model, natch.
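As a rough illustration of what "plug in the numbers" might look like, here is a toy sketch in Python; the per-disk IOPS figure and the RAID write penalty are assumptions chosen for illustration, not measurements of any particular filer or workload.

# A toy "plug in the numbers" model: estimate the front-end IOPS a disk
# group can sustain from its size, a read/write mix, and a RAID write
# penalty. All constants here are illustrative assumptions.

def effective_iops(disks, iops_per_disk, read_fraction, write_penalty):
    raw = disks * iops_per_disk                 # back-end IOPS available
    write_fraction = 1.0 - read_fraction
    # A logical read costs one back-end I/O; a logical write costs
    # write_penalty back-end I/Os (parity-update overhead).
    return raw / (read_fraction + write_fraction * write_penalty)

# Example: 14 data disks at ~120 IOPS each, a 70/30 read/write mix, and a
# write penalty of 4 (the classic parity-RAID figure, assumed here).
print(effective_iops(disks=14, iops_per_disk=120, read_fraction=0.7, write_penalty=4))

Of course, the interesting work is in where those constants come from, which is exactly where the testing and simulation mentioned above serve as checks.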
If you can excuse my two cents' worth - while I admire your quest for the perfect analytical model for a RAID system, it reminds me of the argument over "which is the best operating system". The answer is "for what?" To me there seem to be too many variables to put into an equation. For instance, most people deal with files of all differing sizes - so do you account for mean, median, or extreme file sizes? There is also the issue of differing file systems and their performance characteristics - NFS (all versions), CIFS, etc. And what about the number of concurrent users? Again, means or extremes? Chances are they could be using different file systems and file sizes too. And there are more factors to throw into the mix...
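Just to illustrate how quickly those variables multiply, here is a quick sketch; the buckets are made up purely for the sake of the example.

# Hypothetical coarse buckets for the variables mentioned above.
from itertools import product

file_sizes  = ["small", "medium", "large", "huge"]
protocols   = ["NFSv2", "NFSv3", "CIFS"]
concurrency = ["light", "typical", "peak"]
access_mix  = ["read-heavy", "mixed", "write-heavy"]

combos = list(product(file_sizes, protocols, concurrency, access_mix))
print(len(combos))  # 108 distinct workload cases from just four coarse variables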
I would suggest the better question is "what is the wrong way to configure a filer?" That gives you rules for eliminating whatever hinders a filer's efficiency in all situations, and you can then fine-tune based on what you think are the relevant variables for your environment.
Of course, that said, it would be great if you did find the unified theory of filers ;)
-----------
Jay Orr
Systems Administrator
Fujitsu Nexion Inc.
St. Louis, MO