Hi!
Anyone here have advice on distributed NFS benchmarking? I have a NetApp GX system and a few other distributed NFS clusters that I'm trying to figure out how to benchmark. I have a fair number of clients, Linux and BSD, and I'm just looking for what others do in this situation. My needs range from HPC-type workloads, to a generic NFS workload for web hosting, to an OLTP database on 4 nodes (Oracle 10g RAC).
I know no synthetic benchmark is better than the real application. My group at my company is charged with evaluating storage before we bother the application guys with running their apps on it.
I've explored iozone, but it seems to only coordinate runs across hosts; I then have to aggregate the data on my own. I'm looking for a more integrated tool: one where I can load a binary onto each client, start it from a control host, have it run a workload across a list of systems, and have them report performance data back to the control host. Workloads should include lots of small file creates/reads/accesses/deletes as well as large file stuff (like iozone).
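To make the ask concrete, here's a rough sketch (Python, with made-up host names and mount point) of the kind of control-host coordination I mean. It just sshes an iozone run to each client in turn and dumps the raw output back, which is exactly the manual aggregation I'd like the tool to handle for me. It assumes password-less ssh and iozone already installed on every client.

    #!/usr/bin/env python3
    # Fan an iozone run out to a list of clients over ssh from a control
    # host, then collect each client's output back here for aggregation.
    # Host names and the mount point below are placeholders.
    import subprocess

    CLIENTS = ["client01", "client02", "client03"]  # placeholder host names
    MOUNT = "/mnt/nfs/bench"                        # placeholder NFS mount

    def run_iozone(host):
        # -a: automatic mode, -s/-r: file size and record size,
        # -f: per-host scratch file so clients don't stomp on each other
        cmd = "iozone -a -s 512m -r 64k -f {}/iozone.{}".format(MOUNT, host)
        result = subprocess.run(["ssh", host, cmd],
                                capture_output=True, text=True)
        return host, result.stdout

    if __name__ == "__main__":
        # Runs serially; a real tool would kick the clients off in parallel
        # and roll the numbers up into one report.
        for host, output in (run_iozone(h) for h in CLIENTS):
            print("==== {} ====".format(host))
            print(output)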
I'm thinking I'll probably have to task a tools team with this, but I was hoping for something from the OSS community. I'd pay for it as well, but source access would be a nice feature (for my organization, not for me -- it's all moon man language to me).
Any advice is appreciated, and if you reply you may do so off-list. I'll compile the answers and reply to the list. I'll also document my findings on my storage administration blog, filerjedi.com.
Thanks! -Blake