I'm not familiar with CrystalDiskMark either, but I can definitely agree with Jeff on IOZone.  IOZone is VERY useful and configurable.  I've used it in the past to benchmark WAN latency to remote filers, local disk, and intranet filer performance with cache flushes and file handle drops.  The output is nice and easily graphable too...
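
For example, a run along these lines (wrapped in Python just for illustration; the flag meanings are from memory, so double-check them against iozone -h on your build, and the test-file path and sizes are placeholders) times close() and fsync() along with the I/O and writes an Excel-compatible report you can graph:

#!/usr/bin/env python
# Sketch: an iozone run that times fsync() and close() so the host cache
# can't inflate the numbers, with a spreadsheet-friendly report at the end.
# Flag meanings are from memory -- verify against "iozone -h" on your build.
import subprocess

cmd = [
    "iozone",
    "-e",              # include flush (fsync/fflush) in the timings
    "-c",              # include close() in the timings (drops the file handle)
    "-i", "0",         # test 0: write/rewrite
    "-i", "1",         # test 1: read/re-read
    "-i", "2",         # test 2: random read/write
    "-r", "4k",        # record size
    "-s", "8g",        # file size; size this well past host RAM (see the 3x rule below)
    "-f", "/mnt/filer_lun/iozone.tmp",  # hypothetical path on the LUN under test
    "-b", "iozone_results.xls",         # Excel-compatible output for graphing
]
subprocess.check_call(cmd)

If your platform supports it, the -I option (direct I/O) is another way to keep the host cache out of the picture entirely.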

On Sat, Mar 20, 2010 at 6:33 PM, Kennedy, Jeffrey <jkennedy@qualcomm.com> wrote:

I don’t know what this CrystalDiskMark offers that others don’t, but iozone has proven to be very flexible and has options that force file handle closes, so you avoid the cache issue altogether.

 

Jeff Kennedy

Qualcomm, Incorporated

QCT Engineering Compute

858-651-6592

 

"I cannot undertake to lay my finger on that article of the Constitution

which granted a right to Congress of expending, on objects of benevolence,

the money of their constituents."

-James Madison on the appropriation of $15,000 by Congress to help French refugees

 

From: owner-toasters@mathworks.com [mailto:owner-toasters@mathworks.com] On Behalf Of Blake Golliher
Sent: Friday, March 19, 2010 11:35 PM
To: Timothy Naple
Cc: Suresh Rajagopalan; Toasters List
Subject: Re: I/O benchmarking

 

How much memory is in the local host?  You might be caching the entire workload in the fs cache on the host.  Can you try a larger working set?  I usually try to shoot for a data set 3x the memory footprint of the local system.  That way you are sure to have flushes to disk.  
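
A minimal sketch of that sizing rule, assuming a Linux host where physical RAM can be read from /proc/meminfo:

#!/usr/bin/env python
# Sketch: pick a benchmark file size ~3x physical RAM so the run can't be
# satisfied from the host filesystem cache.  Assumes Linux (/proc/meminfo).

def mem_total_kb():
    with open("/proc/meminfo") as f:
        for line in f:
            if line.startswith("MemTotal:"):
                return int(line.split()[1])  # reported in kB
    raise RuntimeError("MemTotal not found in /proc/meminfo")

ram_mb = mem_total_kb() // 1024
print("Host RAM: %d MB" % ram_mb)
print("Suggested working set: %d MB (3x RAM)" % (ram_mb * 3))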

 

Of course, you should also look to model your benchmark after your real-world workload as much as possible.  How close is your benchmark to your real workload?

 

-Blake

Typed with my thumbs!


On Mar 19, 2010, at 10:25 PM, "Timothy Naple" <tnaple@BERKCOM.com> wrote:

Suresh,

 

Some critical information would be the model of the disks in both the filer and the server, as well as the cache on the server’s RAID controller, which I can look up if you confirm the model.  If you want to forward me an autosupport from the filer, that would answer a ton of questions.  Is the server’s FC HBA connected to the filer via a switch or directly to a target port on the filer?  Which driver are you using on the Emulex in the server, and which model is it?  Any multipathing?  I can take these offline if you don’t want to cc the list with all this info, and then just report back when we figure this out.

 

Thank you,

Tim

 

From: Suresh Rajagopalan [mailto:SRajagopalan@williamoneil.com]
Sent: Friday, March 19, 2010 9:57 PM
To: Timothy Naple
Cc: Toasters List
Subject: RE: I/O benchmarking

 

I ran the test with the default settings (100MB file, 5 runs -- sequential, 512k random, and 4k random).  Tests were done on a DL785 G6 with 6 disks in RAID 1.  I believe the controller is a P400.  The HBA is an Emulex connected to a 6030 filer running 7.2.6.1.  This particular LUN is on a 56-disk aggregate; there are about 140 disks on that filer.  I will post some numbers later on.

 

Suresh

 

 

From: Timothy Naple [mailto:tnaple@BERKCOM.com]
Sent: Friday, March 19, 2010 9:21 PM
To: Suresh Rajagopalan
Cc: Toasters List
Subject: RE: I/O benchmarking

 

Suresh,

 

Performance benchmarking is a science that involves many variables.  I am not familiar with CrystalDiskMark, but I just downloaded the source for 3.0 RC2 and will have a look to see how applicable it could be to a filer vs. local disk comparison.  Can you add some more details about your configuration?  (Any options you run with the test, specs/model of the server including the controller/RAID card(s), the OS on the server, the disk model in the server, the disks in the filer, the model of the filer, the ONTAP rev, etc.)  A lot of detail is going to be required to make any headway or recommendations for a valid test.

 

Thank you,

Tim

 

From: owner-toasters@mathworks.com [mailto:owner-toasters@mathworks.com] On Behalf Of Suresh Rajagopalan
Sent: Friday, March 19, 2010 8:55 PM
To: Toasters List
Subject: I/O benchmarking

 

I’m using the free tool CrystalDiskMark to do some I/O comparison between local disk and our filers. On at least one system (SAN connected), the local disk (6 disks in RAID 1) consistently comes out ahead in both read and write. The filer is lightly loaded, and this is on a 56-disk aggregate.  I’m kind of stumped on this one, and would like to know:

 

a)  Are there any other commonly used benchmarks which I can try with the filers?

b)  This is on a 2G FC SAN.  How much improvement can I expect with 4G or 8G?

 

Thanks

Suresh