There are a few things that are fundamentally wrong with this test.
First, the file you are testing with is smaller than physical memory, so this becomes a test of NTFS and disk caching rather than of real-world performance. I would expect the test file to be several times larger than the physical memory in the machine. The 100MB write times show this: the disk controller looks like it is holding the file in cache and reporting completion back to the host, hence the implausible figure of roughly 300MB/sec for writes. That is simply not what happens in the real world unless you are very lucky with your data and everyone is writing to the same 100MB. For the 1000MB file writes, the other question is how much data the test software actually pushes to the disk, as much of that could again be held in cache on the disk controller.
If you are looking to test real-world performance, you need software that runs a mix of workloads all at the same time, using mixed-size reads and writes, and the test should last for many minutes to ensure that all the caches are saturated.
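To illustrate the idea (this is only a rough hypothetical sketch, not a substitute for a proper benchmarking tool): a script that hammers a single file with a 50/50 mix of random reads and writes at several block sizes for a fixed duration, fsyncing writes to defeat the page cache. For a meaningful result the file would need to be several times larger than physical RAM and the run many minutes long.

```python
import os
import random
import time

def mixed_io_benchmark(path, file_size, duration,
                       block_sizes=(4096, 65536, 524288)):
    """Run a mix of random reads and writes against `path` for
    `duration` seconds and return aggregate throughput in MB/sec.
    For real-world numbers, `file_size` should be several times
    physical RAM so caches cannot hold the working set."""
    # Pre-create the test file at the requested size.
    with open(path, "wb") as f:
        f.truncate(file_size)

    bytes_moved = 0
    deadline = time.time() + duration
    with open(path, "r+b") as f:
        while time.time() < deadline:
            bs = random.choice(block_sizes)        # mixed block sizes
            f.seek(random.randrange(0, file_size - bs))
            if random.random() < 0.5:              # 50/50 read/write mix
                bytes_moved += len(f.read(bs))
            else:
                f.write(os.urandom(bs))
                f.flush()
                os.fsync(f.fileno())               # force writes to disk
                bytes_moved += bs
    return bytes_moved / duration / 1e6

if __name__ == "__main__":
    # Illustrative values only; real runs should use a file several
    # times larger than RAM and a duration of many minutes.
    rate = mixed_io_benchmark("testfile.bin", 64 * 1024 * 1024, 10)
    print("Mixed workload: %.1f MB/sec" % rate)
```

A single aggregate number still hides a lot (latency, read vs write split), but a sustained mixed run like this at least exercises seeks and defeats the caching that makes the short sequential numbers look so good.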
In a typical environment, the only time a single host really hits a SAN hard is during backups, since that is when it is reading sequentially. The rest of the time it is usually the disks that are being hit hard with seeks. It could be that your planned application doesn't behave this way, but it would be unusual if that were the case.
The peak figures of 160-170MB/sec for a filer volume look like a 2Gb SAN link maxed out. You could confirm this by watching the SAN switch ports while you run the test and seeing what traffic levels they reach. The other question is what else is connected to the SAN and to the filer, and what workload those hosts are running while you are doing the tests.
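The 2Gb ceiling is easy to check with back-of-the-envelope arithmetic: FC link speeds through 8GFC use 8b/10b encoding, so the payload ceiling is the line rate times 8/10, before any frame or protocol overhead (which is why ~200MB/sec per direction is the commonly quoted figure for 2GFC).

```python
# Approximate payload ceiling of a Fibre Channel link: line rate in
# Gbaud times the 8b/10b encoding efficiency, converted to MB/sec.
# Ignores FC frame/protocol overhead, so real throughput is lower.
def fc_max_mb_per_sec(gbaud):
    return gbaud * 1e9 * (8.0 / 10.0) / 8 / 1e6

for name, gbaud in [("2GFC", 2.125), ("4GFC", 4.25), ("8GFC", 8.5)]:
    print("%s: ~%d MB/sec" % (name, fc_max_mb_per_sec(gbaud)))
```

So a sustained 160-170MB/sec on a 2Gb fabric is already close to the wire, and moving to 4Gb or 8Gb would roughly double or quadruple the ceiling, provided the disks and filer can keep up.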
What is the actual workload you expect this system to perform? Once you know that, you can start to find performance tests that mirror that workload, rather than testing it the way you would a PC.
On 22 Mar 2010, at 00:27, Blake Golliher wrote:
Do you have any data from the filer during the benchmark run?
-Blake
On Mar 21, 2010, at 4:51 PM, "Suresh Rajagopalan" SRajagopalan@williamoneil.com wrote:
Here are some numbers from CrystalDiskMark; its maximum file size is 1000MB. The host has 64GB of RAM and eight six-core processors.
DL785G6, 100MB file on local disk
a. Seq Read 216.5MB/sec Write 78.4MB/sec
b. Random 512k Read 58.14MB/sec Write 301.9MB/sec
c. Random 4k Read 26.2MB/sec Write 41.3MB/sec
DL785G6 100MB file on Filer LUN (NTFS)
a. Seq Read 175.1MB/sec Write 100.7MB/sec
b. Random 512k Read 103.7MB/sec Write 71.44MB/sec
c. Random 4k Read 15.7MB/sec Write 7.6MB/sec
DL785G6 1000MB file on local disk
a. Seq Read 236.8MB/sec Write 92.7MB/sec
b. Random 512k Read 49.71MB/sec Write 217.2MB/sec
c. Random 4k Read 1.33MB/sec Write 20.63MB/sec
DL785G6 1000MB file on filer LUN (NTFS)
a. Seq Read 164MB/sec Write 98.8MB/sec
b. Random 512k Read 101.1MB/sec Write 63.2MB/sec
c. Random 4k Read 13.9MB/sec Write 7.8MB/sec
Suresh
From: Timothy Naple [mailto:tnaple@BERKCOM.com]
Sent: Friday, March 19, 2010 9:21 PM
To: Suresh Rajagopalan
Cc: Toasters List
Subject: RE: I/O benchmarking
Suresh,
Performance benchmarking is a science that involves many variables. I am not familiar with CrystalDiskMark, but I just downloaded the source for 3.0 RC2 and will have a look at how applicable it could be to a filer vs. local disk comparison. Can you add some more details about your configuration (any options you ran the test with, specs/model of the server including controller/RAID card(s), OS on the server, disk models in the server, disks in the filer, model of the filer, ONTAP rev, etc.)? A lot of detail will be required to make any headway or to recommend a valid test.
Thank you,
Tim
From: owner-toasters@mathworks.com [mailto:owner-toasters@mathworks.com] On Behalf Of Suresh Rajagopalan
Sent: Friday, March 19, 2010 8:55 PM
To: Toasters List
Subject: I/O benchmarking
I’m using the free tool CrystalDiskMark to do some I/O comparisons between local disk and our filers. On at least one system (SAN-connected), the local disk (6 disks in RAID 1) consistently comes out ahead on both reads and writes. The filer is lightly loaded, and this is on a 56-disk aggregate. I’m kind of stumped on this one, and would like to know:
a) Are there any other commonly used benchmarks which I can try with the filers?
b) This is on a 2G FC SAN. How much improvement can I expect with 4G or 8G?
Thanks
Suresh