In Perf Advisor, under "Overall IOPS by Op Type", there are read, write, and other. Does anyone know what "other_ops" consists of?
From the ever helpful Help tool on perf advisor:
Volume object
other_ops Number of other operations per second to the volume
Thanks.
Jeff Kennedy Qualcomm, Incorporated QCT Engineering Compute 858-651-6592
"I cannot undertake to lay my finger on that article of the Constitution which granted a right to Congress of expending, on objects of benevolence, the money of their constituents." -James Madison on the appropriation of $15,000 by Congress to help French refugees
Hi Jeff,
On Wed, Mar 17, 2010 at 02:30, Kennedy, Jeffrey <jkennedy@qualcomm.com> wrote:
If my memory serves me correctly, other ops are any ops that are not reads or writes, such as file locking (NFS/CIFS), reservations, etc.
Greetings,
Nils
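For anyone who wants to watch that counter directly rather than through Perf Advisor, the 7-mode stats command can usually pull it per volume. This is only a sketch: "myvol" is a placeholder for your volume name, and the exact counter path may vary by ONTAP release.

```shell
# On the filer console (7-mode sketch; "myvol" is hypothetical):
# read, write, and other ops per second for one volume
stats show volume:myvol:read_ops volume:myvol:write_ops volume:myvol:other_ops
```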
I'm using the free tool CrystalDiskMark to do some I/O comparison between local disk and our filers. On at least one system (SAN connected), the local disk (6 disks in RAID1) consistently comes out ahead in both read and write. The filer is lightly loaded, and this is on a 56-disk aggregate. I'm kind of stumped on this one, and would like to know:
a) Are there any other commonly used benchmarks which I can try with the filers?
b) This is on a 2G FC SAN. How much improvement can I expect with 4G or 8G?
Thanks
Suresh
Suresh,
Performance benchmarking is a science that involves many variables. I am not familiar with CrystalDiskMark but I just downloaded the source for 3.0 RC2 and will have a look to see how applicable it could be to a filer vs local disk comparison. Can you add some more details about your configuration? (any options you run with the test, specs/model of the server including controller/RAID card(s), OS on the server, disk model in the server, disks in the filer, model of the filer, ONTAP rev, etc). A lot of detail is going to be required to make any headway or recommendations for a valid test.
Thank you,
Tim
From: owner-toasters@mathworks.com [mailto:owner-toasters@mathworks.com] On Behalf Of Suresh Rajagopalan Sent: Friday, March 19, 2010 8:55 PM To: Toasters List Subject: I/O benchmarking
I ran the test with default settings (100 MB file, 5 tests: sequential, 512K random, and 4K random). Tests were done on a DL785 G6 with 6 disks in RAID1. I believe the controller is a P400. The HBA is an Emulex connected to a 6030 filer running 7.2.6.1. This particular LUN is on a 56-disk aggregate; there are about 140 disks on that filer. I will post some numbers later on.
Suresh
From: Timothy Naple [mailto:tnaple@BERKCOM.com] Sent: Friday, March 19, 2010 9:21 PM To: Suresh Rajagopalan Cc: Toasters List Subject: RE: I/O benchmarking
Suresh,
Some critical information is the model of the disks in both the filer and server as well as the cache in the server's RAID controller which I can lookup if you confirm the model. If you want to forward me an autosupport from the filer that would answer a ton of questions. Is the server's FC HBA connected via a switch to the filer or directly to a target port on the filer? Which driver are you using on the Emulex in the server and which model is it? Any multipathing? I can take these offline if you don't want to cc the list with all this info and then just report back when we figure this out.
Thank you,
Tim
From: Suresh Rajagopalan [mailto:SRajagopalan@williamoneil.com] Sent: Friday, March 19, 2010 9:57 PM To: Timothy Naple Cc: Toasters List Subject: RE: I/O benchmarking
How much memory is in the local host? You might be caching the entire workload in the fs cache on the host. Can you try a larger working set? I usually try to shoot for a data set 3x the memory footprint of the local system. That way you are sure to have flushes to disk.
Of course, you should also look to model your benchmark after your real-world workload as much as possible. How close is your benchmark to your real workload?
-Blake
Typed with my thumbs!
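Blake's 3x sizing rule can be sketched in a few lines of shell on a Linux host (my sketch, not part of the thread; it parses /proc/meminfo, so it is Linux-only):

```shell
# Size the benchmark working set at 3x physical RAM so the host's
# page cache cannot hold it (MemTotal is reported in kB).
mem_kb=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)
ws_mb=$(( mem_kb * 3 / 1024 ))
echo "use a test file of at least ${ws_mb} MB"
```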
On Mar 19, 2010, at 10:25 PM, "Timothy Naple" tnaple@BERKCOM.com wrote:
64GB. But both disks are local, i.e. the filer disk is also an NTFS-formatted LUN on the local host, so the fs cache should affect a local SAS disk as well as a disk connected via the HBA.
From: Blake Golliher [mailto:thelastman@gmail.com] Sent: Friday, March 19, 2010 11:35 PM To: Timothy Naple Cc: Suresh Rajagopalan; Toasters List Subject: Re: I/O benchmarking
Well, it depends on whether you want to benchmark how well your system moves blocks around in memory or how well the storage system can serve your data. If you run sysstat -sx 1 while you are running your test, do you see the filer actually moving any bits on the wire?
Maybe try testing a write-heavy workload? It's usually helpful to drop caches between tests to get around local cache effects. In Linux 2.6.17 and above you can do the following.
To free pagecache:
    echo 1 > /proc/sys/vm/drop_caches
To free dentries and inodes:
    echo 2 > /proc/sys/vm/drop_caches
To free pagecache, dentries and inodes:
    echo 3 > /proc/sys/vm/drop_caches
Note the filer still has a fair amount of cache that won't be dropped by doing this.
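A small helper along those lines, run between benchmark passes (a sketch; the non-root fallback message is my addition, since writing to /proc/sys/vm/drop_caches requires root):

```shell
# Drop pagecache, dentries and inodes between benchmark runs.
drop_caches() {
    sync    # write dirty pages back first, or they outlive the drop
    if [ "$(id -u)" -eq 0 ]; then
        echo 3 > /proc/sys/vm/drop_caches
    else
        echo "skipped: need root to write /proc/sys/vm/drop_caches"
    fi
}

drop_caches
```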
Hope this helps, good luck with your tests, and report back the results. I've never used the CrystalDiskMark test before. Thanks for sharing!
-Blake
On Sat, Mar 20, 2010 at 7:19 AM, Suresh Rajagopalan SRajagopalan@williamoneil.com wrote:
I don't know what CrystalDiskMark offers that others don't, but iozone has proven to be very flexible and has options that force file handle closes, so you avoid the cache issue altogether.
Jeff Kennedy Qualcomm, Incorporated QCT Engineering Compute 858-651-6592
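For reference, one plausible iozone invocation along those lines (a sketch, not a tuned recipe: the mount path and 200g size are placeholders, sized for a host with tens of GB of RAM). The -c flag includes close() in the timing, which is the file-handle-close option Jeff mentions, and -e includes fsync in write timing.

```shell
# Hypothetical iozone run against a filer-backed mount:
#   -s 200g      file size well beyond host RAM, to defeat the fs cache
#   -r 4k        4 KB records, comparable to CrystalDiskMark's 4K test
#   -i 0 -i 2    sequential write/rewrite plus random read/write
#   -c           include close() in timing (forces handle closes)
#   -e           include flush (fsync) in write timing
iozone -s 200g -r 4k -i 0 -i 2 -c -e -f /mnt/filer_lun/iozone.tmp
```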
I don't know about CrystalDiskMark either, but I can definitely agree with Jeff on IOzone. IOzone is VERY useful and configurable. I've used it in the past to benchmark our WAN latencies for remote filers, local disk, as well as intranet filer performance with cache flushes and file handle drops. The output is nice and easily graphable too...
On Sat, Mar 20, 2010 at 6:33 PM, Kennedy, Jeffrey <jkennedy@qualcomm.com> wrote: