Easy one.
If it dropped by half with two streams, adjust your kernel TCP slot count.
Sent from my iPhone
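For reference, a rough sketch of where that knob lives on RHEL 5 (the value shown is only illustrative, and the change generally only applies to mounts made after it is in effect):

### Checking/raising the sunrpc slot table (illustrative):
# current value (sunrpc module must already be loaded)
cat /proc/sys/sunrpc/tcp_slot_table_entries
# set it for the running kernel
sysctl -w sunrpc.tcp_slot_table_entries=128
# make it persistent across reboots
echo "options sunrpc tcp_slot_table_entries=128" >> /etc/modprobe.d/sunrpc.conf
###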
On May 19, 2012, at 11:46 AM, Dan Burkland <dburklan@NMDP.ORG> wrote:
I know dd isn't the best tool since it is a single-threaded application and in no way represents the workload that Oracle will impose. However, I thought it would still give me a decent ballpark figure for throughput. I tried block sizes of 64k, 128k, and 1M (just to see) and got somewhat more promising results:
# dd if=/dev/zero of=/mnt/testfile bs=1M count=5120
5120+0 records in
5120+0 records out
5368709120 bytes (5.4 GB) copied, 26.6878 seconds, 201 MB/s
If I run two of these dd sessions at once, the throughput figure above gets cut in half (each dd session reports around 100 MB/s).
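For reference, a rough sketch of that parallel test (file names and stream count are just examples; each stream writes its own file):

### Two concurrent dd streams (illustrative):
for i in 1 2; do
  dd if=/dev/zero of=/mnt/testfile$i bs=1M count=5120 &
done
wait
###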
As far as the switch goes, I have not checked it yet; however, I did notice that flow control is set to full on the 6080 10GbE interfaces. We are also running jumbo frames on all of the involved equipment.
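For reference, a quick way to confirm those settings on the host side (eth2 is just a placeholder for the 10GbE interface name):

### Checking flow control and MTU (illustrative):
ethtool -a eth2               # pause (flow control) parameters
ifconfig eth2 | grep -i mtu   # should show MTU:9000 if jumbo frames are active
###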
As far as the RHEL OS tweaks go, here are the settings that I have changed on the system:
### /etc/sysctl.conf:
# 10GbE Kernel Parameters
net.core.rmem_default = 262144
net.core.rmem_max = 16777216
net.core.wmem_default = 262144
net.core.wmem_max = 16777216
net.ipv4.tcp_rmem = 4096 262144 16777216
net.ipv4.tcp_wmem = 4096 262144 16777216
net.ipv4.tcp_window_scaling = 1
net.ipv4.tcp_syncookies = 0
net.ipv4.tcp_timestamps = 0
net.ipv4.tcp_sack = 0
#
###
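For reference, the usual way to apply and spot-check those values (a sketch, assuming the file above is already in place):

### Applying and verifying the sysctl settings:
sysctl -p                        # load /etc/sysctl.conf
sysctl net.core.rmem_max         # spot-check a single value
cat /proc/sys/net/ipv4/tcp_rmem  # should show 4096 262144 16777216
###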
### /etc/modprobe.d/sunrpc.conf:
options sunrpc tcp_slot_table_entries=128
###
### Mount options for the NetApp test NFS share:
rw,vers=3,rsize=65536,wsize=65536,hard,proto=tcp,timeo=600,retrans=2,sec=sys
###
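For reference, a mount command using those options (the filer name and export path are hypothetical):

### Example mount (hypothetical filer name and export):
mount -t nfs -o rw,vers=3,rsize=65536,wsize=65536,hard,proto=tcp,timeo=600,retrans=2,sec=sys netapp6080:/vol/oratest /mnt
###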
Thanks again for all of your quick and detailed responses!
Dan
On 5/19/12 1:08 PM, "Robert McDermott" <rmcdermo@fhcrc.org> wrote:
Your block size is only 1K; try increasing the block size and the throughput will increase. 1K I/Os would generate a lot of IOPS with very little throughput.
-Robert
Sent from my iPhone
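Back-of-the-envelope on that point (rough numbers, ignoring any client-side coalescing): 160 MB/s at 1 KB per write works out to roughly 160,000 write calls per second, while about 200 MB/s at 1 MB per write is only around 200 calls per second, so per-call overhead rather than link speed dominates the small-block test.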
On May 19, 2012, at 10:48, Dan Burkland <dburklan@NMDP.ORG> wrote:
Hi all,
My company just bought some Intel X520 10GbE cards, which I recently installed into our Oracle EBS database servers (IBM 3850 X5s running RHEL 5.8). As the "Linux guy," I have been tasked with getting these servers to communicate with our NetApp 6080s via NFS over the new 10GbE links. I have everything working; however, even after tuning the RHEL kernel I am only getting 160 MB/s writes using the "dd if=/dev/zero of=/mnt/testfile bs=1024 count=5242880" command. For those of you who run 10GbE to your toasters, what write speeds are you seeing from your 10GbE-connected servers? Did you have to do any tuning in order to get the best results possible? If so, what did you change?
Thanks!
Dan
Toasters mailing list
Toasters@teaparty.net
http://www.teaparty.net/mailman/listinfo/toasters