I am sure the FAS 3050 is a fine solution and yes, FlexVols are cool, but NFS on Linux is nowhere near as bad as you make it out to be.
On current distributions (SuSE SLES9 - which has the better Linux NFS - and RHEL), both UDP and TCP are supported and work just fine. There was some trouble with older 2.4 Linux kernels and TCP, but that's long since fixed. rsize and wsize up to 32768 are also supported on both UDP and TCP, at least on SuSE Linux. And jumbo frames work just fine with Linux NFS, as some of our benchmarks show (we have now gotten over 2,000 Megabytes - not Megabits - per second over NFS from a single Linux file system, read and write, using iozone). That's using a cluster file system to mount and export the same file system from multiple Linux nodes concurrently. With jumbo frames and two standard GigE ports, you get about 225 MB/sec per node. Need more bandwidth to or from a given file or file system? Just add more nodes.
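For reference, a typical Linux client mount that takes advantage of the larger transfer sizes over TCP might look something like this (the filer name and export path are placeholders, of course):

mount -o rsize=32768,wsize=32768,tcp,hard,intr filer:/vol/vol0 /mnt/filer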
It is true that the NFSv4 server is not entirely mainstream on Linux yet. The NFSv4 client and server are both being developed at the University of Michigan's Center for Information Technology Integration, with funding from Network Appliance, PolyServe, and IBM. You can track the progress here: http://www.citi.umich.edu/projects/nfsv4/ While v4 might not be a standard part of the enterprise distributions yet, this code runs, it passes interoperability tests with other NFSv4 implementations at the NFSv4 Bakeathons, and you can download it and run it for free on any Linux server with a suitable kernel. It may well be behind the NetApp implementation, but it is not in its infancy; in fact, NetApp has been very generous with both people and money in supporting the development and testing of NFSv4 on Linux.
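If you want to experiment with it, an NFSv4 mount on a suitably patched kernel is just a different filesystem type (server name and paths below are placeholders):

mount -t nfs4 server:/export /mnt/nfs4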
ckg
-----Original Message-----
From: owner-toasters@mathworks.com [mailto:owner-toasters@mathworks.com] On Behalf Of Tim
Sent: Friday, October 28, 2005 4:08 PM
To: Blake Golliher
Cc: ChazzCRH; toasters@mathworks.com
Subject: Re: NetApp 3050 vs Dell 6650
From a performance and stability standpoint, as I recall, the RHEL3 NFS
server only supports NFS over UDP reliably and only with a max rsize/wsize of 8192.
A Filer does UDP and TCP (the preferred method) and supports rsize & wsize of 32768, and in some cases 65536, over TCP.
Couple that with jumbo frames if your entire infrastructure is GigE and I would suspect it would outperform it hands down. Heck, probably without jumbo frames it would still be better.
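On the Linux side, enabling jumbo frames is just a matter of raising the interface MTU to match the filer and switches; eth1 here is a placeholder for whatever GigE interface you use:

ifconfig eth1 mtu 9000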
The NFSv4 server on RHEL 3/4 is still in its infancy (experimental?) compared to NetApp's.
The RHEL3/4 NFS client rocks, though. It supports NFS v3/v4, provided your version of RHEL supports it.
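An easy sanity check on the client is to look at what actually got negotiated; either of these should show the rsize/wsize and protocol in use:

nfsstat -m
grep nfs /proc/mounts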
Not to mention the awesome flexibility of Flexible Volumes, which are a breeze to administer....
--tmac
Blake Golliher wrote:
SIO from NetApp is a great tool for this. So is iometer if you want something from a non-vendor source, but NetApp also releases the source to SIO, so it's a pretty trustworthy tool to me. But I do like iozone's Excel graph output (you hearing that, NetApp?).
SIO reports IOPS and MB per second. You can vary the thread count and block size to better simulate the workload your current setup handles.
The first thing you have to do, and I always forget this, is create a file that's the size of, or larger than, the workload you are going to run sio against. In Solaris you can just use mkfile, but on Linux I just do a quick dd.
dd if=/dev/zero of=/mnt/netapp/root/test_sio_file bs=1024k count=100
which creates a 100MB file.
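For what it's worth, the Solaris mkfile equivalent of that dd would be something like:

mkfile 100m /mnt/netapp/root/test_sio_file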
Here's an example output, so you see what I'm talking about...
[golliher@admin.lab sio] sudo ./sio_ntap_freebsd 50 100 4k 20m 4 2 /mnt/netapp/root/test_sio_file
Version: 3.00
SIO_NTAP:
Inputs
    Read %: 50  Random %: 100  Block Size: 4096  File Size: 20971520  Secs: 4
    Threads: 2  File(s): /mnt/netapp/root/test_sio_file
Outputs
    IOPS: 162  KB/s: 647  IOs: 2763
Terminating threads ...
[golliher@admin.lab sio]
You can see more examples of how to run it in the man page, and I recommend doing that. As you can see, my run did half reads, half writes of random 4K I/Os to a file I specified. The test ran for 4 seconds with 2 threads against only 20MB of the target file.
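The numbers hang together, too: 162 IOPS of 4K I/O is about 648 KB/s, which matches the reported 647 KB/s. For a more realistic run you'd normally bump the duration, thread count, and footprint; with the same argument order (read%, random%, block size, file size, seconds, threads, file), something like this would hammer the whole 100MB file for five minutes with 8 threads:

sudo ./sio_ntap_freebsd 50 100 4k 100m 300 8 /mnt/netapp/root/test_sio_file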
Hope that helps.... -Blake
P.S. The README is out of date for SIO; the Makefile has full support for FreeBSD. Probably with a little tweaking, it'll run on Mac OS X...
On 10/28/05, ChazzCRH (sent by Nabble.com) lists@nabble.com wrote:
We are considering replacing our current NFS server, which is a Dell 6650 with quad Xeon MP 2.7 processors, 12GB RAM, and 4GB NICs running RHEL3, with a 3050 cluster.
We are running 7.2K SATA drives from Winchester Systems connected via U320 SCSI to the Dell today, and we would be running NetApp's 250GB SATA drives on the 3050 cluster.
NetApp posts IOPS as a performance metric, but I am unable to find anything like that for my Dell configuration, so figuring out the performance gain and justifying the money we would be saving is becoming very tough. I can look at reads and writes per second using IOSTAT on my Dell, but I am not 100% sure it's apples to apples compared to IOPS.
Can anyone shed some light on this for me?
Thanks !
-C
Sent from the Network Appliance - Toasters forum at Nabble.com.
On current distributions (SuSE SLES9 - which has the better Linux NFS - and RHEL), both UDP and TCP are supported and work just fine. There was some trouble with older 2.4 Linux kernels and TCP, but that's long since fixed. rsize and wsize up to 32768 are also supported on both UDP and TCP, at least on SuSE Linux. And jumbo frames work just fine with Linux NFS, as some of our benchmarks show (we have now gotten over 2,000 Megabytes - not Megabits - per second over NFS from a single Linux file system, read and write, using iozone). That's using a cluster file system to mount and export the same file system from multiple Linux nodes concurrently.
Could you please specify how many Linux nodes were involved, their hardware specs, and the disk configuration (SAN array, cache, disk size/speed, use of snapshots, active quotas, etc.) used to generate those benchmarks?
Any particular reason they haven't been submitted to SPEC?
Regards, Max