You could also look at ZFS. It is not distributed, but running ZFS underneath Ceph etc. might work. ZFS has the advantage of being able to use SSDs directly for both read caching (L2ARC) and write caching (a separate ZIL/SLOG device).
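As a rough illustration of that SSD caching, here is a minimal sketch of attaching SSDs to an existing ZFS pool; the pool name "tank" and the device paths are hypothetical, so adjust them to your layout.

```shell
# Sketch (hypothetical pool name and device paths): give an existing
# ZFS pool SSD-backed read and write caches.

# Read cache (L2ARC): frequently read blocks get served from the SSD.
zpool add tank cache /dev/disk/by-id/ssd-cache0

# Write cache (separate ZIL / SLOG): synchronous writes land on the SSD
# log device first. Mirroring it is prudent, since losing an unmirrored
# SLOG can cost you the most recent synchronous writes.
zpool add tank log mirror /dev/disk/by-id/ssd-log0 /dev/disk/by-id/ssd-log1

# Confirm the cache and log vdevs now appear in the pool layout.
zpool status tank
```

These commands require an existing pool and spare SSD devices, so they are configuration steps rather than something to run blindly.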
I have played with GlusterFS as well, but didn't like it: for best speed you need to run the native client, although performance over NFS on 1 Gbit wasn't horrible. Something about Gluster's replication strategy seems very simplistic.
On Wed, Aug 21, 2013 at 11:06 AM, Chris Picton chris@picton.nom.za wrote:
Hi all
I am currently using a FAS3210 successfully as 10 Gbit NFS storage for a VMware cluster I have in production. I am dedicating 2 shelves of 300 GB drives to VMware, and am getting very good response times.
In my test environment, I am using KVM sitting on top of a MooseFS cluster (6 machines, each with 5x 1 TB SATA drives, on a 1 Gbit network). Disk performance in the VMs is obviously not as good as 10 Gbit NFS, but it is definitely acceptable for my database/app servers/etc in test.
I am considering my long-term storage strategy (both capacity and speed), and I have two roads to go down in order to scale my storage to handle 100s/1000s of VMs. The first is to use NetApp, with large SATA drives for slow storage and high-speed SAS drives for fast storage, choosing the storage type according to purpose. The second is to use something like MooseFS (or, more likely, Ceph) to provide a scalable storage cluster built from "commodity" hardware, with SSD caching/journalling and SATA drives for the actual storage.
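For the Ceph option, the SSD journalling can be expressed at OSD-creation time; here is a hedged sketch using ceph-deploy's disk:journal syntax, where the hostname and device paths are purely hypothetical placeholders for your own nodes.

```shell
# Sketch (hypothetical host "storage1" and devices): create Ceph OSDs
# whose object data lives on SATA drives but whose journals live on
# SSD partitions, using the disk:journal form of ceph-deploy.

# Prepare two OSDs: data on /dev/sdb and /dev/sdc (SATA),
# journals on /dev/ssd1 and /dev/ssd2 (SSD partitions).
ceph-deploy osd prepare storage1:/dev/sdb:/dev/ssd1 \
                        storage1:/dev/sdc:/dev/ssd2

# Activate the prepared OSDs so they join the cluster.
ceph-deploy osd activate storage1:/dev/sdb1:/dev/ssd1 \
                         storage1:/dev/sdc1:/dev/ssd2
```

The idea is that the journal absorbs the synchronous write burst on fast media while the bulk capacity stays on cheap SATA, which matches the caching/journalling split described above.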
Does anyone have experience with both traditional SAN storage and Ceph/Gluster/MooseFS, and is willing to share any thoughts/ideas they may have?
Regards
Chris
_______________________________________________
Toasters mailing list
Toasters@teaparty.net
http://www.teaparty.net/mailman/listinfo/toasters