Hi all
I am currently using a FAS3210 as 10G NFS storage for a VMware cluster I have in production. I am dedicating two shelves of 300GB drives to VMware, and am getting very good response times.
In my test environment, I am using KVM sitting on top of a MooseFS cluster (6 machines with 5x 1TB SATA drives each, on a 1Gb network). Disk performance in the VMs is obviously not as good as 10G NFS, but it is definitely acceptable for my database/app servers/etc. in test.
I am considering my long-term storage strategy (both capacity and speed), and I have two roads I could go down to scale my storage to handle 100s/1000s of VMs. The first is to use NetApp with large SATA drives for slow storage and high-speed SAS drives for fast storage, choosing the storage type according to purpose. Alternatively, I could use something like MooseFS (or more likely, Ceph) to provide a scalable storage cluster on "commodity" hardware, with SSD caching/journalling and SATA drives for actual storage.
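As a rough sketch of the capacity math for the commodity-cluster option (the replication factor and fill ceiling here are illustrative assumptions, based on Ceph's default 3x replication and leaving headroom for rebalancing):

```python
# Rough usable-capacity estimate for a replicated commodity cluster.
# Assumptions (illustrative only): 3x replication, as in Ceph's default,
# and keeping the cluster below ~80% full for rebalancing headroom.

def usable_tb(nodes, drives_per_node, drive_tb, replicas=3, fill_factor=0.8):
    raw_tb = nodes * drives_per_node * drive_tb
    return raw_tb / replicas * fill_factor

# The current test cluster: 6 machines, 5 x 1TB SATA drives each
print(usable_tb(6, 5, 1.0))  # 30TB raw -> 8.0TB usable
```

Scaling to hundreds of VMs is then mostly a matter of adding nodes, but note that replication triples the raw disk you have to buy.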
Does anyone have experience with both traditional SAN storage and Ceph/Gluster/MooseFS, and is willing to share any thoughts/ideas?
Regards
Chris
You could also look at ZFS, although it is not distributed; perhaps ZFS underneath Ceph etc. might work. ZFS has the advantage of being able to use SSDs directly for both read and write caching.
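For reference, adding SSDs to a ZFS pool for read and write caching is just two commands (pool and device names here are placeholders):

```shell
# Hypothetical pool "tank" and device names; adjust for your system.
# SSD as read cache (L2ARC):
zpool add tank cache /dev/disk/by-id/ssd-read0
# Mirrored SSDs as the synchronous write log (SLOG). Mirror it, since
# losing an unmirrored log device can lose in-flight sync writes:
zpool add tank log mirror /dev/disk/by-id/ssd-log0 /dev/disk/by-id/ssd-log1
```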
I have also played with GlusterFS, but didn't like it - for best speed you need to run a client, although speed over 1Gbit NFS wasn't horrible. Gluster's replication strategy also seems very simplistic.
On Wed, Aug 21, 2013 at 11:06 AM, Chris Picton <chris@picton.nom.za> wrote:
_______________________________________________
Toasters mailing list
Toasters@teaparty.net
http://www.teaparty.net/mailman/listinfo/toasters
I was considering ZFS under Ceph, for its dedup, compression, and caching, if that is the direction I take. The ZFS/Ceph integration is not as tight as Btrfs/Ceph, though.
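Enabling the features mentioned above is a one-liner per property (pool name "tank" is a placeholder); the main caveat is that dedup is memory-hungry:

```shell
# Hypothetical pool "tank"; adjust for your system.
# Note: the dedup table wants to live in RAM -- a commonly cited rule
# of thumb is on the order of 5GB of RAM per TB of deduped data, so
# size memory accordingly before turning it on.
zfs set compression=lz4 tank
zfs set dedup=on tank
zfs get compression,dedup tank
```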
Another option is a Tintri-type storage appliance (which just seems very much like a ZFS server with SSD and SATA drives).
Chris
On 2013/08/21 7:36 PM, Patrick Giagnocavo wrote:
On Aug 21, 2013, at 1:39 PM, Chris Picton <chris@picton.nom.za> wrote:
Tintri VMStore is nothing like ZFS with SSD and SATA drives.
You should contact them, they do demo units and performance is fantastic!
But they only do NFS right now, so if you need other protocols then this might not be the right fit.
Tintri gives good performance where you need it, due to the large deduplicated SSD tier. But as an appliance its capacity is limited (13TB only), and the cost is pretty high! NetApp Flash Pool is a cheaper option, and scalable. With EMC you could do three-level tiering if that is desired.
sk
You may also want to consider using flash pools with your large SATA drives.
-----Original Message-----
From: toasters-bounces@teaparty.net [mailto:toasters-bounces@teaparty.net] On Behalf Of Chris Picton
Sent: Wednesday, August 21, 2013 1:06 PM
To: toasters@teaparty.net
Subject: SAN vs Software defined storage
Unfortunately, I have 3210s, which don't support Flash Pool. I do have Flash Cache, though, which is providing a nice speed increase.
On 2013/08/21 7:47 PM, Jordan Slingerland wrote: