I'd go with something off of the Nexenta HCL for best results:
http://www.nexenta.com/corp/support/support-overview/hardware-supported-list
Though if you end up using the OpenIndiana / Illumos version, you may want to ping their mailing lists for suggestions. In general Nexenta should be feeding changes back to Illumos, but for all I know it may not quite match up perfectly.
We use Dell hardware, and Nexenta has pretty thorough guides on integration with their gear. For the most part it has worked well, but I would also like to have tried one of the more turnkey options, like those from DataON.
Our setup is such that vdevs (RAID groups) are spread across all of the JBODs in the system, with no vdev taking more than one disk from any one JBOD. With triple-parity RAID, this means we can lose up to three JBODs on the system and still be OK. Keep in mind, this makes expansion challenging. For a lower-tier system I wouldn't go this route if I felt I'd need to expand my zpool (rather than start a new one) in the near future.
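For anyone unfamiliar with the layout described above, here's a rough sketch. All device names and the JBOD count are made up for illustration; the point is that each raidz3 vdev takes exactly one disk from each enclosure, so a failed JBOD costs each vdev only one disk:

```shell
# Hypothetical eight-JBOD layout (controllers c1..c8, names illustrative).
# Each raidz3 vdev draws one disk per JBOD, so losing any three whole
# JBODs removes only three disks from each vdev -- within raidz3's
# three-disk fault tolerance.
zpool create tank \
  raidz3 c1t0d0 c2t0d0 c3t0d0 c4t0d0 c5t0d0 c6t0d0 c7t0d0 c8t0d0 \
  raidz3 c1t1d0 c2t1d0 c3t1d0 c4t1d0 c5t1d0 c6t1d0 c7t1d0 c8t1d0
```

The expansion pain follows directly: to grow the pool without breaking the one-disk-per-JBOD rule, you add another full-width vdev (eight more disks, one per enclosure) rather than a disk or two at a time.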
Ray
On Sat, Jan 11, 2014 at 05:20:28PM -0800, Fletcher Cocquyt wrote:
Great discussion - many folks have mentioned ZFS. If I want to replicate the feature set (SnapMirror, dedup, etc.), what hardware is the most robust for this?
thanks
On Jan 10, 2014, at 8:42 PM, Isaiah Weiner <zoratu@zoratu.com> wrote:
The way ZFS finds places to put data is the same as for metadata. If you have a long-running history of volume creation, snapshots, clones, &c. in any great quantity (think self-service wrappers for lots of folks), operations like 'zfs list' take progressively longer over time. The only fix is to send the volume data elsewhere, recreate the pool, and recv the data back into it.

There have also been bugs related to soft errors on particular types of devices over the last handful of years that would cause an inordinate number of false declarations of failed disks. Many of those bugs have been fixed, but the fundamental flaw of "fragmented" metadata across the pool does not appear to have been fixed, in spite of the corporate sponsorship of folks like Joyent and Delphix.

We struggled for years (2008 to 2012) with ZFS trying to work around this problem, but eventually gave up and dumped it. For secondary storage we managed to glue together Coraid for the disk fabric, rack-mount servers with Veritas on them for NFS heads, and an Avere cluster in front of that for working-dataset performance. Completely unsupported by Symantec, but it works fine as long as you don't need I/O fencing (SCSI-3 PR doesn't work with Coraid, and the non-PR method doesn't appear to work either; we can't get Symantec to look at it because Coraid isn't on the HCL yet).

The 1PB of Coraid ran me $0.25/GB, and after the rack-mount servers and the Avere cluster, the TCO was $0.52/GB. Not bad for a rack and 7 kW. Not as reliable as Infinidat's seven-nines figures, but pretty okay for a solution that wasn't engineered.

On Fri, Jan 10, 2014 at 3:01 PM, Patrick Giagnocavo <xemacs5@gmail.com> wrote:
I have found ZFS works well; I like the ability to run a "zpool scrub" and know for sure that the filesystem is intact.
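The "send elsewhere, recreate, recv back" workaround mentioned above looks roughly like the following. Pool, host, and snapshot names are illustrative, not anything from the thread:

```shell
# Sketch of rebuilding a pool to shed accumulated metadata fragmentation.
# 'tank', 'backuphost', and 'backup/tank' are hypothetical names.

# 1. Take a recursive snapshot and replicate the whole pool
#    (datasets, snapshots, and properties) to another machine.
zfs snapshot -r tank@migrate
zfs send -R tank@migrate | ssh backuphost zfs recv -Fdu backup/tank

# 2. Destroy and recreate the pool from scratch (vdev layout as desired).
zpool destroy tank
zpool create tank raidz2 c1t0d0 c2t0d0 c3t0d0 c4t0d0

# 3. Stream everything back into the fresh pool.
ssh backuphost zfs send -R backup/tank@migrate | zfs recv -Fdu tank
```

The obvious catch, and presumably why this was untenable at scale, is the downtime and the need for a second full copy of the pool's worth of storage for the duration of the round trip.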
On Fri, Jan 10, 2014 at 2:55 PM, Fletcher Cocquyt <fcocquyt@stanford.edu> wrote:
NetApp is great and always will be first-tier storage, but for many of our new use cases (SMB customers, IOPS over capacity, etc.) we are looking for lower-cost solutions to complement (and work well with, support-wise) our huge ONTAP base. What other solutions have others found complement their NetApp ecosystem? (NB: we are running multiple 8.1.2 (7-mode) clusters and value the robust SnapMirror and vFiler migration feature set.) Thanks