On Tue, Mar 28, 2006 at 12:51:46PM -0500, John Stoffel wrote:
Adam> Not a netapp ;) but I am looking for recommendations for a
Adam> directly attached raid shelf that uses scsi disks and has a
Adam> capacity of around 1 T.
Why SCSI? Raw 1TB or usable 1TB? SCSI is much more expensive per GB these days (along with FC disks) when compared to SATA/IDE drives.
Speed and access time are not ATA's forte, and for this database storage I (and my developers) want to stick with drives designed for good access time. Capacity is not important, as we are currently using less than 5 gigs.
Adam> We don't need much space at all but I want enough disks to make
Adam> raid5 make sense. 7 or fewer 72gb disks would be ideal, but have
Adam> enough slots for future expansion. I am looking for something
Adam> that is COMPLETELY software independent, has its own network
Adam> port for sending notices and for configuration (and/or config
Adam> through serial). I don't want to fool around with software
Adam> needed on the host system to monitor and/or manage it at all.
Why are you so worried about software dependence? If you're talking direct attached storage, getting a JBOD system and running software RAID makes a lot of sense.
I actually thought of doing something like that last night and I have enough of a plan to work on now.
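Roughly the plan I'm sketching, assuming a Linux 2.6 host with mdadm and a JBOD shelf that shows up as sdb through sdh (device names are made up for the example, and this is untested):

    # 6-disk RAID5 plus one hot spare, all managed on the host
    mdadm --create /dev/md0 --level=5 --raid-devices=6 \
          --spare-devices=1 /dev/sd[b-h]
    mkfs -t ext3 /dev/md0
    # monitoring with no vendor software at all
    mdadm --detail /dev/md0
    cat /proc/mdstat

The same idea carries over to Solaris or FreeBSD with their native volume managers, which is the whole point of keeping the shelf dumb.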
Being software independent means I won't care if I want to use Solaris 11, Linux 2.8, or FreeBSD 7.1 someday when the vendor may not keep up with support (or drop it completely, leaving me high and dry). The software I have seen supplied with disk arrays tends to be excessively GUI-driven, excessively complex, or unwilling to support enough of the operating systems I want to stay open to.
In a past job, we had Sun HW RAID boxes and plain A5x00 boxes using VxVM, and I never had problems with the VxVM that I couldn't fix. We junked the HW RAID boxes as soon as we could. You have zero control over whatever the vendor decides to do to you.
It goes both ways; you can have trouble with both firmware and software. Speaking of the A5x00, we have had specific hardware trouble with them (disks flaking out, disks dying, disks seizing the array, triple disk failures). We used to use VxVM, but our version only supported up to Solaris 8 and we wanted to upgrade. That left us either paying for an upgrade to a product that would remain unlike our other software RAIDs, or switching volume managers. I switched volume managers. Then we retired the arrays after the major failures described above and expanded our NetApp :)
I've been fairly comfortable with the Solaris Disk Mangler, and I would trust my data to it if I wanted to add a JBOD array to a Solaris server, but I don't see myself needing to do that anytime soon. I think both VxVM and SDM (or DiskSuite, or whatever Sun wants to call it) are manageable products if you can put some dedicated experience and thought into them.
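For what it's worth, the DiskSuite/SVM side of a JBOD is only a few commands. Something like this, from memory and with made-up slice names, so double-check against the docs before trusting it:

    # state database replicas first, then a RAID5 metadevice
    metadb -a -f c1t0d0s7 c1t1d0s7 c1t2d0s7
    metainit d10 -r c1t3d0s0 c1t4d0s0 c1t5d0s0 c1t6d0s0
    newfs /dev/md/rdsk/d10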
Adam> Other than being SATA, various Nexsan products look appropriate,
Adam> but I'm looking for a company that has appropriate products
Adam> centering on reliability, price, and decent speed. We have not
Adam> had specific component problems in the past, so I'm not sure
Adam> dual controllers, cables etc would help, simply isolated or
Adam> chronic quirkiness with medium to disastrous effects. I want
Adam> something that acts solid and predictable from the moment you
Adam> first plug it in.
Adam> I'm not sure I want to dive into iSCSI yet, or if it would work
Adam> well in our environment. My concerns with that are client
Adam> support (Solaris 9, Linux 2.6, FreeBSD 6.x) and what would
Adam> happen in the case of network drops. We don't have a separate
Adam> storage network now.
So are you looking for a single device which can talk to multiple clients? Or are you trying to set up a master device which the other hosts will talk to?
I just want one array per DB server, directly attached to minimize interruption and separate so there's no single point of complete failure.
If you just want 1TB of NFS/CIFS storage, go look at the Buffalo Logic device. It's a standalone NAS box: RAID5, hot spare, 4 x XXXGB disks. Currently they offer 500GB disks. Put three into a RAID5 with one hot spare and you should be good to go.
I have a Netapp now, and I would consider adding a reliable network path to it for this purpose if it weren't for dire warnings about running Postgres over NFS. I am lightly looking into FCP, but I imagine the licensing would blow away my budget.
But you haven't really said what your constraints are. Money? Performance? Ability to configure devices? Backups? Interoperability?
Money is somewhat of a constraint, as this is a surprise purchasing need; performance is a good thing for DBs; interoperability and flexibility are always high on my chart; and reliability is always important. I try to buy with the best intentions, but I am not going to intentionally buy a 4 x 2GB SCSI disk array that only works in Solaris 7 just because lots of people say it is reliable. I am looking for modern and reliable, which is why I asked the list.
I've also used the Fibre-attached ATAboy (Nexsan?) hooked to Suns running RAID5 for around 2.5TB of storage. Came to $15k back then; a lot cheaper now.
These look good, I am looking at Nexsan for a different purpose, but I wish they offered units with SCSI drives.
Oh, a big problem with RAID5 is the rebuild time. You lose a disk and it can take up to 24+ hours to rebuild, so make sure you get a UPS for your setup that can hold both the RAID box and the server which manages the filesystem(s) on it.
We do have a UPS, but perhaps we've never used raid5 in the same conditions as you, since I can't recall ever seeing a raid5 rebuild on SCSI/FC disks take more than 2 hours. I don't even want to imagine how long a raid5 rebuild would take with 500GB disks involved :)
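Back of the envelope, with purely hypothetical numbers: a 500GB disk rebuilding at a sustained 50MB/s is 500,000MB / 50MB/s = 10,000 seconds, call it three hours best case with the array idle. Throttle the rebuild so a live database stays responsive and a day-long rebuild stops sounding far-fetched.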
RAID6 is another promising technology (RAID-DP in NetApp terms) where you have dual parity disks instead of a single parity disk. More redundancy, but not as costly in capacity as RAID 1+0 mirrors.
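Rough numbers to show the trade-off (example figures only): with eight 500GB disks, RAID6 leaves 6 x 500GB = 3TB usable and survives any two disk failures, while mirroring the same shelf leaves only 2TB.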
John