On Wed, 14 Jan 1998, Alexei Rodriguez wrote:
Hmm. We see this as a huge benefit. I like not having to juggle file system sizes.
Multiple RAID-4 sets mean multiple parity drives (and thus better survivability for multi-disk failures on one Netapp), faster reconstruction times, fault isolation (shelf failures, shelf module failure, etc. affect only their RAID set), and the ability to rip out a bunch of disks without affecting other data.
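To put a rough number on the multi-disk-failure point: with one big RAID-4 group, any two simultaneous disk failures lose data, but split across k groups, data is lost only if both failures land in the same group. A back-of-the-envelope sketch (the disk counts are made up for illustration, not any particular filer config):

```python
from math import comb

def p_data_loss(total_disks, groups):
    """Probability that 2 simultaneous disk failures hit the
    same RAID-4 group (which loses that group's data), assuming
    equal-size groups and uniformly random failure locations."""
    per_group = total_disks // groups
    same_group_pairs = groups * comb(per_group, 2)
    all_pairs = comb(total_disks, 2)
    return same_group_pairs / all_pairs

# 56 disks: one big group vs. four 14-disk groups
print(p_data_loss(56, 1))  # 1.0 -- any double failure is fatal
print(p_data_loss(56, 4))  # ~0.24 -- most double failures are survivable
```

Splitting 56 disks into four groups means roughly three out of four double failures fall in different groups and cost you nothing but two rebuilds.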
You mean like RAID 0, 0+1, etc.? I think you need to read some of the white papers :)
Mirroring would be nice. A double-drive failure is my biggest fear on a large RAID. Actually, that takes second place behind NVRAM failure. =8-{ ;-)
The "limitations" of the NetApps are (IMHO) what make them so good. Not having to tweak and monitor 500 different parameters is most nice.
You might have a couple more parameters to support, but I think the added flexibility and redundancy (for those who need it and can afford it) are worth it.
At first HA sounds great: the ability to have one machine completely die and have another pick up where the first left off, without any perceived interruption of service. The problem shows itself when you work out the cost of that level of availability. If you already have 99.5% availability, what are the $$$ associated with the additional 0.5%? Then the bean counters take over... :)
99.5% uptime means a 3.6-hour outage every month. That's not so hot. ;-) There are some applications where absolute 100% uptime is the goal. A Netapp still has a number of single points of failure that can cause a service outage: read cache RAM, CPU, NVRAM, shelf, motherboard, network interface, disk controller, etc. Granted, most of these faults cause only very short outages, but some companies want protection against every conceivable failure (or as close to it as technically feasible).
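The 3.6-hour figure falls straight out of the arithmetic (0.5% of a 720-hour month); a quick sketch of the "nines" table everyone ends up drawing for the bean counters:

```python
def monthly_downtime_hours(availability_pct, hours_per_month=30 * 24):
    """Expected downtime per month at a given availability percentage,
    assuming a 30-day month (720 hours)."""
    return (100 - availability_pct) / 100 * hours_per_month

for pct in (99.5, 99.9, 99.99, 99.999):
    print(f"{pct:>7}% -> {monthly_downtime_hours(pct):.3f} h/month down")
```

Each extra nine cuts the outage budget by a factor of ten, which is exactly why the cost curve the bean counters see gets so steep so fast.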