I've been asked to look at whether NetApp is a potential candidate for hosting some Oracle databases, but one thing I don't see is how to get sufficient sustained data rates between the Oracle server and the filer.
People tend to assume that the pipe between Oracle and disks is the bottleneck, but often that's not the case.
Databases do lots of random reads and writes in relatively small chunks -- say 4 KB. The average seek-and-read time on a modern disk is still roughly 10 milliseconds (maybe a bit faster, but let's keep the math easy). That means a single disk saturates at about one hundred 4 KB transfers per second, or 400 KB/sec. Even 20 saturated disks would generate only 8 MB/sec, which would run a single 100bT wire at about 80% of its usable bandwidth. And most database environments simply don't drive this sort of load.
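To make the arithmetic concrete, here's the same back-of-the-envelope calculation as a little Python sketch. The disk and block-size numbers are the round illustrative figures above, and the ~10 MB/sec usable payload for a 100bT link is an assumption, not a measurement:

    # Back-of-the-envelope model of the argument above; all numbers are
    # illustrative assumptions, not measurements.
    SEEK_AND_READ_MS = 10        # average seek + read per random I/O
    TRANSFER_KB = 4              # typical database block size
    DISKS = 20                   # a generously wide stripe set
    LINK_USABLE_MB_PER_SEC = 10  # assumed usable payload of a 100bT wire

    iops_per_disk = 1000 / SEEK_AND_READ_MS    # 100 random I/Os per second
    kb_per_disk = iops_per_disk * TRANSFER_KB  # 400 KB/sec per disk
    mb_total = DISKS * kb_per_disk / 1000      # 8 MB/sec across 20 disks

    print(f"per disk: {iops_per_disk:.0f} IOPS = {kb_per_disk:.0f} KB/sec")
    print(f"{DISKS} disks: {mb_total:.1f} MB/sec, "
          f"~{100 * mb_total / LINK_USABLE_MB_PER_SEC:.0f}% of the wire")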
Now, I recognize that this is a very back-of-the-envelope-ish analysis, and I certainly don't mean to imply that an Ultra-SCSI-attached RAID array will never outperform a network-attached filer.
On the other hand, if you aren't doing a VERY careful tuning job, spreading your data over many, many parallel spindles, and using a high-end RAID subsystem with lots of NV-RAM to accelerate writes, then you may be pleasantly surprised by a filer's performance.
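For the write side, here's a hypothetical toy model of why NV-RAM matters: a synchronous database write only waits for the acknowledgment, so staging the write in NV-RAM hides the disk's seek latency. Both latency figures below are made-up round numbers for illustration, not specs for any particular product:

    # Toy model: effective latency of one synchronous 4 KB write.
    # Both latency figures are illustrative assumptions, not specs.
    DISK_WRITE_MS = 10   # random write committed straight to a spindle
    NVRAM_ACK_MS = 0.1   # write staged in NV-RAM and acknowledged

    def sync_write_latency_ms(nvram_backed: bool) -> float:
        # The database waits on the ack, not the eventual disk write.
        return NVRAM_ACK_MS if nvram_backed else DISK_WRITE_MS

    for nvram_backed in (False, True):
        ms = sync_write_latency_ms(nvram_backed)
        print(f"NV-RAM={nvram_backed}: {ms:.1f} ms/write, "
              f"~{1000 / ms:.0f} sync writes/sec")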
Many of our customers have been stunned to see performance improve when they moved from local disk to network-attached storage, because it didn't seem possible given the relative connection bandwidths. It only makes sense once you probe more deeply into where the real bottlenecks are.
Dave