I was going through old (very old) mail, and saw this.
Other people said "no performance tuning" and "easy to manage data".
Let me be more specific.
In many situations people seem to have a collection of systems, each with its own database, each with its own disks. This is a pain to manage. Suppose one server has 2 disks that are full, and another server has 3 disks that are half full. Wouldn't it be easier if the storage were managed as a single file system that all of the servers could access? You can put the databases into separate subdirectories with separate tree quotas, so they can't exceed the space you want them to, but it's very easy to reallocate space. And if you add an extra disk, you can easily allocate it across multiple systems.
There can be a performance benefit as well. Suppose again that you've got several database servers, each with just a few disks' worth of data. The chances are that the databases are not evenly balanced. If you put them all on a filer, then WAFL will automatically load balance across the full set of disks.
And finally, snapshots are a big win for backup. Instead of taking the database down for a long time, you take it down for a few seconds, take a snapshot to do backup from, and bring the database back up. Even for databases with "hot backup", you can reduce the amount of time spent in "hot backup" mode, which is good since that often has a performance penalty. Put the DB in hot backup, take a snapshot, turn hot backup off.
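The hot-backup sequence above can be sketched as a small helper. This is a minimal sketch, not anyone's actual tooling: the three steps are passed in as callables because the real commands vary by database and filer (e.g. an Oracle "begin backup" statement and a filer snapshot command); the names here are hypothetical stand-ins.

```python
import time

def backup_via_snapshot(begin_hot_backup, take_snapshot, end_hot_backup):
    """Minimize time spent in hot-backup mode: enter it, take a
    near-instant snapshot, then leave it again -- even if the snapshot
    step fails. The three arguments are caller-supplied callables
    (hypothetical stand-ins for the real DB and filer commands)."""
    begin_hot_backup()
    start = time.time()
    try:
        snapshot_name = take_snapshot()
    finally:
        end_hot_backup()  # never leave the DB stuck in hot-backup mode
    return snapshot_name, time.time() - start
```

The actual backup to tape (or wherever) is then taken from the snapshot at leisure, while the database runs normally; only the snapshot itself happens inside the hot-backup window.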
Don't get me wrong. I won't claim that using NFS to filers is necessarily the right solution for every single DB application, but many customers have found that it's simpler than what they were doing, and often faster as well.
On Feb 22, Mike Kazar wrote:
> Are you running several instances of the database on several hosts, all sharing the same database over NFS?
I don't think OPS (Oracle Parallel Server) is certified yet, although we'd like to get that done.
Today it's separate servers with separate databases that are consolidating their data onto a single filer.
> I'm surprised that works reasonably with a log file being hammered on by several different machines (and I don't really understand how it could work at all with several different log files). Also, I would have thought that NFS's loose cache consistency semantics would have prevented sharing a database from working at all, unless use of file locking disables caching appropriately.
As long as you only access the DB files from a single machine, you don't have any problems at all with NFS's caching semantics.
To make something like OPS work, you would need to deal with the caching semantics, but there are ways to turn off caching. I think that locking, or at least some flavors of locking, does the right thing. I can't remember the details at the moment.
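The locking pattern in question looks roughly like the sketch below. The cache-invalidation part is an assumption about NFS client behavior (on many clients, acquiring a POSIX fcntl lock flushes or invalidates the locally cached copy of the file, so the following read sees the server's current data); the code itself just shows the lock-read-unlock pattern, demonstrated here on an ordinary local file.

```python
import fcntl, os, tempfile

def locked_read(path):
    """Open a file, take a shared POSIX fcntl lock, read, then unlock.
    On typical NFS clients, acquiring the lock invalidates the cached
    copy, so the read reflects the server's current contents -- that
    behavior is client-specific and not demonstrated by this code."""
    with open(path, "rb") as f:
        fcntl.lockf(f, fcntl.LOCK_SH)      # blocks until the lock is granted
        try:
            return f.read()
        finally:
            fcntl.lockf(f, fcntl.LOCK_UN)  # release before closing

# Demonstrate the pattern on a throwaway local file.
fd, path = tempfile.mkstemp()
os.write(fd, b"log record 42")
os.close(fd)
data = locked_read(path)
os.unlink(path)
```

A shared lock suffices for reads; a writer would take `LOCK_EX` on a file descriptor opened for writing.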
> Or are you just running one database host, and using the NetApp to manage the database host's disk space? If so, what are the advantages you get from using the NetApp for disk space management instead of just using a local disk?
This is really what I addressed at the beginning.
Dave