My apologies, I forgot to include a subject for this note
Hi - I have all our Oracle9i databases defined as individual qtrees spread over two filers. Some time ago the decision was made to split each database into 3 separate components (qtrees): one for the data (tables) and the other two for duplexed logs, one on each filer head. A Sun Solaris 8 dataserver that runs 6 Oracle databases therefore has 18 NFS mounts.
On the recommendation of NetApp the qtrees are mounted with these options: rw,hard,intr,bg,proto=tcp,llock
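For reference, on Solaris 8 one of those qtree mounts would look something like this in /etc/vfstab (the filer, volume, and mount-point names here are made up for illustration):

    filer1:/vol/oravol/db1_data  -  /ora/db1/data  nfs  -  yes  rw,hard,intr,bg,proto=tcp,llock

(As I understand it, llock tells the Solaris NFS client to handle file locks locally instead of going through the NFS lock manager, which is why NetApp has recommended it for Oracle datafiles over NFS.)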
The DBAs now would like to have this rearranged so that there is a single qtree per dataserver host (think of this as one large blob of space) with ALL of the databases that this server will run as a sub-directory of this qtree.
I'm a bit worried about the resultant NFS performance with everything going through this single mount point. Can someone comment on the issues of multiple NFS mounts vs. a single mount? What about locking?
Thanks, George
> I'm a bit worried about the resultant NFS performance with everything
> going through this single mount point. Can someone comment on the issues
> of multiple NFS mounts vs. a single mount? What about locking?
I don't think that single vs. multiple mount points per se makes a performance difference. However, if it makes sense to use different NFS options for different applications, you can still do that: mount the same NetApp volume on several mount points with different sets of NFS options, and on the NFS client have each application use the mount point whose options it needs.
mount -o rw,hard,opt2,opt3 toaster:/vol/big /mnt1
mount -o rw,hard,opt4,opt5 toaster:/vol/big /mnt2
...
Steve Losen scl@virginia.edu phone: 434-924-0640
University of Virginia ITC Unix Support
On Tue, Mar 15, 2005 at 03:18:10PM -0500, Steve Losen wrote:
> I don't think that single vs. multiple mount points per se makes a
> performance difference.
This depends on the operating system. Solaris (at least up to Sol8) will assign kernel threads and resource pools per mount point, so a large number of mounts is often faster.
On the other hand, mounting the _same data set_ over different mount points is a loss, because you end up caching the same data several times. You need to split your data set across several mount points to avoid this effect.
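So in George's case, that would argue for something in between the two extremes: one mount per database rather than one giant mount, even if all the databases live in the same volume on the filer. A sketch (paths and filer name are hypothetical), reusing the same options throughout:

    mount -o rw,hard,intr,bg,proto=tcp,llock toaster:/vol/big/db1 /ora/db1
    mount -o rw,hard,intr,bg,proto=tcp,llock toaster:/vol/big/db2 /ora/db2
    ...

Since each mount covers a disjoint subtree, nothing gets cached twice, and on Solaris each mount still gets its own per-mount client resources, while the storage admins only have one volume to manage on the filer.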