I had a bizarre issue setting up Oracle over NFS.
I had to set some unusual NFS options.
If I recall, the only way I could get Oracle to install and finish its
setup over NFS was to introduce the "llock" option on *any* NFS-related
mount points for Oracle.
I am sure I was missing something, since in the past it was never
difficult, but after many install/reinstall cycles it was the only thing
I could find that worked, and Oracle was not helpful at all.
After the DB was created, I could go back and remove llock on some of
the mounts, but not on the mounts where the actual DB was.
This was the same for Solaris 9 & 10.
Not sure if I was using, or even tried to use, DIRECT_IO.
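
Roughly, the mount entries ended up looking like the vfstab line below
(the filer name, export path, and rsize/wsize values are placeholders
from memory; the relevant piece is llock, and direct I/O on Solaris NFS
would be the separate forcedirectio option):

   # example /etc/vfstab line -- names and sizes are illustrative only
   filer1:/vol/oradata - /u02/oradata nfs - yes rw,bg,hard,intr,vers=3,proto=tcp,rsize=32768,wsize=32768,llock
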
--tmac
Blake Golliher wrote:
> Typically, from what I've seen, a 20-disk raidgroup is all you need; you
> just add more raidgroups as you grow. So two 20-disk raidgroups (I'd use
> raid_dp, because I think it's awesome) should be great. Aggrs vs.
> FlexVols is up to you. FlexVols offer some great features you may
> like, so a single 40-disk aggregate over two 20-disk raidgroups sounds
> good.
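>
> In ONTAP 7.x command terms that's roughly the following (the aggregate
> name, volume name, and size are only examples, worth checking against
> your ONTAP release):
>
>    aggr create aggr1 -t raid_dp -r 20 40
>    vol create oradata aggr1 200g
>    sysconfig -r
>
> That builds one 40-disk aggregate as two 20-disk raid_dp raidgroups,
> carves a FlexVol out of it, and sysconfig -r shows how the raidgroups
> actually got laid out.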
>
> I'd use more loops. ESH2s support 6 shelves per loop, and you can hot-add
> shelves onto existing loops, so you can start off with two shelves on one
> loop and one shelf on another loop. With things like rapid RAID recovery,
> raid_dp, and the cluster partner head, you should be in pretty good shape
> for most failures.
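>
> If I remember the commands right, sysconfig -a and fcadmin device_map
> will show which shelves landed on which loops, which is handy for
> sanity-checking the cabling after a hot-add:
>
>    sysconfig -a
>    fcadmin device_map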
>
> How are you going to back up this database? You might need to burn one
> of the FC ports for your tape library, or use the SCSI port for the tape
> drive. Or are you using something like DataGuard?
>
> I like jumbo frames. From what I've seen they mostly help with the CPU
> load on the DB servers; others may have different experience. We
> don't use them internally because of problems we've had in the past.
> YMMV.
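>
> On the filer side jumbo frames are just an MTU setting on the vif
> (the interface name below is an example), plus a matching line in
> /etc/rc so it persists across reboots -- and the switch ports and host
> NICs have to agree on the MTU:
>
>    ifconfig oravif mtusize 9000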
>
> Everyone does things differently, let's see how others on the list do this.
>
> -Blake
>
> On 7/20/06, Sto Rage(c) <netbacker(a)gmail.com> wrote:
>> Fellow Toasters,
>> We purchased a new FAS3050c with the intent of using it for Oracle
>> over NFS. This will be our first exposure to Oracle over NFS as we
>> are migrating from a SAN environment (EMC Symm 8430), so we need to
>> ensure we get performance as good as what we got from the SAN. The filers
>> would also be serving regular CIFS and NFS data.
>>
>> Here's the configuration, please suggest the best way to lay out the
>> aggregates in order to provide optimum performance for Oracle over
>> NFS.
>> Filer Configuration:
>> - FAS3050c with 6 shelves (3 shelves per head). The shelves are populated
>> with 300GB 10K FC drives. No additional add-on cards, just the 4 built-in
>> Gig-E and FC ports on each head.
>> - The 3 shelves are all on a single loop now; would this be a problem?
>> There are 2 FC ports free for future shelf expansion. Should we start
>> off with 2 loops now, 1 loop with 2 shelves and 1 with just 1 shelf?
>> - On each head, we have created a VIF using 2 NICs, dedicated to Oracle
>> NFS traffic, with jumbo frames enabled and hooked to a dedicated
>> Gigabit switch with no routing outside. One NIC is configured for
>> regular CIFS/NFS traffic and will be on our normal LAN. The 4th NIC is
>> unconfigured at present.
>> There will be 4 Solaris 10 boxes running Oracle, 2 production and 2
>> dev, each with 1 NIC dedicated to Oracle NFS and jumbo frames
>> enabled. We may add a 2nd NIC to the production hosts and trunk it for
>> Oracle NFS in the future, and at that time add the 3rd NIC on the
>> filers to the VIF. Does this network configuration look reasonable?
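>>
>> For reference, the filer side of that VIF is roughly the following (ONTAP
>> 7.x syntax; the vif name, interface names, and addresses are just
>> examples):
>>
>>    vif create multi oravif e0a e0b
>>    ifconfig oravif 192.168.10.10 netmask 255.255.255.0 mtusize 9000 up
>>
>> and if we do trunk a 2nd NIC on the Solaris 10 hosts later, it would be a
>> dladm aggregation along these lines (device names are examples; depending
>> on the NIC driver, jumbo frames may also need enabling in the driver's
>> .conf file):
>>
>>    dladm create-aggr -d e1000g1 -d e1000g2 1
>>    ifconfig aggr1 plumb 192.168.10.21 netmask 255.255.255.0 mtu 9000 up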
>> The Oracle application will have about 100 to 150 users running ERP,
>> with about 50 to 60 concurrent users. The database size is not large,
>> under 500GB (including dev and production).
>>
>> Now the question on creating the aggregates. We definitely want to get
>> the best performance possible and at the same time don't want to
>> sacrifice too much capacity by creating smaller RAID groups. So this
>> is what I have in mind:
>>
>> On each head, create 2 aggregates with 20 disks each, leaving 2 disks
>> as spares. The aggregates will have a raidsize of 20 disks (instead of
>> the default 16), thereby maximizing the disks available for data (about
>> 4 TB usable per aggregate). This means there is just 1 RAID group per
>> aggregate. Is this OK? What would be the impact when a disk fails and
>> a rebuild kicks in?
>> Is it better to create smaller raid groups with a raidsize of 10 to
>> minimize the impact of rebuilds?
>> Should we look at creating 1 single aggregate with 40 disks? I know the
>> more spindles the better for an aggregate, but how big a difference
>> will a 40-disk aggregate make over a 20-disk one? At what point does
>> the spindle count stop helping performance?
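>>
>> Spelled out as commands (ONTAP 7.x syntax; the names are placeholders,
>> and this assumes raid_dp), the per-head plan would be roughly:
>>
>>    aggr create aggr1 -t raid_dp -r 20 20
>>    aggr create aggr2 -t raid_dp -r 20 20
>>
>> versus the single-aggregate alternative:
>>
>>    aggr create aggr1 -t raid_dp -r 20 40
>>
>> with sysconfig -r and aggr status -s to confirm the raidgroup layout and
>> the spares afterwards. Would raid.reconstruct.perf_impact (if I have the
>> option name right) be the knob to limit the impact of a rebuild?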
>>
>> Hope this is enough information to start a meaningful discussion.
>> Looking forward to all of your valuable inputs. The NetApp white
>> papers don't have this kind of detailed configuration information.
>>
>> TIA
>> -G
>>
>
>