On 10 Jan 00, at 21:04, Thomas Leavitt wrote:
Dear all... we are researching the purchase of a Network Appliance F740, one of whose primary uses would be serving a commercial RDBMS (either Sybase or Oracle). Until I looked into this last week, I was unaware this was possible and officially supported... I'm very enthusiastic about the possibility - so far, feedback on this has been generally positive. I'm looking for as many people as possible to help validate...
Thomas
Hi Thomas,
Last year I purchased 2 F740's for 2 Oracle database servers. While I haven't put them into production yet, I have databases running on them and have done many, many, many imp runs of the databases I need to transfer. Here are my thoughts on this subject:
1) Reliable - so far. They have been rock-solid reliable (not that I've had them very long).
2) Easy to configure. I didn't go to any classes, just read the manual. Two subjects took some extra work to figure out: qtrees and the way snapshot space is allocated.
3) Remotely supportable. I needed disk subsystems that I could put in remote locations and support from a central site. I was greatly disappointed with the remote support capability of every RAID system I looked at. The NetApp's remote support capabilities, while not perfect, are top notch.
4) Fast writes. Writes to the NVRAM are very fast as long as your write rate doesn't overflow the NVRAM. The NVRAM is divided into 2 halves. While one half is accepting writes, the other half is being flushed to disk. After the half receiving writes is full, the sides switch: the now-full side starts writing to disk and the other side accepts the writes. The problem is if your write load fills up one side before the other side is done being written to disk. At that point you can't get any more write performance out of it. Did I explain this well? It's kind of like Oracle log switches in archive log mode: if Oracle wraps around the logs but the next log hasn't been archived yet, Oracle comes to a halt until the archiving is done.
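To make the double-buffering concrete, here's a toy simulation - the half size and the write/flush rates are completely made up for illustration, not NetApp specs:

```python
# Toy model of the split-NVRAM write staging described above.
# All sizes and rates are invented assumptions, purely illustrative.

def simulate(write_rate_mb_s, flush_rate_mb_s, half_size_mb, seconds):
    """Count seconds where incoming writes stall because the active
    NVRAM half filled up before the other half finished flushing."""
    active_fill = 0.0     # MB in the half currently accepting writes
    flushing_left = 0.0   # MB still to flush from the other half
    stalled_seconds = 0
    for _ in range(seconds):
        # the flushing half drains to disk
        flushing_left = max(0.0, flushing_left - flush_rate_mb_s)
        # new writes land in the active half
        active_fill += write_rate_mb_s
        if active_fill >= half_size_mb:
            if flushing_left > 0:
                stalled_seconds += 1       # can't switch: other half busy
                active_fill = half_size_mb
            else:
                flushing_left = active_fill  # switch halves
                active_fill = 0.0
    return stalled_seconds

# Writing slower than the flush rate never stalls...
assert simulate(10, 20, 16, 60) == 0
# ...but sustained writes above the flush rate eventually do.
assert simulate(30, 20, 16, 60) > 0
```

Same idea as the Oracle log-switch analogy: the switch can only happen once the other side has drained.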
5) Performance I'm getting? For sequential writes, like when I create an Oracle tablespace, I get between 15-20 MB/s. Random writes are around 1.5-2 ms according to the disk benchmark I used for testing, but I'm not sure of this number. I use AIX systems and the disk benchmark (stkio from STK) works, but IBM does some strange things with NFS that make benchmarking tough - at least it appears that way. From what I can find out, NFS writes aren't supposed to be buffered on the client, but it sure looks like AIX is doing something like buffering on the NFS client. I'm still looking into this. The sequential write rates are not as good as some RAID systems. I had an HDS RAID system here that did sequential writes at 35-40 MB/s, but it took the HDS people a full month to get it to the place where I could test with it - fast, but completely unsupportable! This is an area where NetApp needs to do more work in order to sell NetApps as db disk subsystems. They need to start running/publishing standard disk benchmarks - treating the NetApp as a disk subsystem instead of as an NFS server. See the next comment below for more...
6) Be careful in interpreting NetApp's benchmarks. They are NFS benchmarks, NOT disk I/O benchmarks. Check carefully what's in the NFS benchmarks - it's some type of simulated load of a development environment with some percentage mix of reads, writes, catalog access, etc. This in NO WAY describes a db load. I'm not faulting NetApp here; they're very clear on their web site about what the benchmarks are and what they represent. I just found it difficult translating NFS ops/s into MB/s or disk I/Os per second. Again, they need to start running/publishing "disk" benchmarks to better sell a NetApp as a db disk subsystem. I'd like to see them publish the kind of info Performance Computing puts in their mag when they test a RAID system.
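For what it's worth, the back-of-envelope translation I ended up doing by hand looks roughly like this - the op mix and transfer size below are my own invented assumptions, not NetApp's published figures:

```python
# Rough conversion of an NFS ops/s number into approximate MB/s,
# given an assumed operation mix. The mix and 8 KB transfer size
# are hypothetical; substitute the real benchmark's description.

def nfs_ops_to_mb_per_s(ops_per_s, mix, transfer_kb=8):
    """mix maps op name -> (fraction of ops, carries_file_data)."""
    data_fraction = sum(frac for frac, carries in mix.values() if carries)
    data_ops = ops_per_s * data_fraction   # only reads/writes move data
    return data_ops * transfer_kb / 1024.0  # KB/s -> MB/s

mix = {
    "read":    (0.30, True),
    "write":   (0.15, True),
    "lookup":  (0.35, False),   # metadata ops move no file data
    "getattr": (0.20, False),
}

# e.g. 8000 NFS ops/s with this mix: 8000 * 0.45 * 8 KB = 28.125 MB/s
print(nfs_ops_to_mb_per_s(8000, mix))
```

It's crude, but it shows why a big ops/s number on a metadata-heavy mix says very little about db throughput.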
7) There is extra load on your db server due to NFS - this can't be avoided. In general, though, other than when I'm streaming 20 MB/s, the NFS load isn't too bad. At 20 MB/s streaming, my RS/6K-F50 (4 processors, 1 GB) is running 25-30% in the NFS client (biods). During more normal processing it's barely noticeable.
8) The one problem I do have is that during a high sequential write load, like creating an Oracle tablespace, the high sequential write rate locks out other processes from doing I/O to the NetApp. For example, during a create of a 2 GB db file, if I (in another window) try to do anything that does I/O, I can wait 5/10/15 seconds for the command to come back. I'm not sure why this is, but I think this is a problem in AIX, NOT the NetApp - because if, during the massive sequential writes, I try to access the NetApp from another computer, I get great response. AIX must be queueing NFS I/Os or something - which doesn't make sense!
9) I love the way you tune your Oracle db file layout for the NetApp - you just don't worry about it. The NetApp automatically stripes things. This makes life really easy.
10) The NetApp makes normal disk monitoring in Unix more difficult. In vmstat/iostat you get wait-for-I/O when processes are waiting on blocked I/O. At least on AIX, NFS I/O doesn't show up as wait-for-I/O in vmstat or iostat. Sometimes I see little processor usage and little I/O to the NetApp, and I find myself scratching my head wondering what's happening. When I run the same thing on a "normal" disk system I see that it's all wait-for-I/O - in other words, a very heavy random read/write environment.
11) I'm impressed with NetApp as a company. They let employees answer on the mailing list - I know of no other company that does this. These people have even admitted to problems and mistakes (this is a strength, not a weakness)! The salesman/tech rep I worked with were good. The salesman was a salesman, but not pushy like some I've met. The tech rep was top notch. They seem to let their product do the selling. I've sat in too many sales meetings (IBM, EMC, HDS, etc.) where they sell you their company and give hype about their products. You know the kind of meeting... you walk out and you really don't know any more about their products than when you went in - lots of words, little real info! I felt that we got straight answers about their products. I think this is also seen on their web site: you can download very detailed info on performance, design, uses, etc. I asked for a copy of their manuals - they gave me a 1-day pass to the paid support area on their web site. I downloaded the manuals and spent a lot of time reading them.
12) I don't like the cost of the NetApp. I had prices for RAID systems that cost a good bit less than a same-sized NetApp. I think this is mainly due to the charge for software. This is only half a knock against NetApp: you're paying for an NFS server, but for my use as a db disk subsystem, I don't look at it that way. I needed and purchased a disk subsystem for a database system. For a db disk subsystem, the NetApp is being compared against RAID systems, not other NFS servers.
13) Don't broadcast that you're running a db on an NFS-mounted system unless you're willing to take the time to explain everything. It's so AGAINST everything that's taught about db disk subsystem design.
14) A goof I made - AIX supports jumbo packets for gigabit ethernet. I read an old press release from NetApp that indicated support for this. It turned out that jumbo packets aren't supported.
15) It still bugs me that you can't edit config files on the NetApp through the console. You have to mount the NetApp on another system and edit the files from there. This makes no sense to me. Even a stupid line editor would be better than nothing!
Final thoughts -
I like my NetApps very much. I wish they cost less. I recommend them very highly for db's.
----------------------------------------------------------------------
Richard L. Rhodes            e: rhodesr@firstenergycorp.com
Ohio Edison Co.              p: 330-384-4904
                             f: 330-384-2514
"Richard L. Rhodes" wrote:
- The one problem I do have, is that during a high sequential write
load, like creating an Oracle tablespace, the high sequential write rate locks out other processes from doing I/o to the netapp. For example, during a create of a 2gb db file, If I (in another window) try and do anything that does I/O, I can wait for 5/10/15 seconds for the command to come back. I'm not sure why this is, but I think this is a problem in AIX, NOT the netapp - because if during the massive sequential writes I try and access the netapp from another computer I get great response. AIX must be queing NFS I/O's or something - which doesn't make sense!
Yes, NFS version 3 does client-side caching. You could try switching to NFS version 2, but that might hurt your overall performance.
In my experience with AIX, it likes to buffer up a lot of writes in RAM, and when RAM is exhausted, AIX has to flush out those writes to the NFS server. While AIX is busy flushing the buffers, its interactive response can get very sluggish, just as you have seen.
I have seen the following behavior when copying a very large file from a local AIX disk to a netapp.
At first the disk reads on AIX go way up, but the network load stays low. On the netapp, the network, CPU and NFS ops stay low. Then suddenly AIX stops reading the disk and the network load goes way up. This drives up the network, NFS, and CPU load on the netapp. Then AIX goes back to reading the disk and the network load drops to nothing, etc.
It looks like AIX is flip-flopping between
1) reading the disk and buffering NFS writes to RAM, and
2) Flushing the buffered NFS writes to the netapp.
It's not doing both jobs simultaneously. This actually gets more pronounced the more RAM you have. Apparently AIX is more than happy to exhaust ALL of its RAM before commencing to flush the writes. And when you've got 1G or 2G of RAM, that's a lot to flush.
I think there are some tuning parameters that you can fiddle with to get AIX to be less bursty. You can tell AIX to go ahead and start flushing RAM buffers sooner. I don't know if this would help overall throughput or not. I haven't played with this at all, but our resident AIX expert has.
Steve Losen scl@virginia.edu phone: 804-924-0640
University of Virginia ITC Unix Support
"Richard L. Rhodes" wrote:
- Performance I'm getting? For sequential writes, like when I
create an oracle tablespace, I get between 15-20 MB/s. Random writes
NFS v2 and v3 handle writes differently. Which version do you use? Do you use UDP or TCP, and with what packet sizes?
- The one problem I do have, is that during a high sequential write
load, like creating an Oracle tablespace, the high sequential write rate locks out other processes from doing I/o to the netapp. For example, during a create of a 2gb db file, If I (in another window) try and do anything that does I/O, I can wait for 5/10/15 seconds for the command to come back. I'm not sure why this is, but I think this is a problem in AIX, NOT the netapp - because if during the massive sequential writes I try and access the netapp from another computer I get great response. AIX must be queing NFS I/O's or something - which doesn't make sense!
How many mount points do you use? NFS will block if you run everything through one mount point. bCandid, a Usenet news software vendor, recommends spreading activity across as many mounts as practicable. It does help performance here, though I still have trouble. If you do use just one mount point, try reworking your client configuration to use several mounts.
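As a sketch, spreading copy jobs round-robin across several mounts can look like this on the client side - the mount paths here are invented for illustration:

```python
# Distribute file-copy jobs round-robin across several NFS mounts of
# the same filer, so one busy mount point doesn't serialize everything.
# The mount paths are hypothetical examples, not real configuration.
import itertools

MOUNTS = ["/mnt/filer_a", "/mnt/filer_b", "/mnt/filer_c"]

def spread(files):
    """Pair each file with a mount point, cycling through the list."""
    return list(zip(files, itertools.cycle(MOUNTS)))

jobs = spread(["db1.dbf", "db2.dbf", "db3.dbf", "db4.dbf"])
for name, mount in jobs:
    print(name, "->", mount)
# db1 -> /mnt/filer_a, db2 -> /mnt/filer_b,
# db3 -> /mnt/filer_c, db4 -> /mnt/filer_a
```

Each job can then run as its own process against its assigned mount.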
- It still buggs me that you can't edit config files on the netapp
through the console. Your have to mount the netapp to a system and edit the files from that system. This makes no sense to me. Even a stupid line editor would be better than nothing!
I agree. I'm glad that ONTAP finally (in 5.3.4 and later) has command-line history.
On Tue, 11 Jan 2000, Michael S. Keller wrote:
"Richard L. Rhodes" wrote:
- It still buggs me that you can't edit config files on the netapp
through the console. Your have to mount the netapp to a system and edit the files from that system. This makes no sense to me. Even a stupid line editor would be better than nothing!
But you can! Of course you have to rewrite the entire file, but with a decent terminal emulator with a scroll buffer this isn't too hard. I did ask for ed/ex/vi/edlin, but my request fell on deaf ears. I wonder how hard it would be to use an editor written in Java (I now remember a quote from one of the NetApp engineers fearing what would happen when users get a whiff of the fact that NetApps run Java).
Tom
How exactly do you edit a file on the console rewriting the entire file then?
tkaczma@gryf.net wrote:
- But you can! Of course you have to rewrite the entire file, but with a decent terminal emulator with a scroll buffer this isn't too hard.
On Wed, 12 Jan 2000, Justin Acklin wrote:
How exactly do you edit a file on the console rewriting the entire file then?
toasty> wrfile /etc/newfile
blah blah blah
^C
read: error reading standard input: Interrupted system call
toasty> rdfile /etc/newfile
blah blah blah
?!?@#! rdfile? wrfile??? Why does ? not display these commands?
I tried cpfile but that returned an error.
Is there any way to copy files on the filer from the console? And is it a command you can rsh?
I've been looking for a way around nfs bandwidth problems -- one of the database update steps involves copying 1 to 2 G files from one vol mountpoint to another. This recent discussion has been very interesting but we're already using nfs v3, tcp, 32k packet size at the client (solaris 2.6) end. There may be more than one i/o process trying to use a single mount point; I'm going to run performance tests on that tonight.
But if I can just rsh to the filer and cp a file locally... problem SOLVED.
Jim Davis wrote:
toasty> wrfile /etc/newfile
blah blah blah
^C
read: error reading standard input: Interrupted system call
toasty> rdfile /etc/newfile
blah blah blah
?!?@#! rdfile? wrfile??? Why does ? not display these commands?
They're Sooper Sekret commands, that's why!
In other words, they are not tested and there's no guarantee that using them on an active filer won't corrupt your filesystem or cause crashes, especially through rsh.
Bruce
I don't think they are super secret, as you don't have to input the rc_sooper_sekret_handsake command to use them. It's more like they're not well documented. Now dd is a sooper sekret kommand.
Tom
On Wed, 12 Jan 2000, Bruce Sterling Woodcock wrote:
?!?@#! rdfile? wrfile??? Why does ? not display these commands?
They're Sooper Sekret commands, that's why!
In other words, they are not tested and there's no guarantee that using them on an active filer won't corrupt your filesystem or cause crashes, especially through rsh.
Bruce
I've been looking for a way around nfs bandwidth problems --
one of the database update steps involves copying 1 to 2 G files from one vol mountpoint to another.
To another mountpoint in the same vol, or another vol? If the former, just mount from the root of the volume and do the mv from there.
Bruce
Bruce,
Currently the copy is between vols.
These are mounted:
# mount -p | grep brs-data
wesson:/vol/brs2/argus/brs-data2 - /export/argus/brs-data2 nfs - no
smith:/vol/brs1/argus/brs-data1 - /export/argus/brs-data1 nfs - no
#
And files like this one get copied from 1 to 2 or vice versa daily:
# ls -l inv0.db
-rw-rw-rw-   1 siteadm  netsite  439784576 Jan  8 06:50 inv0.db
#
We've played with nfs a little bit on this box:
# uname -a ; nfsstat -m | head
SunOS load1 5.6 Generic_105181-16 sun4u sparc SUNW,Ultra-4
/export/argus/brs-data2 from wesson:/vol/brs2/argus/brs-data2
 Flags: vers=3,proto=tcp,sec=sys,hard,intr,link,symlink,rsize=32768,wsize=32768,retrans=5
/export/argus/brs-data1 from smith:/vol/brs1/argus/brs-data1
 Flags: vers=3,proto=tcp,sec=sys,hard,intr,link,symlink,rsize=32768,wsize=32768,retrans=5
There may be simultaneous multiple copies happening across these two mount points; I don't have any good numbers yet for sustained cp rates.
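For tonight's tests I'll probably use something like this crude harness to get a sustained MB/s number - the paths in the example are placeholders, not our real mounts:

```python
# Time a large sequential copy between two NFS mounts and report MB/s.
# Source/destination paths are placeholders for the real mount points.
import time

def copy_rate_mb_s(src, dst, bufsize=1 << 20):
    """Copy src to dst in 1 MB chunks; return throughput in MB/s."""
    copied = 0
    start = time.monotonic()
    with open(src, "rb") as fin, open(dst, "wb") as fout:
        while True:
            chunk = fin.read(bufsize)
            if not chunk:
                break
            fout.write(chunk)
            copied += len(chunk)
    elapsed = max(time.monotonic() - start, 1e-9)  # guard tiny files
    return copied / (1024 * 1024) / elapsed

# e.g. copy_rate_mb_s("/export/argus/brs-data1/inv0.db",
#                     "/export/argus/brs-data2/inv0.db")
```

Running several of these in parallel against one mount versus several mounts should show whether the single mount point is the bottleneck.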
The developers are asking for gigabit ethernet between the filers and the box that does the copies. Taking them out back and shooting 'em isn't currently an option, but we're looking into it for future releases.
Dave
Bruce Sterling Woodcock wrote:
I've been looking for a way around nfs bandwidth problems --
one of the database update steps involves copying 1 to 2 G files from one vol mountpoint to another.
To another mountpoint in the same vol, or another vol? If the former, just mount from the root of the volume and do the mv from there.
Bruce
The developers are asking for gigabit ethernet between the filers
and the box that does the copies. Taking them out back and shooting 'em isn't currently an option, but we're looking into it for future releases.
Well, yes, I would definitely agree you should be using a gigabit ethernet for your storage "backbone". This will greatly improve your performance.
Other than that, there aren't really any shortcuts. NAS is inherently slower than local disk or SAN for some types of internal database operations when clients aren't involved. The filer's other advantages generally make this trade-off more than worth it.
Bruce
If it's to different volumes, then NDMPCOPY!!! You should be able to find it in the tools section of NOW.
Tom
On Wed, 12 Jan 2000, Dave Toal wrote:
- Currently the copy is between vols.
If you have to copy a whole tree I recommend using ndmpcopy. It instructs the NDMP server on the filer to copy the data to another NDMP server. The nice thing is that the destination server can be on the same machine, so your data doesn't go over the network. It's fast and reliable (AFAIK). You can find ndmpcopy on http://now.netapp.com in the tools section (but only if you have a javascript-able browser ;-) )
Oliver
-----Original Message-----
From: owner-dl-toasters@netapp.com [mailto:owner-dl-toasters@netapp.com] On Behalf Of Dave Toal
Sent: Thursday, 13 January 2000 00:13
To: toasters@mathworks.com
Subject: Re: NetApp and RDBMS (Oracle/Sybase)
- I've been looking for a way around nfs bandwidth problems -- one of the database update steps involves copying 1 to 2 G files from one vol mountpoint to another.
rdfile <filename> to write it out to the screen
wrfile <filename> to write to the file (Ctrl-C to end); it will give you a "bogus" error message at the end
Tom
On Wed, 12 Jan 2000, Justin Acklin wrote:
How exactly do you edit a file on the console rewriting the entire file then?