On 10 Jan 00, at 21:04, Thomas Leavitt wrote:
Dear all... we are researching the purchase of a Network Appliance F740, one of whose primary uses would be serving a commercial RDBMS (either Sybase or Oracle). Until I looked into this last week, I was unaware this was possible and officially supported... I'm very enthusiastic about the possibility - so far, feedback on this has been generally positive. I'm looking for as many people as possible to help validate...
Thomas
Hi Thomas,
Last year I purchased 2 F740's for 2 Oracle database servers. While I haven't put them into production yet, I have databases running on them and have done many, many, many imp runs of the databases I need to transfer. Here are my thoughts on the subject:
1) Reliable - so far. They have been rock-solid reliable (though I haven't had them for very long).
2) Easy to configure. I didn't go to any classes, just read the manual. Two subjects took some extra work to figure out: qtrees and the way snapshot space is allocated.
3) Remotely supportable. I needed disk subsystems that I could put in remote locations and support from a central site. I was greatly disappointed with the remote support capability of every RAID system I looked at. The NetApp's remote support capabilities, while not perfect, are top notch.
4) Fast writes. Writes to the NVRAM are very fast as long as your write rate doesn't overflow the NVRAM. The NVRAM is divided into 2 halves. While one half is accepting writes, the other half is being flushed to disk. After the half receiving writes is full, the sides switch: the now-full side starts writing to disk and the other side accepts the writes. The problem comes when your write load fills up one side before the other side is done being written to disk - at that point you can't get any more write performance out of it. It's kind of like Oracle log switches in archive log mode: if Oracle wraps around the logs but the next log hasn't been archived yet, Oracle comes to a halt until the archiving is done.
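To make the two-half NVRAM behavior concrete, here's a toy simulation of the scheme described above. The sizes and rates are invented for illustration - this is not NetApp's actual firmware logic, just the switching/stall pattern:

```python
# Toy model of the two-half NVRAM scheme: one half accepts writes while
# the other drains to disk. All numbers are made up for illustration.

class NvramHalves:
    def __init__(self, half_size_mb, flush_rate_mb_s):
        self.half_size = half_size_mb
        self.flush_rate = flush_rate_mb_s
        self.active_fill = 0.0   # MB in the half currently accepting writes
        self.flush_left = 0.0    # MB still draining from the other half

    def tick(self, write_mb, dt=1.0):
        """Advance one time step; return MB of writes actually accepted."""
        # The inactive half drains to disk at the flush rate.
        self.flush_left = max(0.0, self.flush_left - self.flush_rate * dt)
        # Incoming writes stall once the active half is full.
        room = self.half_size - self.active_fill
        accepted = min(write_mb, room)
        self.active_fill += accepted
        # Sides switch only when the active half is full AND the other
        # half has finished draining - otherwise writers wait.
        if self.active_fill >= self.half_size and self.flush_left == 0.0:
            self.flush_left = self.active_fill
            self.active_fill = 0.0
        return accepted

nv = NvramHalves(half_size_mb=16, flush_rate_mb_s=10)
for second in range(6):
    got = nv.tick(write_mb=20)   # offered 20 MB/s, disks drain only 10 MB/s
    print(f"t={second}s accepted {got:.0f} MB")
```

When the offered write rate exceeds the flush rate, the accepted rate collapses to zero in the ticks where both halves are busy - the same stall the Oracle log-switch analogy describes.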
5) The performance I'm getting? For sequential writes, like when I create an Oracle tablespace, I get between 15-20 MB/s. Random writes are around 1.5-2 ms according to the disk benchmark I used for testing, but I'm not sure of this number. I use AIX systems and the disk benchmark (stkio from STK) works, but IBM does some strange things with NFS that make benchmarking tough - at least it appears that way. From what I can find out, NFS writes aren't supposed to be buffered on the client, but it sure looks like AIX is doing something like buffering on the NFS client. I'm still looking into this. The sequential write rates are not as good as some RAID systems. I had a HD RAID system here that did sequential writes at 35-40 MB/s, but it took the HD people a full month to get it to the place where I could test with it - fast, but completely unsupportable! This is an area where NetApp needs to do more work in order to sell NetApps as db disk subsystems. They need to start running/publishing standard disk benchmarks - treating the NetApp as a disk subsystem instead of as an NFS server. See the next comment below for more . . . .
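If you want to sanity-check the sequential write rate yourself, a minimal timing loop like the one below works on any NFS mount. The path and sizes are placeholders; the fsync is the important part, since client-side caching (the AIX behavior I mentioned) can otherwise make the number meaningless:

```python
# Minimal sequential-write check, treating the filer as a disk subsystem.
# Point PATH at a file on the NFS-mounted volume; path and sizes here
# are placeholders, not a recommendation.
import os
import time

PATH = "/tmp/seqwrite.test"   # e.g. a file on the filer's NFS mount
BLOCK = b"\0" * (1 << 20)     # 1 MB per write
COUNT = 64                    # 64 MB total

fd = os.open(PATH, os.O_WRONLY | os.O_CREAT | os.O_TRUNC)
t0 = time.time()
for _ in range(COUNT):
    os.write(fd, BLOCK)
os.fsync(fd)   # force data to the server; client buffering hides the truth otherwise
os.close(fd)
elapsed = time.time() - t0
print(f"{COUNT / elapsed:.1f} MB/s sequential write")
os.remove(PATH)
```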
6) Be careful in interpreting NetApp's benchmarks. They are NFS benchmarks, NOT disk I/O benchmarks. Check carefully what's in the NFS benchmarks - it's some type of simulated load of a development environment with some percentage mix of reads, writes, catalog access, etc. This in NO WAY describes a db load. I'm not faulting NetApp here; they're very clear on their web site about what the benchmarks are and what they represent. I just found it difficult translating NFS ops/s into MB/s or disk I/O's per second. Again, they need to start running/publishing "disk" benchmarks to better sell a NetApp as a db disk subsystem. I'd like to see them publish some info like Performance Computing puts in their mag when they test a RAID system.
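The ops/s-to-MB/s translation can at least be roughed out if the benchmark discloses its op mix. The mix fractions and transfer sizes below are invented for illustration - plug in the actual numbers from the benchmark's disclosure:

```python
# Back-of-envelope translation of an NFS ops/s figure into MB/s of file
# data. The op mix and per-op payload sizes are invented for illustration.
ops_per_sec = 5000
mix = {                       # fraction of ops, payload bytes per op
    "read":    (0.30, 8192),
    "write":   (0.15, 8192),
    "lookup":  (0.40, 0),     # metadata ops move no file data
    "getattr": (0.15, 0),
}
bytes_per_sec = sum(ops_per_sec * frac * size for frac, size in mix.values())
print(f"{bytes_per_sec / 2**20:.1f} MB/s of actual file data")
```

The point the arithmetic makes: with a metadata-heavy mix, a big ops/s number can correspond to a modest data rate, which is exactly why an NFS benchmark says little about a db disk load.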
7) There is extra load on your db server due to NFS - this can't be avoided. In general, though, other than when I'm streaming 20 MB/s, the NFS load isn't too bad. At 20 MB/s streaming, my RS/6K-F50 (4 processors, 1 GB) is running 25-30% in the NFS client (biods). During more normal processing it's barely noticeable.
8) The one problem I do have is that a high sequential write load, like creating an Oracle tablespace, locks out other processes from doing I/O to the NetApp. For example, during a create of a 2 GB db file, if I (in another window) try to do anything that does I/O, I can wait 5/10/15 seconds for the command to come back. I'm not sure why this is, but I think this is a problem in AIX, NOT the NetApp - because if, during the massive sequential writes, I try to access the NetApp from another computer I get great response. AIX must be queueing NFS I/O's or something - which doesn't make sense!
9) I love the way you tune your Oracle db file layout for the NetApp - you just don't worry about it. The NetApp automatically stripes things. This makes life real easy.
10) The NetApp makes normal disk monitoring in unix more difficult. In vmstat/iostat you get wait-for-I/O when processes are waiting on blocked I/O. At least on AIX, NFS I/O doesn't show up as wait-for-I/O in vmstat or iostat. Sometimes I see little processor time being used and little I/O to the NetApp, and I find myself scratching my head wondering what's happening. When I run the same thing on a "normal" disk system I see that it's all wait-for-I/O - in other words, a very heavy random read/write environment.
11) I'm impressed with NetApp as a company. They let employees answer on the mailing list - I know of no other company that does this. These people have even admitted to problems and mistakes (this is a strength, not a weakness)! The salesman/tech rep I worked with were good. The salesman was a salesman, but not pushy like some I've met. The tech rep was top notch. They seem to let their product do the selling. I've sat in too many sales meetings (IBM, EMC, HDS, etc.) where they sell you their company and give hype about their products. You know the kind of meeting . . . . you walk out and you really don't know any more about their products than when you went in - lots of words, little real info! I felt that we got straight answers about their products. I think this is also seen on their web site. You can download very detailed info on performance, design, uses, etc. I asked for a copy of their manuals - they gave me a 1-day pass to the paid support area on their web site. I downloaded the manuals and spent a lot of time reading them.
12) I don't like the cost of the NetApp. I had prices for RAID systems that cost a good bit less than a same-sized NetApp. I think this is mainly due to the charge for software. This is only half a knock against NetApp. You're paying for an NFS server, but for my use as a db disk subsystem, I don't look at it that way. I needed and purchased a disk subsystem for a database system. As a db disk subsystem, the NetApp is being compared against RAID systems, not other NFS servers.
13) Don't broadcast that you're running a db on an NFS-mounted system unless you're willing to take the time and explain everything. It's so AGAINST everything that's taught about db disk subsystem design.
14) A goof I made - AIX supports jumbo frames for gigabit ethernet, and I had read an old press release from NetApp that indicated support for this. It turned out that jumbo frames aren't supported.
15) It still bugs me that you can't edit config files on the NetApp through the console. You have to mount the NetApp on another system and edit the files from that system. This makes no sense to me. Even a stupid line editor would be better than nothing!
Final thoughts -
I like my NetApps very much. I wish they cost less. I recommend them very highly for db's.
----------------------------------------------------------------------
Richard L. Rhodes          e: rhodesr@firstenergycorp.com
Ohio Edison Co.            p: 330-384-4904
                           f: 330-384-2514