No one throughout recorded history, apart from assorted nuts, has ever believed that the Earth was flat; Eratosthenes calculated both its circumference (to within 1% of the true value) and its tilt relative to the plane of the ecliptic no later than 200 B.C. The only real problem was that of terra incognita - i.e., nobody knew exactly where the landmasses were located until a) someone sailed there, and b) accurate chronometers were developed by John Harrison in the 18th Century, enabling navigators to calculate longitude with a high degree of accuracy and relay that information to cartographers.
Advances in spherical geometry a la Mercator assisted the latter group, of course.
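As an aside, the calculation itself is just a proportion: measure the Sun's angle from the vertical at one city while it stands directly overhead at another a known distance away, and that angle's fraction of a full circle scales the distance up to the whole circumference. A quick sketch using the traditionally quoted figures (about 7.2 degrees at Alexandria, roughly 5,000 stadia from Syene) rather than anything of my own:

    # Eratosthenes' proportion: the shadow angle is to 360 degrees as the
    # distance between the two cities is to the Earth's circumference.
    # Figures below are the traditionally reported ones.
    shadow_angle_deg = 7.2      # Sun's angle from vertical at Alexandria at noon
    distance_stadia = 5000      # reported Syene-to-Alexandria distance
    circumference = distance_stadia * 360 / shadow_angle_deg
    print(circumference)        # 250000.0 stadia -- the classical result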
Historical canards aside, let me restate that I'm very interested in hearing about production experience with NetApp filers and Oracle over NFS. I've a 740 with a Gigabit Ethernet interface, plugged into a Catalyst 5509 doing MPLS, and so would be willing to entertain the notion if someone can give me anything beyond benchmarks.
I know all about snapshots and all that, by the bye. It's -performance- which is the question.
Thanks for the link; I'll be sure to check it out.
----------------------------------------------------------- Roland Dobbins rdobbins@netmore.net // 818.535.5024 voice
-----Original Message----- From: Keith Brown [mailto:keith@netapp.com] Sent: Tuesday, August 08, 2000 5:48 PM To: rdobbins@netmore.net; Perry.Jiang@bmo.com Cc: toasters@mathworks.com Subject: Filer storage for databases, seriously? (Was: Re: NetApp questions)
As to running Oracle with the data and logfiles on a filer via NFS, I should think that even with a NetApp using Gigabit Ethernet, you'd take a -huge- performance hit as compared to a local disk array.
Beware conventional wisdom Roland. People used to think the Earth was flat too. :-)
While I wouldn't be so bold as to *guarantee* performance boosts in utilizing the filer storage approach for every database application under the Sun, the simple fact is that filers contain a myriad of features that are very attractive to the database market, and NetApp now draws a significant and growing portion of its revenues from this space.
Snapshots & SnapRestore greatly simplify and enhance database backup and restore environments. The WAFL RAID design puts failure resiliency into the disk subsystem without forcing you to take the performance hits inherent in general-purpose RAID-5 designs or go to disk-doubling RAID-1 approaches. SnapMirror gives you database replication to offsite locations for disaster recovery purposes. WAFL's ready expandability lets you make room for growing databases without disrupting their operation. The list goes on...
Oh.. and yes... performance very often gets a shot in the arm too!
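To make the snapshot point concrete: the usual Oracle pattern is to put the tablespaces into hot-backup mode, take a filer snapshot (which completes in seconds regardless of database size), then take them back out; SnapRestore plus a roll-forward of the archived redo logs is the corresponding restore path. Here's a rough sketch of that flow -- the filer name "toaster1", volume "oravol", and the single "users" tablespace are placeholders, and it assumes you can reach the filer with rsh and drive SQL*Plus from the shell:

    import subprocess

    FILER = "toaster1"        # placeholder filer hostname
    VOLUME = "oravol"         # placeholder volume holding the datafiles
    SNAPNAME = "hotbackup.1"

    def sqlplus(statement):
        # Run one statement through SQL*Plus as SYSDBA (assumes OS authentication).
        subprocess.run(["sqlplus", "-s", "/ as sysdba"],
                       input=statement + "\n", text=True, check=True)

    # 1. Freeze the datafile headers; redo keeps the database consistent meanwhile.
    sqlplus("ALTER TABLESPACE users BEGIN BACKUP;")

    # 2. Take the near-instantaneous filer snapshot (Data ONTAP 'snap create').
    subprocess.run(["rsh", FILER, "snap", "create", VOLUME, SNAPNAME], check=True)

    # 3. Resume normal operation.
    sqlplus("ALTER TABLESPACE users END BACKUP;")

Adjust the names and loop over all your tablespaces in practice; the point is that the copy itself costs essentially nothing.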
I've no empirical data to back this up, mind you;
Don't worry. Nobody ever does, not even our direct-attach competitors, and they can't be too harshly criticized for it. Meaningful performance comparisons are tricky to architect and usually have a short shelf life, and customers have an understandable tendency not to believe vendor-funded benchmarks anyway (since the vendor performing and/or funding the benchmark almost always wins!).
Nevertheless, we did publish a relatively innocuous one some time ago, which can be viewed here:
http://www.netapp.com/tech_library/3044.html
it's just that there's so much overhead associated with NFS even on an optimized platform like the NetApp filer, I can't see it as being a win.
There are certainly some "swings-and-roundabouts"-type trade-offs to consider when weighing the two approaches, and some people do conclude that there is more overhead in the network-attach approach and dismiss it offhand. However, as far as performance goes, all the theory in the world is no substitute for the practical experience of trying a solution with the application you actually have and seeing for yourself how well it performs.
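And if you'd rather have a number than a promise before committing hardware, even a crude streaming-write comparison between a local directory and the NFS-mounted filer tells you more than any vendor document. A rough sketch -- the two paths below are placeholders for your own local scratch area and filer mount, and a sequential write of zeros is obviously not an Oracle workload, just a sanity check:

    import os, time

    def write_mb_per_sec(path, total_mb=256, block_kb=64):
        # Stream total_mb of data to 'path' in block_kb chunks and report MB/s.
        block = b"\0" * (block_kb * 1024)
        fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC)
        start = time.time()
        try:
            for _ in range(total_mb * 1024 // block_kb):
                os.write(fd, block)
            os.fsync(fd)          # make sure the data really reached the disk/server
        finally:
            os.close(fd)
            os.unlink(path)
        return total_mb / (time.time() - start)

    # Placeholder paths -- substitute your own local and filer-mounted directories.
    print("local disk :", write_mb_per_sec("/u02/scratch/testfile"), "MB/s")
    print("NFS filer  :", write_mb_per_sec("/mnt/filer/testfile"), "MB/s")

The real test, of course, is still running the actual database against it.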
If there's anyone out there with Oracle experience on filers via NFS, either pro or con, I'd love to hear from you.
I'm hoping there will be some on this list. As I mentioned, beware conventional wisdom. America might have been discovered hundreds of years before Columbus sailed over the horizon, if only all his predecessors hadn't been terrified of falling off the edge of the world!
Keith
Eratosthenes calculated both its
Incidentally, I once had a very bad case of Eratosthenes, but my doctor gave me some ointment that cleared it right up. :-)
Anyway...
Does the Secure Administration option (i.e., ssh) for filers support scp to copy the /etc/passwd file across, if you're using quotas?
Alas not at this time. For reasons I am not entirely familiar with, SecureAdmin 1.0 currently only supports ssh clients. For now anyway....
Keith
No one throughout recorded history, apart from assorted nuts, has ever believed that the Earth was flat;
This is waaay off-topic, but your statement isn't true. Yes, for thousands of years many learned people believed the Earth was a sphere, though their estimates of its size varied. And Columbus didn't have to convince royalty that the Earth was round. But the average extremely uneducated and superstitious person of the day really *did* think the Earth was flat.
I know all about snapshots and all that, by the bye. It's -performance- which is the question.
Theoretically, local disk should always be faster if everything else were equal. But everything else isn't equal... it's certainly possible that the overhead of the local UNIX software and hardware is great enough to offset the advantage of local disk. It all depends.
Bruce