Right, of course if you misconfigure something you open yourself up to mischief. What I want to get at is whether it is safe to have a filer attached to the net at large, or whether it needs to be firewalled or otherwise protected. Can you elaborate on the NFS insecurities you mention?
NFS is a stateless protocol, which means that there is no active "session". In fact, the server can reboot and NFS clients will only see a delay in response; they won't lose their mounts and they will continue on as if nothing happened. If NFS depended on a "session", then a server reboot would end all sessions and every client would have to "reconnect". To accomplish this statelessness, the NFS server hands out "file handles" to clients. These are like "tickets" that the client presents to get the server to do something. It is possible for a client to "doctor" NFS file handles to gain access to unauthorized portions of a volume. So if you export a subtree of a volume to an untrusted client, someone unscrupulous on that client could (with the right software) use modified file handles to obtain access to the entire volume, not just the exported subtree.
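To make the file handle problem concrete, here is a rough sketch in the spirit of a classic NFSv2-style server. The layout (fsid, inode, generation) and the numbers are purely illustrative assumptions, not the filer's actual handle format; the point is that a handle is just a small opaque blob, and a client that can guess its contents can fabricate handles for files it was never given.

    import struct

    # Illustrative only: many traditional NFS servers packed something like
    # (filesystem id, inode number, generation count) into the opaque handle.
    def make_handle(fsid, inode, generation):
        # NFSv2 handles are 32 bytes; pad the packed fields with zeros.
        return struct.pack("!III", fsid, inode, generation).ljust(32, b"\x00")

    legit  = make_handle(fsid=0x0801, inode=12345, generation=7)  # handle the server issued
    forged = make_handle(fsid=0x0801, inode=2, generation=1)      # guessed handle for another inode

A server that blindly honors any well-formed handle will happily serve the forged one, which is why exporting only a subtree to an untrusted client is not a real security boundary.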
You export file systems (or subtrees of them) to particular hosts. NFS has no way other than the IP address to tell which host is which; there is no independent host authentication. So NFS is vulnerable to IP spoofing (but then, so is almost everything else on the internet). IP spoofing is mainly an internal network problem. A machine at another site cannot masquerade as a machine at your site unless all the IP routers between the sites have also been compromised (not likely). But it's trivial for one machine to masquerade as another machine on the same ethernet. So you need to keep intruders off of the machines on your network.
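For example, a per-host export in the filer's /etc/exports looks roughly like this (the hostnames are made up and the exact option syntax varies by Data ONTAP release), and every one of those names ultimately resolves to nothing stronger than an IP address:

    /vol/vol0/home  -access=webclient1:webclient2,root=adminhost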
Also, you can telnet to a netapp and do anything that you can do on the system console. All telnet sessions should be across a trusted network to avoid network sniffing. In other words, you don't want to telnet to your netapp from a remote site unless you are SURE no one is sniffing packets in between. Otherwise someone can sniff your netapp's password when you type it in. This is basic security stuff and is true for telnetting anywhere.
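If you want to tighten that up a bit, ONTAP has an option for restricting which hosts may open administrative telnet/rsh sessions at all; I believe it is called trusted.hosts, but check the documentation for your release. Something along these lines:

    filer> options trusted.hosts adminhost1,adminhost2

That doesn't help against sniffing, of course, so the advice above still applies.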
What kind of real-world bandwidth can an F740 put out? I see the transaction specs, but what about sustained Mbit/sec in a web-type environment (roughly 3x more reads than writes)?
Netapp prefers to measure "response time" rather than just "transactions/sec". A filer might show tremendous throughput for one party while starving out the others (delayed responses). In the real world, you want average response time to be as fast as possible because that is what people notice. Netapp has very impressive throughput, and average response time does not degrade dramatically at high loads.
Excellent, but is the filer capable of saturating 100Mbit ethernet? If so, just how much bandwidth should we spec out for it so that it can perform to its maximum ability?
We have an F630 with two 100Mbit interfaces. We weren't able to drive either interface at 100%, but we could drive them both at over 50%. When we pushed the equivalent of 100Mbit/sec to the 630, the CPU load was about 75%. So for our 630, it would take "between one and two" 100Mbit interfaces to swamp the CPU.
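The arithmetic behind "between one and two", assuming (naively) that CPU load scales roughly linearly with throughput:

    # Back-of-the-envelope extrapolation; linear scaling is an assumption,
    # not something we measured all the way to saturation.
    throughput_mbit = 100.0   # load we actually pushed to the F630
    cpu_load = 0.75           # CPU utilization observed at that load
    print("Estimated saturation: %.0f Mbit/sec" % (throughput_mbit / cpu_load))
    # -> roughly 133 Mbit/sec, i.e. more than one but less than two 100Mbit interfaces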
Of course, there are brief periods when a netapp CPU will go to 100%. This is because certain operations are very computationally intensive (such as creating or deleting a snapshot). The filer continues serving files just fine, but any ordinarily idle cycles are soaked up by the operation, so the CPU is 0% idle (100% busy) for several seconds. Turning quotas on also pegs the CPU for a while.
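If you want to watch this happen, the filer's sysstat command prints CPU utilization (along with ops/sec and network throughput) at whatever interval you give it, e.g.:

    filer> sysstat 1

Run that during a snapshot create/delete or while quotas are initializing and you'll see the idle cycles disappear while file service carries on.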
Steve Losen scl@virginia.edu phone: 804-924-0640
University of Virginia ITC Unix Support