I've got two F740's on order, whose sole task will be to provide storage for a couple of Sun E450 Oracle boxes. I plan to have each filer attached to both E450's via crossover full-duplex 100baseT, plus a third 100baseT on the filer to the rest of the LAN. Is there any reason to go with the QFE board besides port density (which I don't really need)? Are there plans to do FastEtherChannel on it?
The F740 has six available PCI slots, and I will likely not require more than the one on-board FC-AL controller (plus perhaps an additional one for clustering). Three single Fast Ethernet interfaces are cheaper than one QFE, failure of one only downs one interface, the load can be spread over more than one PCI bus (I don't know if this is an issue on the Alpha boards), and they are cheaper to replace. OTOH, will the clustering software fail over a filer if any one of its NIC's dies, and will that be supported on both the single and quad NIC's?
Hi Brian,
Is there any reason to go with the QFE board besides port density (which I don't really need)? Are there plans to do FastEtherChannel on it?
As of DataONTAP Release 5.1, we support Cisco Fast EtherChannel (as well as other vendors' port trunking solutions) on our single FE boards, our QFE boards, or a mix of both. Please note that on the new F740 and F760 filers we support up to 3 individual 10/100BaseT NIC's *or* 3 QFE NIC's. Therefore, when you include the onboard 10/100BaseT NIC, you have a maximum of 4 single Fast Ethernet interfaces, or 13 Fast Ethernet interfaces using 3 QFE's plus the onboard interface.
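As a quick sanity check of those counts, here is a minimal back-of-the-envelope sketch in Python; the constants simply restate the limits quoted above and are not an official configuration rule:

```python
# Back-of-the-envelope check of the interface counts quoted above.
# Assumes one onboard 10/100BaseT port plus up to 3 add-in cards,
# as stated in the post; nothing here is an official spec.
ONBOARD_PORTS = 1
MAX_ADDIN_CARDS = 3

single_fe_max = ONBOARD_PORTS + MAX_ADDIN_CARDS * 1  # 3 single FE NICs -> 4 ports
qfe_max = ONBOARD_PORTS + MAX_ADDIN_CARDS * 4        # 3 QFE NICs       -> 13 ports

print(single_fe_max, qfe_max)  # 4 13
```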
OTOH, will the clustering software fail over a filer if any one of its NIC's dies, and will that be supported on both the single and quad NIC's?
Unfortunately, one of the things that won't automatically trigger a failover in our first Clustered Failover release is a "normal" interface link failure (OTOH, if a NIC fries its PCI slot, we will fail over). Using a Cisco Fast EtherChannel across at least two interfaces would be a nice way to protect yourself from a single interface failure on a filer.
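To illustrate the idea only (this is not NetApp's or Cisco's implementation, and the link names and hash below are hypothetical), a trunking layer spreads conversations across member links and simply stops using a link that dies:

```python
# Illustrative only: a toy model of Fast EtherChannel-style trunking.
# Real FEC hashes on addresses in the switch and host driver; the link
# names and the hash used here are hypothetical.

class Trunk:
    def __init__(self, links):
        self.links = list(links)      # e.g. two filer interfaces, "e0" and "e1"
        self.up = set(self.links)     # members currently carrying traffic

    def link_down(self, link):
        # Losing one member shrinks the pool; the trunk itself stays up.
        self.up.discard(link)

    def pick_link(self, src_mac, dst_mac):
        # Deterministic per-conversation choice, spread over the live members.
        live = sorted(self.up)
        if not live:
            raise RuntimeError("all trunk members are down")
        return live[hash((src_mac, dst_mac)) % len(live)]

trunk = Trunk(["e0", "e1"])
print(trunk.pick_link("00:a0:98:xx", "08:00:20:yy"))  # one of e0 / e1
trunk.link_down("e0")                                 # single NIC failure
print(trunk.pick_link("00:a0:98:xx", "08:00:20:yy"))  # traffic continues on e1
```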
Please let me know if I can be of further help, -Val.
============================================== Val Bercovici (613)724-8674 Systems Engineer valb@netapp.com Network Appliance www.netapp.com Ottawa, Canada FAST,SIMPLE,RELIABLE ==============================================
On Fri, 18 Sep 1998, Val Bercovici wrote:
Unfortunately, one of the things that won't automatically trigger a failover in our first Clustered Failover release is a "normal" interface link failure (OTOH, if a NIC fries its PCI slot, we will fail over).
When that happens, is the entire filer shut down and control transferred to the other filer, or is the software able to fail over individual interfaces (and, say, not take over the disk shelves)? In a failover situation, since the MAC addresses are migrated, do we need enough physical Ethernet interfaces on one filer to cover for both, or can one port appear with more than one MAC address?
During a takeover, the entire workload of the failed filer is transferred over to the remaining filer by having the remaining filer take over the failed filer's disks and network interfaces. No additional (redundant) Fast Ethernet network interfaces are required for the remaining filer to do this, since we support multiple MAC and IP addresses on our Fast Ethernet interfaces.
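A minimal sketch of that takeover model, with made-up interface names and addresses (this is not the actual Data ONTAP mechanism, just the idea that one physical port can answer for both filers' MAC/IP pairs):

```python
# Toy model of a cluster takeover: the surviving filer's port starts
# answering for its partner's MAC/IP pairs as well as its own.
# All interface names and addresses below are made up.

class Interface:
    def __init__(self, name, mac, ip):
        self.name = name
        self.addresses = [(mac, ip)]      # one port can hold several pairs

    def assume(self, mac, ip):
        self.addresses.append((mac, ip))  # add the failed partner's identity

filer_a_e0 = Interface("e0", "00:a0:98:00:00:01", "10.0.0.1")

# Partner filer B fails: A's e0 also answers for B's MAC and IP.
filer_a_e0.assume("00:a0:98:00:00:02", "10.0.0.2")

print(filer_a_e0.addresses)
# [('00:a0:98:00:00:01', '10.0.0.1'), ('00:a0:98:00:00:02', '10.0.0.2')]
```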
One quick note (I've been politely reminded of this by our product manager <g>): our first Clustered Failover release will not support Cisco Fast EtherChannel or other port trunking solutions. So in the short term you'll have to decide if you want automated network interface failure protection or automated CPU / RAM / PCI failure protection.
-Val. ============================================== Val Bercovici (613)724-8674 Systems Engineer valb@netapp.com Network Appliance www.netapp.com Ottawa, Canada FAST,SIMPLE,RELIABLE ==============================================
One quick note (I've been politely reminded of this by our product manager <g>): our first Clustered Failover release will not support Cisco Fast EtherChannel or other port trunking solutions. So in the short term you'll have to decide if you want automated network interface failure protection or automated CPU / RAM / PCI failure protection.
"Short-term" == "we're going to implement failover using FEC sooner or later?"
yes/no?
Nick Hilliard Ireland On-Line System Operations
One quick note (I've been politely reminded of this by our product manager <g>): our first Clustered Failover release will not support Cisco Fast EtherChannel or other port trunking solutions. So in the short term you'll have to decide if you want automated network interface failure protection or automated CPU / RAM / PCI failure protection.
"Short-term" == "we're going to implement failover using FEC sooner or later?"
yes/no?
Yes. Our goal is to eventually support FEC during a failover. I have no idea what timeframe we are targeting for releasing that functionality.
-Val.
============================================== Val Bercovici (613)724-8674 Systems Engineer valb@netapp.com Network Appliance www.netapp.com Ottawa, Canada FAST,SIMPLE,RELIABLE ==============================================
Unfortunately, one of the things that won't automatically trigger a failover in our first Clustered Failover release is a "normal" interface link failure (OTOH, if a NIC fries its PCI slot, we will fail over).
When that happens, is the entire filer shut down and control transferred to the other filer?
I assume that by "fries its PCI slot" Val meant "fries its PCI slot so badly that the whole machine dies", in which case the failover occurs *because* the entire filer shut down.
We don't do failover to the cluster partner if one particular component of a machine dies but leaves the machine still running.
EtherChannel support is lurking somewhere, I suspect.
Bigger pipe for future?
I suspect one 100TX-FD connection cannot saturate an F740 with most any workload - so to most effectively exploit the E450 you want more than a 100TX-FD between it and the F740.
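For a rough sense of the numbers (back-of-the-envelope only, ignoring NFS and Ethernet overhead): 100BaseT full duplex gives 100 Mbit/s in each direction, roughly 12.5 MB/s of raw ceiling per direction, so a filer that can serve more than that has headroom a single link can't reach:

```python
# Rough arithmetic only; real NFS throughput is lower once protocol and
# Ethernet overhead are taken into account.
link_mbit = 100                  # 100BaseT, per direction when full duplex
link_mbyte = link_mbit / 8       # 12.5 MB/s theoretical ceiling per direction

for n_links in (1, 2, 3):        # separate point-to-point links or a trunk
    print(n_links, "link(s):", n_links * link_mbyte, "MB/s ceiling per direction")
```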
But, then again, I could be wrong....
beepy
On Fri, 18 Sep 1998, Brian Pawlowski wrote:
I suspect one 100TX-FD connection cannot saturate an F740 with most any workload - so to most effectively exploit the E450 you want more than a 100TX-FD between it and the F740.
We'll have to see what sort of load Oracle on the two E450's can generate to the two F740's. I would think that Oracle will be happy with two full-duplex 100 Mbps links to the Netapps, but it routinely surprises me with the amount of resources it can consume. :-/
I'm told Solaris 2.5.1 and 2.6 will do FastEtherChannel if you have a QFE board plugged into your Sun, but I can only find a single reference to it on Sun's web site, and it's protected by an access password. :-/
On Fri, 18 Sep 1998, Brian Pawlowski wrote:
I suspect one 100TX-FD connection cannot saturate an F740 with most any workload - so to most effectively exploit the E450 you want more than a 100TX-FD between it and the F740.
We'll have to see what sort of load Oracle on the two E450's can generate to the two F740's. I would think that Oracle will be happy with two full-duplex 100 Mbps links to the Netapps, but it routinely surprises me with the amount of resources it can consume. :-/
I should always be more verbose.
The way I look at two machines attached to one filer with random user activity is that sometimes one machine will be idle and the other working, since user workload patterns are bursty (though a large number of clients will present an aggregate workload that is smoother?), and it sure would be nice if you could get to all the available bandwidth of the filer from one machine.
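That smoothing intuition can be sanity-checked with a toy simulation (the workload numbers are invented purely for illustration): many individually bursty clients sum to a load whose relative swings are much smaller than any single client's.

```python
# Toy check that many bursty clients aggregate into a smoother load.
# The workload numbers are invented purely for illustration.
import random

random.seed(1)

def bursty_client():
    # idle-ish most of the time, bursting occasionally
    return random.choice([1, 1, 1, 1, 20])

samples = 10_000
one = [bursty_client() for _ in range(samples)]
fifty = [sum(bursty_client() for _ in range(50)) for _ in range(samples)]

def relative_spread(xs):
    mean = sum(xs) / len(xs)
    var = sum((x - mean) ** 2 for x in xs) / len(xs)
    return (var ** 0.5) / mean    # coefficient of variation

print("1 client  :", round(relative_spread(one), 2))    # big relative swings
print("50 clients:", round(relative_spread(fifty), 2))  # much smoother
```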
That's why I suggested considering Etherchannel or some other way to get beyond one 100TX-FD.
I hate seeing idle filers:-)
I've got two F740's on order, whose sole task will be to provide storage for a couple of Sun E450 Oracle boxes. I plan to have each filer attached to both E450's via crossover full-duplex 100baseT, plus a third 100baseT on the filer to the rest of the LAN. Is there any reason to go with the QFE board besides port density (which I don't really need)?
At the risk of being beaten up again for trying to be helpful, there might be a small advantage to *not* using the quad card. Back when the F630 was new, I did some NetBench runs with a quad card versus four separate 10/100 NICs. The quad card was in the unbridged PCI slot, but it has an on-board bridge. Only one of the 10/100 cards could be in the unbridged slot, but that did give us one pipe which was not behind a bridge, unlike the quad card. We therefore expected that we might see some advantage with the individual cards.
In fact, we did -- as I recall, the curve for the config with the quad card was about 5% lower than with individual 10/100 cards. I thought this was published in one of the NetBench tech reports, but I don't see it. There has been a lot of tuning on the bridge parameters and other features since then, so I'd expect the effect to be reduced if it's even still measurable. (Those tests were run on the very first production F630 -- I watched 'em hand-build parts of it because they weren't really ready yet. We try to make sure our products work correctly when they first ship, with fine-tuning of performance later, rather than going as fast as possible even if it means data loss.)
Before someone asks, putting a quad card in a bridged slot, therefore having two bridges in the way instead of one, didn't seem to compound the slight performance shortfall.
Bottom line: a single 10/100 card in an unbridged slot probably still has a slight performance advantage over a quad card, but you probably wouldn't be able to tell the difference unless you're trying to drive the system to its limits in a lab environment.
-- Karl