For those of you who are doing LACP/cross-stack EtherChannel in your ESX environments, like the example in section 6.8 on page 40 of TR-3428, what switches are you using?
If you are doing LACP across Cisco 6500s, which Sups are you using?
Thanks,
--Carl
The Sup720-10G is the only supervisor that supports VSS, which lets you virtualize two 6509s into a single logical switch. VSS is the feature that enables cross-chassis EtherChannel.
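For what it's worth, once the two 6509s are joined as a VSS pair, a cross-chassis EtherChannel toward an ESX host or filer is configured like any other port-channel, just with member ports on both chassis. A rough sketch (interface and channel numbers are made up for illustration, so treat it as an outline rather than a tested config):

  ! on a VSS pair, ports are named switch/slot/port, so 1/x/x and 2/x/x
  ! span the two physical chassis
  interface range GigabitEthernet1/2/1 , GigabitEthernet2/2/1
   channel-group 10 mode active   ! LACP; "mode on" would make it a static channel

Check Cisco's VSS documentation for the supported supervisor/IOS combinations before relying on this.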
Sent from my iPhone
Hi everybody,
Here are some notes and links I've collected...
Switch Vendors and Products that Support Link Aggregation Across the Stack
This is not an exhaustive list of either vendors or products, and it does not imply any kind of testing or support by NetApp.
3Com: 3Com calls it DLA (Distributed Link Aggregation). As of July 2008 it is only supported on their higher-end stackable switches (5500 and 5500G).
White Paper - XRN and Clustered Stacking (note the parts about XRN and DLA, and the compatibility table at the bottom of page 4): http://www.3com.com/other/pdfs/legacy/en_US/3com_503183.pdf
Cisco: Cross-stack EtherChannel is supported on the Catalyst 3750 series. http://www.cisco.com/en/US/products/ps7077/index.html
Data Sheet: http://www.cisco.com/en/US/prod/collateral/switches/ps5718/ps5023/product_data_sheet0900aecd80371991.html
White Paper - Cisco StackWise and StackWise Plus Technology: http://www.cisco.com/en/US/prod/collateral/switches/ps5718/ps5023/prod_white_paper09186a00801b096a.pdf
Some newer blade modules such as the 3120 for HP have a similar technology. http://www.cisco.com/en/US/products/ps8749/index.html
Integrating the Cisco Catalyst Blade Switch 3120 for HP c-Class Blade Enclosure into the Cisco Data Center Network Architecture Design Guide: http://www.cisco.com/en/US/prod/collateral/switches/ps6746/ps8742/ps8749/white_paper_c07-448865.pdf
Cisco blade modules: http://www.cisco.com/en/US/products/ps6746/Products_Sub_Category_Home.html
Cheat sheet (CSM = Catalyst Switch Module):
3012 / 3110 --> IBM
3020 / 3120 --> HP
3030 / 3032 / 3130 --> Dell
3040 --> FSC (Fujitsu Siemens)
VBS = Virtual Blade Switch
For chassis-based switches, like the Catalyst 6500 series, they sometimes call it "multichassis EtherChannel". Cisco Catalyst 6500 Series Virtual Switching System (VSS) 1440 white paper: http://www.cisco.com/en/US/prod/collateral/switches/ps5718/ps9336/prod_white_paper0900aecd806ee2ed_ps2797_Products_White_Paper.html
D-Link: Cross Stack Port Trunking.
DGS-3600 Series data sheet: ftp://ftp10.dlink.com/pdfs/products/DGS-3650/DGS-3650_ds.pdf
DGS-3650 product page: http://www.dlink.com/products/?pid=640
Nortel: Split MultiLink Trunking (SMLT) or Distributed Multi-Link Trunking (DMLT).
http://en.wikipedia.org/wiki/SMLT
http://en.wikipedia.org/wiki/DMLT
ERS 2500 position paper - Stackable vs. modular wiring closet solutions (see Multi-Link Trunking, p. 4): http://www.nortel.com/products/01/passport/lan/collateral/nn108321.pdf
Product Brief - Ethernet Routing Switch 5500 Series (see MLT and DMLT on p. 2): http://www.nortel.com/products/02/bstk/switches/collateral/nn119200.pdf
Once again, NetApp has not tested all of these, nor do we endorse or support any particular vendor.
Basically, if the cross-stack link aggregation makes the stack look like one switch, and the bonded links behave like either a static aggregate or dynamic LACP, then a multimode VIF should work across switches.
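To make that concrete, here is roughly what the two halves look like with a Catalyst 3750 stack and a filer running Data ONTAP 7.x. Port names, VIF names, VLAN and addresses are all hypothetical, and note that older 3750 IOS releases only support static ("mode on") channels across stack members, so check your version before using LACP:

  ! switch side: one member port on each stack member, bundled into one channel
  interface range GigabitEthernet1/0/10 , GigabitEthernet2/0/10
   switchport mode access
   switchport access vlan 100
   channel-group 20 mode active   ! LACP; use "mode on" for a static channel

  # filer side: a dynamic multimode (LACP) VIF across the two NICs
  vif create lacp vif0 -b ip e0a e0b
  ifconfig vif0 192.168.100.10 netmask 255.255.255.0 up

If the switch side is a static channel, the matching filer config is "vif create multi" instead of "lacp".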
As a related note, the lack of cross stack link aggregation until recently is the reason NetApp invented second level VIFs in the first place. It was the only way to get more than two links of throughput, but even then, we could only use the links to one switch at a time.
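For comparison, a second-level VIF in 7-mode looks something like the sketch below (interface and VIF names again hypothetical): one multimode VIF per switch, wrapped in a single-mode VIF, so all links to the active switch carry traffic but only one switch is used at a time:

  vif create multi sw1-vif -b ip e0a e0b
  vif create multi sw2-vif -b ip e0c e0d
  vif create single top-vif sw1-vif sw2-vif
  ifconfig top-vif 192.168.100.11 netmask 255.255.255.0 up

With true cross-stack or cross-chassis aggregation on the switch side, a single flat multimode VIF (as in the previous sketch) replaces this layering.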
We welcome any feedback or experience you have with any models or combinations of this technology.
Share and enjoy!
Peter
Does NetApp still publish the number of appliances running different versions of Data ONTAP?
We just upgraded to 7.2.6 on Sunday, and we have had two (and possibly a third) panics causing failover on 1 of the 4 filers. Failover went smoothly for the most part (I bet the app team that didn't want to spend the money on multipathing software / HBAs will be singing another tune tomorrow), but neither panic created a complete core dump. We are working the case with NetApp support, but I want to see what other people are seeing with 7.2.6.
7.2.6 went GD recently, and I assume it was GA for a while before that, but I can't figure out the release dates or the number of appliances running the different versions of Data ONTAP.
Thanks Jack
p.s. why did we upgrade? http://now.netapp.com/NOW/cgi-bin/bol?Type=Detail&Display=281669 is the bug we are hitting...in addition to: http://now.netapp.com/NOW/cgi-bin/bol?Type=Detail&Display=253517 http://now.netapp.com/NOW/cgi-bin/bol?Type=Detail&Display=256975 http://now.netapp.com/NOW/cgi-bin/bol?Type=Detail&Display=271864
At the NOW site (Service/Support main page), choose Release Advisor / Comparison, but use the link for "Comparison", not "Advisor".
Navigate your way through, put 7.2.6 in the "other" field, and choose whichever other ONTAP release you like. The comparison type is "metrics".
Or, http://now.netapp.com/NOW/cgi-bin/relcmp.on?rrel=7.3&rels=7.2.6&what... notfirst=+Go%21
Best regards, ~~~~~~~~~~~~~~~~ Kevin Parker Mobile: 919.606.8737 http://theparkerz.com ~~~~~~~~~~~~~~~~
I tried doing both, and it only gives me stats for the 7.2 base release (Aug 2006). Any other ideas?
We upgraded a pair of clustered 3070's and a 3020 last Saturday. We do a small amount of CIFS, more NFS and lots of LUNs. We haven't had any issues. All of the hosts connected to the clustered 3070's are multipathed except for one.
good luck with the issues, Kevin
We upgraded a 2020 as a guinea pig and so far it's been fine - no issues at all. We use NFS and FlexCache.
We're considering 7.2.6 for the same reasons as you. Presumably, the ASIS code has been changed in this release to address the bug you mentioned. I wonder if anyone else is using 7.2.6 and ASIS and whether it's possibly related to the panic?
Darren
Upgraded a 2050A from 7.2.4L1 to 7.2.6. Running VMWare on NFS, FCP, and iSCSI. CIFS for file shares. So far no problems whatsoever. Had to upgrade the BMC firmware in the system before doing the D.O.T. upgrade, but no panics or failovers have happened since the upgrade.
A few NDMP backup-to-tape issues on our SnapVault setup; other than that, 7.2.6 is working well.
I am seeing brief outages where my VMs (NFS as the back-end protocol) and SQL LUNs (FC) both complain of poor disk response time at the same time. I don't think it can be the infrastructure, since one is IP and the other FC. The LUNs are on a different set of spindles / a different aggregate than the NFS volumes as well, so I don't think it's a disk bottleneck. I'm on a 3070 and we rarely hit 3500 IOPS (and 90+% of that out of cache) or go above 40% for the busiest CPU (normally we're in the 15-25% range), so I am not sure what's going on here. Any suggestions on how to troubleshoot it?
We're running 7.2.4; I want to wait for 7.3.1 to upgrade, since we are using NFS for VMware and there are several fixes in it that will be beneficial to us.
I assume nothing else is going on at that point in time (mass snapshot deletion, etc.) and that the slowdowns don't follow any pattern?
Any way you can predict when it will happen? Sysstat (or better yet, perfstat) would be of help here.
Something I've noticed on my infrastructure: VMWare over NFS (unsure about other protocols) will have huge spikes where they write lots of data in a quick burst - happens only a few times a day on relatively quiet systems, but I can definitely see a spike on the filer. Perhaps you have the same thing going, just a SWAG...
The impact on our side is not really felt, but the filer does go into back-to-back CPs (consistency points) from the massive spike (200MB/s - 350MB/s in a short window), and that could manifest itself as 'poor disk response time'.
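If you want to check whether the slow moments line up with those bursts, a one-second sysstat on the filer is a cheap first look (perfstat collects the same counters plus a lot more, which is handy if you end up sending data to support):

  filer> sysstat -x 1

Watch the "CP ty" column - a run of "B" (back-to-back) or "b" (deferred back-to-back) entries during the complaint window, along with high disk utilization, is a good hint that the write bursts are what the hosts are feeling.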
In our case, we're running VMWare over NFS and Exchange over iSCSI on the same filers, but no one is really complaining when the 'events' happen. Just something I've noticed for a while.
This is on a FAS6070, and the busy time is recorded at around 6000 NFS IOPS. That said, we did a stress test with about 25 guests running IOMeter and were able to push 15000 NFS ops on node 1 and 10000 NFS ops on node 2 (a combined 400MB/s write, 300MB/s read) without any sort of reported performance problems.
May I know how big your aggregates are in terms of spindle count and disk size? We are trying to size our VMware infrastructure using NFS on a FAS3070 cluster, with two 40-disk aggregates (300GB FC) each. Like you, we are running Exchange on iSCSI on these filers now. On the network side, we have separate 3-port multimode VIFs for iSCSI and NFS traffic.
TIA
We went with the following config:
Exchange / iSCSI:
(3) 1GbE Cu in a MM VIF + (3) 1GbE Cu in a MM VIF = single-mode VIF
(3) 14D+2P RAID groups per aggr, 2 aggrs, using 144GB 15K FC drives

VMware / NFS:
(2) 10GbE in a MM VIF + (2) 10GbE in a MM VIF = single-mode VIF
(2) 14D+2P RAID groups per aggr, 2 aggrs, using 300GB 15K FC drives

Both are on the same filers, but with separate aggregates, loops, and interfaces.
So far, so good
Sorry I've been slow to respond, I am out on vacation this week. We have 7 volumes, 5 NFS for VMware, one CIFS and one for LUNs. These are across two aggr of 33 disks each (DP, 300gig SATA).
200 VMs (NFS), 2 SQL clusters (FC) and home directories for ~1000 users.