Shame, that might have answered the problem.
Have you tried using just one Ethernet port on the filer? Have
you tried using ports on the same switch? I have seen some problems with
cross-stack EtherChannels, usually when the stack isn't connected
correctly.
The following command will give you the details.
UKswitchstack#sh switch stack-ring speed
Stack Ring Speed : 32G
Stack Ring Configuration: Full
Stack Ring Protocol : StackWise
UKswitchstack#
My new filer arrived today, but I will try to reproduce this on Monday.
But just so you know:
After enabling QOS on
cat3750, certain application (mostly bursty and TCP based) may experience
significant performance degradation. This issue is resolved starting Cisco IOS
12.2(25)SEE1.
With the changes in the new
release, the following global config. commands are still required :
------------------------------------------------------------------------------
mls qos queue-set output 1
threshold 2 3200 3200 100 3200
mls qos queue-set output 1
threshold 3 3200 3200 100 3200
------------------------------------------------------------------------------
Cheers
Matt
From: Langdon, Laughlin
T. (Lock) [mailto:Langdon.Lock@mayo.edu]
Sent: 30 March 2007 20:17
To: Davies,Matt; slinkywizard@integra.net
Cc: jack1729@gmail.com; chetan.ganjihal@netapp.com; sgaroutte@gmail.com;
ggwalker@mindspring.com; toasters@mathworks.com
Subject: RE: CIFS overhead with Netapp Filers
Sorry, yes, I meant the 3750. No QoS.
From: Davies,Matt
[mailto:MDAVIES@generalatlantic.com]
Sent: Fri 3/30/2007 11:47 AM
To: Langdon, Laughlin T. (Lock); slinkywizard@integra.net
Cc: jack1729@gmail.com; chetan.ganjihal@netapp.com; sgaroutte@gmail.com;
ggwalker@mindspring.com; toasters@mathworks.com
Subject: Re: CIFS overhead with Netapp Filers
I presume you mean the 3750?
Is QoS enabled on the switch?
-----Original Message-----
From: Langdon, Laughlin T. (Lock) <Langdon.Lock@mayo.edu>
To: Davies,Matt; slinkywizard@integra.net <slinkywizard@integra.net>
CC: Jack Lyons <jack1729@gmail.com>; Ganjihal, Chetan
<chetan.ganjihal@netapp.com>; Shane Garoutte <sgaroutte@gmail.com>;
Glenn Walker <ggwalker@mindspring.com>; toasters@mathworks.com
<toasters@mathworks.com>
Sent: Fri Mar 30 19:37:47 2007
Subject: RE: CIFS overhead with Netapp Filers
Two Cisco 3950s cross-connected in the back. They should be configured for LACP,
with three ports on one switch and three ports on the other switch.
________________________________
From: Davies,Matt [mailto:MDAVIES@generalatlantic.com]
Sent: Thu 3/29/2007 1:34 PM
To: Langdon, Laughlin T. (Lock); slinkywizard@integra.net
Cc: Jack Lyons; Ganjihal, Chetan; Shane Garoutte; Glenn Walker; toasters@mathworks.com
Subject: RE: CIFS overhead with Netapp Filers
This may be a stupid question, but what network switch are you using?
Cheers
Matt
________________________________
From: owner-toasters@mathworks.com [mailto:owner-toasters@mathworks.com]
On Behalf Of Langdon, Laughlin T. (Lock)
Sent: 29 March 2007 21:00
To: slinkywizard@integra.net
Cc: Jack Lyons; Ganjihal, Chetan; Shane Garoutte; Glenn Walker;
toasters@mathworks.com
Subject: RE: CIFS overhead with Netapp Filers
Yes, that all looks true.
The aggregate is maxed out at 39 disks, RAID-DP. I've also
tried 10k FC drives and 7.5k SATA drives. Same performance.
No jumbo frames.
No CIFS tuning.
Can anyone else try this? Take a large file and copy it from a Windows
server to a Windows server, then copy the same file from the same server
to a NetApp filer. You can use the nul target to really isolate network usage
(it doesn't actually write the file). What Mbps do you get on each transfer?
I get approximately 500 Mbps to a Windows server and 250 Mbps to the filer.
Same file, same TCP window size, same server; the only delta is once to the
server, once to the filer.
If you want, you can also do a read from the filer and from the server to get
the read performance.
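For anyone repeating the test, here is a minimal Python sketch of the measurement itself (the helper names and the example figures in the comments are mine, not from the thread); pointing `dst` at `os.devnull` mimics the copy-to-nul trick:

```python
import os
import time

def throughput_mbps(num_bytes: int, seconds: float) -> float:
    """Convert bytes moved over a wall-clock interval to megabits per second."""
    return (num_bytes * 8) / (seconds * 1_000_000)

def timed_copy(src: str, dst: str = os.devnull, chunk: int = 1 << 20) -> float:
    """Copy src to dst in 1 MiB chunks and return the achieved Mbps.

    The default dst of os.devnull discards the data, isolating the
    read-plus-network path just like copying to nul does on Windows.
    """
    total = 0
    start = time.perf_counter()
    with open(src, "rb") as fin, open(dst, "wb") as fout:
        while True:
            buf = fin.read(chunk)
            if not buf:
                break
            fout.write(buf)
            total += len(buf)
    return throughput_mbps(total, time.perf_counter() - start)

# Example figures: a 1.5 GB file moved in 24 s works out to 500 Mbps,
# and a 2.2 GB file in 70.4 s works out to 250 Mbps - i.e. the filer
# path described in this thread runs at half the server-to-server rate.
```

Run `timed_copy` against the same source file with a UNC path on a Windows server and on the filer share, and compare the two returned numbers.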
________________________________
From: slinkymax0r [mailto:slinkywizard@integra.net]
Sent: Thu 3/29/2007 10:48 AM
To: Langdon, Laughlin T. (Lock)
Cc: Jack Lyons; Ganjihal, Chetan; Shane Garoutte; Glenn Walker; toasters@mathworks.com
Subject: RE: CIFS overhead with Netapp Filers
> my previous posts mention that I'm running 7.2.1. Now I've updated to
> 7.2.1.1
Snapshot of the issue:
User is running Windows 2003.
Throughput ramps up with multiple copies; it just doesn't seem to "fill the
pipe" with a single file-copy operation.
No network errors; TCP and lower-level tuning on the stack hasn't
improved the issue. (Any netdiag output to share?)
No idea if CIFS buffer tuning has been done on the filer, per the NetApp
CIFS troubleshooting instructions.
No info on aggregate layout or volume options (might be helpful).
No idea if jumbo frames are part of the mix.
Accurate so far?
Regards,
Max
> ________________________________
>
> From: Jack Lyons [mailto:jack1729@gmail.com]
> Sent: Thu 3/29/2007 3:42 AM
> To: Ganjihal, Chetan
> Cc: Shane Garoutte; Langdon, Laughlin T. (Lock); Glenn Walker;
> toasters@mathworks.com
> Subject: Re: CIFS overhead with Netapp Filers
>
>
>
> Make sure that you don't have SMB signing enabled; it was enabled by default
> in 7.0.1, but from 7.0.2 onwards it was disabled by default.
>
> Ganjihal, Chetan wrote:
>> I think it would be important to know the state of the system (filer),
>> the values set for the different options, system utilization, etc.
>> If there is perfstat output with stats gathered while the
>> operation (the file copy) is being carried out, it will help us understand
>> the problem.
>> cheers
>> Chetan
>> ------------------------------------------------------------------------
>> *From:* Shane Garoutte [mailto:sgaroutte@gmail.com]
>> *Sent:* Thursday, March 29, 2007 12:59 AM
>> *To:* Langdon, Laughlin T. ((Lock))
>> *Cc:* Glenn Walker; toasters@mathworks.com
>> *Subject:* Re: CIFS overhead with Netapp Filers
>>
>> A quick crawl on NOW provided the following:
>> http://now.netapp.com/Knowledgebase/solutionarea.asp?id=ntapcs675
>>
>> If CIFS performance is still slow after investigating general performance
>> issues, modify the filer's CIFS negotiated buffer size.
>>
>> 1. Verify that hardware or software problems do not exist within the
>> filer, network and client.
>> 2. Record the CIFS negotiated buffer size by capturing the output of
>> the filer command:
>> options cifs.neg_buf_size
>> 3. Enter the following filer commands:
>> a. cifs terminate
>> b. options cifs.neg_buf_size 16644
>> c. cifs restart
>> 4. If the buffer size in step 3b does not improve performance, try the
>> following buffer sizes:
>> a. Use '17424'.
>> Note:
>> Starting with Data ONTAP 6.0.X, buffer sizes above 17424 are only allowed
>> in releases that fix bug 33396; therefore, upgrade to a release that fixes
>> bug 33396 only if performance does not improve.
>> b. Use '33472' for environments mixed with Windows NT and Windows 2000.
>> c. Use '65340' for Windows 2000 only environments.
>> 5. If performance remains slow:
>> a. Re-confirm that hardware or software problems do not exist within
>> the filer, network and client.
>> b. Restore the original CIFS negotiated buffer size (refer to steps 2
>> and 3).
>> c. During a performance interruption, capture a packet trace between
>> the filer and Windows client.
>> d. Send the packet trace to Network Appliance Technical Support for
>> analysis.
>>
>>
>> On Mar 28, 2007, at 8:33 AM, Langdon, Laughlin T. ((Lock)) wrote:
>>
>>> I'm doing a straight drag and drop using UNC paths with a single
>>> 1.5 GB zip file and a 2.2 GB binary file. If I add more streams (i.e.
>>> start more than one copy on more than one server), the filer happily
>>> provides more bandwidth.
>>>
>>> From Windows server to Windows server I get 500 Mbps.
>>>
>>> From a Windows server to a NetApp 6030 filer running Data ONTAP 7.2.1 I
>>> get about 250 Mbps.
>>>
>>> I've tried TCP window size, flow control, LACP, static link
>>> aggregation, a single port on the filer (no vif), and a straight
>>> crossover cable.
>>>
>>> *From:* Glenn Walker [mailto:ggwalker@mindspring.com]
>>> *Sent:* Tuesday, March 27, 2007 5:15 PM
>>> *To:* Langdon, Laughlin T. (Lock); toasters@mathworks.com
>>> <mailto:toasters@mathworks.com>
>>> *Subject:* RE: CIFS overhead with Netapp Filers
>>>
>>> Typically, you shouldn't see any performance decrease - rather, you
>>> should get better performance.
>>>
>>> Are you seeing some sort of decrease?
>>>
>>> What I can point out: with some things (Excel\Word to be specific),
>>> MS will implement stuff that's not really documented for the file
>>> open\discovery, which can cause problems, but I doubt that's what you
>>> are running into given the speeds you are speaking of. Likewise, using
>>> Windows NLB (LB, not HA) doesn't always go very well, given that
>>> it's not the best technology and can sometimes display interop
>>> problems with other vendors (not just NetApp).
>>>
>>> What exactly are you doing for your test?
>>>
>>> Glenn
>>>
>>> ------------------------------------------------------------------------
>>>
>>> *From:* owner-toasters@mathworks.com
>>> [mailto:owner-toasters@mathworks.com] *On Behalf Of *Langdon,
>>> Laughlin T. (Lock)
>>> *Sent:* Tuesday, March 27, 2007 2:33 PM
>>> *To:* toasters@mathworks.com <mailto:toasters@mathworks.com>
>>> *Subject:* CIFS overhead with Netapp Filers
>>>
>>> I was wondering what the CIFS overhead for a NetApp filer would be.
>>>
>>> Let's say, for instance, a Windows server to Windows server transfer on
>>> the same switch, same subnet, gigabit copper interconnects, no TOE card,
>>> etc. gets me up to about 50% utilization (500 Mbps).
>>>
>>> Should that same server to a NetApp filer see a 20-30% degradation in
>>> TX/RX speeds because of CIFS overhead?
>>>
>>> What should I expect for data rates in this type of scenario? Are
>>> there any tweaks anyone knows of to decrease this gap?
>>>
>>> (same results using static link aggregation, and LACP for the VIF)
>>>
>>> Thanks
>>>
>>> Lock
>>>
>>>
>>
>
>
>