The aggregate is at the maximum size of 39 disks, RAID-DP. I've also tried 10k FC drives and 7.2k SATA drives. Same performance.

> my previous posts mention that I'm running 7.2.1.. now I've updated to
> 7.2.1.1

Snapshot of the issue:

- User is running Windows 2003.
- Throughput ramps up with multiple copies; it just doesn't seem to "fill the pipe" with a single file-copy operation.
- No network errors; TCP and lower-level tuning on the stack hasn't improved the issue. (Any Netdiag output to share?)
- No idea if CIFS buffer tuning has been done on the filer, per NetApp CIFS troubleshooting instructions.
- No info on aggregate layout or volume options (might be helpful).
- No idea if jumbo frames are part of the mix.

Accurate so far?
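One more thing worth sanity-checking on the TCP side: without window scaling, a single stream is capped at window size divided by round-trip time, no matter how fast the link is, which would fit the symptom of extra copies ramping up while one copy can't fill the pipe. A rough back-of-the-envelope sketch (the 64 KB window and the RTT figures below are illustrative assumptions, not measurements from this setup):

```python
# Single-stream TCP throughput is bounded by window / RTT.
# The window and RTT values here are illustrative assumptions,
# not numbers measured on this filer or network.

def max_tcp_throughput_mbps(window_bytes: int, rtt_seconds: float) -> float:
    """Upper bound on one TCP stream's throughput, in megabits per second."""
    return window_bytes * 8 / rtt_seconds / 1_000_000

# A 64 KB window (a common ceiling when window scaling is off):
print(max_tcp_throughput_mbps(65536, 0.002))  # ~262 Mbps at a 2 ms RTT
print(max_tcp_throughput_mbps(65536, 0.001))  # ~524 Mbps at a 1 ms RTT
```

If the filer path happens to have roughly double the round-trip latency of the server-to-server path, a fixed 64 KB window alone would produce exactly the 500-vs-250 Mbps pattern described below, while additional parallel copies would keep scaling.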
Regards,
Max
>
> ________________________________
>
> From: Jack Lyons [mailto:jack1729@gmail.com]
> Sent: Thu 3/29/2007 3:42 AM
> To: Ganjihal, Chetan
> Cc: Shane Garoutte; Langdon, Laughlin T. (Lock); Glenn Walker; toasters@mathworks.com
> Subject: Re: CIFS overhead with Netapp Filers
>
>
>
> Make sure that you don't have SMB signing enabled; it was on by default in
> 7.0.1, but in 7.0.2 and later it was disabled by default.
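(For what it's worth, signing can be checked straight from the filer console. A sketch of what that looks like; `cifs.signing.enable` is the option name as I recall it on 7G, so verify it against your release:)

```
filer> options cifs.signing.enable
cifs.signing.enable          off
```

If it comes back `on`, `options cifs.signing.enable off` turns it off.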
>
> Ganjihal, Chetan wrote:
>> I think it would be important to know the state of the system (filer),
>> the values set for different options, system utilization, etc.
>> If there is a perfstat output with stats gathered during the
>> operation (file copy) being carried out, it will help understand the
>> problem.
>>
>> cheers
>> Chetan
>>
>> ------------------------------------------------------------------------
>>
>> *From:* Shane Garoutte [mailto:sgaroutte@gmail.com]
>> *Sent:* Thursday, March 29, 2007 12:59 AM
>> *To:* Langdon, Laughlin T. (Lock)
>> *Cc:* Glenn Walker; toasters@mathworks.com
>> *Subject:* Re: CIFS overhead with Netapp Filers
>>
>> A quick crawl on NOW provided the following:
>> http://now.netapp.com/Knowledgebase/solutionarea.asp?id=ntapcs675
>>
>> If CIFS performance is slow after investigating performance issues,
>> modify the filer's CIFS negotiated buffer size.
>>
>> 1. Verify that hardware or software problems do not exist within the
>> filer, network, and client.
>> 2. Record the CIFS negotiated buffer size by capturing the output of
>> the filer command:
>> options cifs.neg_buf_size
>> 3. Enter the following filer commands:
>> a. cifs terminate
>> b. options cifs.neg_buf_size 16644
>> c. cifs restart
>> 4. If the buffer size in step 3b does not improve performance, try the
>> following buffer sizes:
>> a. Use '17424'.
>> Note:
>> Starting with Data ONTAP 6.0.X, the buffer size is allowed to exceed
>> 17424; upgrade to a release that fixes bug 33396 only if
>> performance does not improve.
>> b. Use '33472' for environments mixed with Windows NT and Windows 2000.
>> c. Use '65340' for Windows 2000 only environments.
>> 5. If performance remains slow:
>> a. Re-confirm that hardware or software problems do not exist within
>> the filer, network, and client.
>> b. Restore the original CIFS negotiated buffer size (refer to steps 2
>> and 3).
>> c. During a performance interruption, capture a packet trace between
>> the filer and the Windows client.
>> d. Send the packet trace to Network Appliance Technical Support for
>> analysis.
>>
>>
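For anyone following along, the KB steps above boil down to this console sequence (the commands and the 16644 value are taken straight from the article; adjust the size per step 4 if the first value doesn't help):

```
filer> options cifs.neg_buf_size        (record the current value first)
filer> cifs terminate
filer> options cifs.neg_buf_size 16644
filer> cifs restart
```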
>> On Mar 28, 2007, at 8:33 AM, Langdon, Laughlin T. (Lock) wrote:
>>
>>> I'm doing a straight drag and drop using UNC paths with a single
>>> 1.5 GB zip file and a 2.2 GB binary file. If I add more streams (i.e.,
>>> start more than one copy on more than one server), the filer happily
>>> provides more bandwidth.
>>>
>>> From Windows server to Windows server I get 500 Mbps.
>>>
>>> From Windows server to a NetApp 6030 filer running DOT 7.2.1 I get
>>> about 250 Mbps.
>>>
>>> I've tried TCP window size, flow control, LACP, static link
>>> aggregation, a single port on the filer (no vif), and a straight
>>> crossover cable.
>>>
>>>
>>> *From:* Glenn Walker [mailto:ggwalker@mindspring.com]
>>> *Sent:* Tuesday, March 27, 2007 5:15 PM
>>> *To:* Langdon, Laughlin T. (Lock); toasters@mathworks.com
>>> *Subject:* RE: CIFS overhead with Netapp Filers
>>>
>>>
>>> Typically, you shouldn't see any performance decrease; rather, you
>>> should get better performance.
>>>
>>> Are you seeing some sort of decrease?
>>>
>>> What I can point out: with some things (Excel\Word, to be specific),
>>> MS will implement stuff that's not really documented for the file
>>> open\discovery, which can cause problems, but I doubt that's what you
>>> are running into given the speeds you are speaking of. Likewise,
>>> Windows NLB (LB, not HA) doesn't always go very well, given that
>>> it's not the best technology and can sometimes display interop
>>> problems with other vendors (not just NetApp).
>>>
>>> What exactly are you doing for your test?
>>>
>>> Glenn
>>>
>>>
>>> ------------------------------------------------------------------------
>>>
>>> *From:* owner-toasters@mathworks.com
>>> [mailto:owner-toasters@mathworks.com] *On Behalf Of* Langdon,
>>> Laughlin T. (Lock)
>>> *Sent:* Tuesday, March 27, 2007 2:33 PM
>>> *To:* toasters@mathworks.com
>>> *Subject:* CIFS overhead with Netapp Filers
>>>
>>> I was wondering what the CIFS overhead for a NetApp filer would be.
>>>
>>> Let's say, for instance, a Windows Server to Windows Server transfer on
>>> the same switch, same subnet, gigabit copper interconnects, no TOE card,
>>> etc. gets me up to about 50% utilization (500 Mbps).
>>>
>>> Should that same server to a NetApp filer see a 20-30% degradation in
>>> TX/RX speeds because of CIFS overhead?
>>>
>>> What should I expect for data rates in this type of scenario? Are
>>> there any tweaks anyone knows of to decrease this gap?
>>>
>>> (Same results using static link aggregation and LACP for the VIF.)
>>>
>>> Thanks
>>>
>>> Lock
>>>
>>>
>>
>
>
>