In our environment we are doing 1-2 MB/s of writes and 2-3 MB/s of reads, across many small files. About 1/3 of our writes were going through symlinks.
When looking at the stats for one of our systems, lookup was the highest on the list of operations.
When we changed our software to avoid the symlinks, the CPU load dropped by about 60%.
We are running ONTAP 5.3.5, so this may have been fixed in later versions.
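If anyone wants to get a similar "fraction of writes going through symlinks" number for their own workload, here is a rough Python sketch (not from the original post; it assumes you can dump the paths your application writes to into a text file, one per line, and run this on a client with the same mounts):

#!/usr/bin/env python
# Rough sketch: estimate what fraction of the paths an application writes to
# pass through a symlink somewhere in the path. Assumes a text file listing
# one target path per line (e.g. captured from application logs).
import os
import sys

def crosses_symlink(path):
    """Return True if any component of `path` is a symlink."""
    current = "/"
    for part in path.strip("/").split("/"):
        current = os.path.join(current, part)
        if os.path.islink(current):
            return True
    return False

def main(path_list):
    total = via_symlink = 0
    with open(path_list) as f:
        for line in f:
            p = line.strip()
            if not p:
                continue
            total += 1
            if crosses_symlink(p):
                via_symlink += 1
    if total:
        print("%d of %d paths (%.0f%%) cross a symlink" %
              (via_symlink, total, 100.0 * via_symlink / total))

if __name__ == "__main__":
    main(sys.argv[1])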
-----Original Message-----
From: Eli Bottrell <eli.bottrell@corp.terralycos.com>
To: kevin graham <kgraham@dotnetdotcom.org>
CC: 'toasters@mathworks.com' <toasters@mathworks.com>
Sent: Fri Oct 26 13:13:40 2001
Subject: Re: F720 CPU Maxing out
kevin graham wrote:
I've got a 720 that's becoming very unresponsive under high loads. It doesn't get up to 100%, but it does hit 85-90%. This is during times of high writes to one volume, on the order of 5-10 Megs/sec.
The unresponsiveness to pings seems odd, but those writes are probably about as much as you'll get out of the 720. Looking at my 760s in the past, they had exhausted CPU at just over 20 MB/s of writes (those were full-frame packets over a non-jumbo'ed Gig-II). Having been at a cash-strapped dot-com for some time now, I haven't had the luxury of perusing new hardware options much, but does anyone know what kind of performance benefit is picked up w/ ZCS vs. BCS volumes?
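(For what it's worth, here is a rough client-side sketch I'd use to sanity-check sustained write throughput to a filer; the mount point and sizes are made-up placeholders, not anything from this thread.)

#!/usr/bin/env python
# Rough sketch: measure sustained client-side write throughput to an NFS
# mount, to compare against the 5-10 MB/s figures above. Assumes /mnt/filer
# is a mount on the filer under test; adjust CHUNK and TOTAL to taste.
import os
import time

MOUNT = "/mnt/filer"          # hypothetical mount point
CHUNK = 64 * 1024             # 64 KB per write
TOTAL = 256 * 1024 * 1024     # 256 MB written in total

def write_test(path):
    buf = b"x" * CHUNK
    start = time.time()
    with open(path, "wb") as f:
        written = 0
        while written < TOTAL:
            f.write(buf)
            written += CHUNK
        f.flush()
        os.fsync(f.fileno())  # make sure the data actually reaches the filer
    elapsed = time.time() - start
    print("wrote %.0f MB in %.1f s -> %.1f MB/s" %
          (TOTAL / 1e6, elapsed, TOTAL / 1e6 / elapsed))

if __name__ == "__main__":
    write_test(os.path.join(MOUNT, "write_test.dat"))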
Is this a memory bottleneck? I think I only have 256 MB in my filers; should I upgrade for better performance?
Could it be symlinks causing my woes? Any tips for solving this problem?
No matter how many symlinks there are, or how inefficient they are, they won't affect write performance (unless, of course, your writes are competing with lookups). Once the file's open, it doesn't matter how the filesystem got there.
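(A quick way to convince yourself of this: time open() through a symlinked path versus the direct path. The extra cost shows up at lookup/open time, not on the writes that follow. The paths below are hypothetical, and client-side attribute caching will mask part of the difference unless the mount is cold.)

#!/usr/bin/env python
# Rough sketch: compare per-open cost of a direct path vs. a path that goes
# through a symlink. Hypothetical paths; run against your own mounts.
import os
import time

REAL = "/mnt/filer/data/file.dat"       # hypothetical direct path
LINKED = "/mnt/filer/current/file.dat"  # hypothetical path through a symlink

def time_opens(path, count=1000):
    start = time.time()
    for _ in range(count):
        fd = os.open(path, os.O_RDONLY)
        os.close(fd)
    return (time.time() - start) / count

if __name__ == "__main__":
    for label, path in (("direct", REAL), ("via symlink", LINKED)):
        print("%-12s %.3f ms per open" % (label, time_opens(path) * 1000))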
The sluggishness is noticed on lookups and reads. Because all of our developers have paths on that filer sourced in their Unix shell profiles (/usr/local/ and home directories, for instance), their shells get REALLY REALLY slow all of a sudden. I think I may need to rethink my storage architecture if heavy reads/writes are going to hurt the filer's performance this much.
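(Part of why the shells feel it so badly: resolving one command walks $PATH entry by entry, and every miss against an NFS-mounted directory is another LOOKUP round trip to the filer. Shells do hash locations after the first hit, but tab completion and new shells start cold. Here is a rough sketch of counting those lookups; the command names are just examples.)

#!/usr/bin/env python
# Rough sketch: count how many stat()-style lookups it takes to resolve a
# command across $PATH. With /usr/local/bin and similar entries on the filer,
# each miss is an extra NFS LOOKUP.
import os

def lookups_to_resolve(cmd, path_env=None):
    dirs = (path_env or os.environ.get("PATH", "")).split(":")
    lookups = 0
    for d in dirs:
        lookups += 1                      # one lookup per PATH entry tried
        if os.path.exists(os.path.join(d, cmd)):
            return lookups, os.path.join(d, cmd)
    return lookups, None

if __name__ == "__main__":
    for cmd in ("ls", "gmake", "no-such-tool"):
        n, where = lookups_to_resolve(cmd)
        print("%-14s %2d lookups -> %s" % (cmd, n, where))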
Basically, I guess I should balance things between my two filers so that the one with the highest traffic is not the one where the home directories and /usr/local sit.
This is on my older F720, and I just got an F740. Is it much of an upgrade in CPU? Can I just swap heads and have a faster filer?
- Eli
..kg..