Dianna,
Since this could have many causes, some on the filer side and some on the client side, I suggest you open a case.
Eyal.
-----------------------------------------------------------------
eTraitel - I'm the new eBuzzword around !!!
Filer Escalation Engineer
CNA, MCSE, CSA, LCA, NetApp CA
Network Appliance BV, Hoofddorp, The Netherlands
Office: +31 23 567 9685  Cellular: +31 6 5497 2568
Get answers NOW! - NetApp On the Web - http://now.netapp.com
-----------------------------------------------------------------
-----Original Message-----
From: Dianna Mullet [mailto:drmullet@link.com]
Sent: Thu, June 07, 2001 17:01
To: toasters@mathworks.com
Subject: Sudden CIFS performance degradation
We're running OnTap 5.3.6 on an F760, serving both NFS and CIFS. It ran fine since we upgraded to 5.3.6 around 10 months ago, until early this week, when we began experiencing severely degraded CIFS performance. In some cases, CIFS times out and the client experiences data loss. What should I be looking for?
We had a similar problem with our other filer a few months ago, but never determined the cause; we ended up solving the problem by upgrading to OnTap 6. This time, I would like to determine the cause _before_ we upgrade, but time is of the essence.
Help/tips appreciated.
Thanks, Dianna
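As a first look (before or while a case is worked), the filer's own counters usually show whether the load is CIFS-heavy, NFS-heavy, or CPU-bound. A minimal sketch, assuming rsh administrative access is enabled on the filer; the hostname is a placeholder, and flag syntax varies a bit across OnTap releases, so check the sysstat man page on your version:

```shell
#!/bin/sh
# Quick triage of a filer showing degraded CIFS performance.
# "myfiler" is a placeholder; assumes rsh admin access is enabled.
# sysstat flags vary slightly by OnTap release -- verify on yours.
FILER=${FILER:-myfiler}

diag_cmds() {
    # Emit the commands to run; pipe the output to sh to execute.
    echo "rsh $FILER sysstat -c 10 1"   # ten 1-second samples: CPU, NFS/CIFS ops, net/disk kB/s
    echo "rsh $FILER cifs stat"         # cumulative CIFS operation counts
    echo "rsh $FILER netstat -i"        # per-interface errors and collisions
}

diag_cmds            # dry run: print what would be executed
# diag_cmds | sh     # actually run the commands against the filer
```

Comparing the CIFS op rate and CPU columns during good and bad periods should at least narrow it to the filer versus the network/clients.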
"Traitel, Eyal" <eyal@netapp.com> wrote:

> Since this could have many causes, some on the filer side and some on the client side, I suggest you open a case.
>
> -----Original Message-----
> From: Dianna Mullet [mailto:drmullet@link.com]
> Sent: Thu, June 07, 2001 17:01
> To: toasters@mathworks.com
> Subject: Sudden CIFS performance degradation
>
> We're running OnTap 5.3.6 on an F760, serving both NFS and CIFS. It ran fine since we upgraded to 5.3.6 around 10 months ago, until early this week, when we began experiencing severely degraded CIFS performance. In some cases, CIFS times out and the client experiences data loss. What should I be looking for?
We're having kind of the reverse problem. Since upgrading to OnTap 6.1R1 (in the process of activating the "cluster" fail-over capabilities), we have been seeing two problems (related or not?):
o When an NDMP backup starts, it can spike the CPUs at 100%, causing behaviour similar to what you describe. When we terminate the backups, CPU drops back down to a more normal <50%. Snapshots are well below their 100% usage mark (more like <10%). This was not a problem when we were running 5.3.6R2 prior to the upgrade, but, obviously, much has changed with a major OS release. We've been asked to reproduce the problem and, while it is occurring, hit the reset switch, type 'sync', and send in the generated core file; but we're loath to do that just in case (granted, only a far outside chance) we end up needing a wack run.
o We occasionally (every 6-7 days) have a filer go "unresponsive": no new CIFS connections, though existing NFS mounts (via UDP) seem to be behaving. When this happens, we get messages on the console like:

      syslogd: Cannot open file /etc/messages: Too many open files in system
      CfTimeDaemon: Can't connect to time server '[...].tamu.edu'
      syslogd: Cannot open file /etc/log/auditlog: Too many open files in system

  The only supported fix has been to reboot.
We do have open cases regarding those problems, but I thought I would go ahead and ask the following questions of this list:
o Has anyone seen behaviour similar to either problem?
o Is there a command that could be run (or an SNMP-queriable OID) that would tell me the current (and maximum) number of open files on the system, so I can tell when we are getting close to a problem situation?
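On the second question: I don't know of a documented OID for the open-file count, so that part would have to come out of the netapp.mib the filer ships (I believe under /etc/mib/ on the root volume; worth grepping, or asking on the open case). If a suitable OID turns up, a cron'd poll with net-snmp could warn before the wall is hit. A sketch, assuming the standard snmpget tool; the filer name, community string, OID, and threshold are all placeholders to verify locally:

```shell
#!/bin/sh
# Poll one filer statistic over SNMP and warn past a threshold.
# FILER, COMMUNITY, OID, and THRESHOLD are placeholders -- check
# netapp.mib on the filer for the real OID; the one below is only
# an example and may not match your OnTap release.
FILER=${FILER:-myfiler}
COMMUNITY=${COMMUNITY:-public}
OID=${OID:-.1.3.6.1.4.1.789.1.2.1.3.0}   # example NetApp enterprise OID (verify!)
THRESHOLD=${THRESHOLD:-90}

get_value() {
    # net-snmp: -Oqv prints just the value, no OID prefix
    snmpget -v1 -c "$COMMUNITY" -Oqv "$FILER" "$OID"
}

check_threshold() {
    # $1 = current value, $2 = threshold; prints WARN or OK
    if [ "$1" -ge "$2" ]; then
        echo "WARN: $1 >= $2"
    else
        echo "OK: $1"
    fi
}

# check_threshold "$(get_value)" "$THRESHOLD"   # run from cron, mail on WARN
```

Even without an open-files OID, the same loop against a CPU-busy counter would at least flag the NDMP spikes as they start.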
Thanks for any insight, philip