Hi toasters,
I don't know what experience you have had with the NetApp way of dealing with the timeouts under Linux. We had this problem with SLES9 SP3 systems running on ESX with NetApp storage: every time there was a cluster takeover on a pair of filers hosting the datastores with the VMs in them (e.g. during ONTAP updates), the SLES9 systems ended up with read-only disks.
I went the VMware way and installed a new mpt SCSI driver (mptscsi-gosd-3.02.62-2vmw.i386.rpm), which can be downloaded from VMware. After the installation you only have to re-point the initrd link to the new image (the old one is still present, of course) and reboot the machine.
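The install itself boils down to the RPM plus re-pointing the initrd symlink. A minimal sketch; the image name below is a placeholder, not from this thread -- use whatever mkinitrd actually produced in /boot on your system:

```shell
# install the VMware-supplied mpt driver package
rpm -ivh mptscsi-gosd-3.02.62-2vmw.i386.rpm

# re-point the boot loader's initrd symlink at the newly built image
# (the image name is a placeholder -- check /boot for the real one)
cd /boot
ln -sf initrd-2.6.5-7.308-default initrd

reboot
```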
Works like a charm and seems smoother than doing the udev thing.
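For reference, the "udev thing" amounts to raising the guest's SCSI command timeout via sysfs so the disks survive the takeover window. A rough sketch only; the 190-second value is an assumption taken from common NetApp guidance, not from this thread, so check the KB article for the value that matches your setup:

```shell
# bump the SCSI command timeout on every virtual disk
# (190 s is an assumed value -- verify against the NetApp KB)
for t in /sys/block/sd*/device/timeout; do
    echo 190 > "$t"
done
```

A udev rule would apply the same write automatically at device discovery, which is what makes it persistent across reboots.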
Best Regards
Jochen
-----Original Message-----
From: owner-toasters@mathworks.com [mailto:owner-toasters@mathworks.com] On Behalf Of Pascal Dukers
Sent: Saturday, March 22, 2008 11:52 AM
To: toasters@mathworks.com
Subject: RE: vmware on nfs stability issues
Thank you all for your help. I will share a few answers I received that have not been posted here:
A new NetApp article from last week on how to set timeouts for the different guest OSes I have:
https://now.netapp.com/Knowledgebase/solutionarea.asp?id=kb37986
Also, the following parameters can be tuned on the ESX servers:
o NFS.HeartbeatFrequency
o NFS.HeartbeatTimeout
o NFS.HeartbeatMaxFailures
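These can be set from the ESX service console with esxcfg-advcfg. A sketch only -- the numbers below are placeholder values, not recommendations; take the actual figures from the NetApp/VMware guidance for your release:

```shell
# placeholder values -- consult the NetApp/VMware docs for real ones
esxcfg-advcfg -s 12 /NFS/HeartbeatFrequency
esxcfg-advcfg -s 5  /NFS/HeartbeatTimeout
esxcfg-advcfg -s 10 /NFS/HeartbeatMaxFailures

# read a setting back to verify
esxcfg-advcfg -g /NFS/HeartbeatFrequency
```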
I have been told that with the default settings the timeout seems to be 30 seconds.
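As a back-of-the-envelope check of that 30-second figure: the NFS datastore is declared unavailable only after several heartbeats in a row have failed, so the effective timeout is roughly frequency times allowed failures plus the final per-heartbeat wait. The per-parameter defaults below are assumptions for illustration, not verified ESX defaults:

```shell
# assumed defaults: heartbeat every 9 s, 3 failures tolerated, 5 s wait
freq=9; max_failures=3; timeout=5
echo $(( freq * max_failures + timeout ))   # lands in the ~30 s ballpark
```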