Hi, I've been running the perfstat.sh script from the NOW site, and partway through the script a command called "mbstat" is run.
Looking at its output, it seems that our filer is running out of buffers for the primary network interface (e11b in this case).
Is my interpretation correct, or are these values normal?
=================================================================
=-=-=-=-=-= PERF fs001 PRESTATS =-=-=-=-=-= mbstat
System pool small bufs: free 2379 (out of 2400), borrowed 0
mallocs 0, drains 0 (success 0), waits 0, drops 0
System pool large bufs: free 7803 (out of 24033), borrowed 16195
mallocs 863400249, drains 0 (success 0), waits 0, drops 0
Private pool e0: free 228 (out of 1764)
mallocs 0, drains 0 (success 0), waits 0, drops 0
Private pool e11a: free 228 (out of 1764)
mallocs 0, drains 0 (success 0), waits 0, drops 0
Private pool e11b: free 0 (out of 1764) <====== looks like we are out of free buffers....
mallocs 0, drains 0 (success 0), waits 0, drops 15107
28526 total mbufs
21087 MT_DATA
7439 MT_FREE
Descriptor mbuf: total 28476, mallocs 1317717, drops 0
Old style mbuf: total 50, mallocs 15074, drops 0
nfs: request 0
ip reassembly: 0 frags, 0 pkts
==================================================================
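To pick the problem pools out of the output, I ran a quick awk one-liner over the "Private pool" section. This is just a sanity check I put together, not an official tool; the pool-line format is taken from the mbstat output above, and the sample data here is embedded inline (in practice you would feed it the captured perfstat output instead):

```shell
# Flag any private pool that is exhausted (free == 0) or shows drops.
# Each pool prints two lines: a "Private pool NAME: free N (out of M)"
# header followed by a "mallocs ... drops N" stats line.
awk '
  /^Private pool/ { pool = $3; sub(/:$/, "", pool); free = $5 + 0; next }
  pool != "" && /drops/ {
      drops = $NF + 0
      if (free == 0 || drops > 0)
          printf "%s: free=%d drops=%d\n", pool, free, drops
      pool = ""
  }
' <<'EOF'
Private pool e0: free 228 (out of 1764)
mallocs 0, drains 0 (success 0), waits 0, drops 0
Private pool e11a: free 228 (out of 1764)
mallocs 0, drains 0 (success 0), waits 0, drops 0
Private pool e11b: free 0 (out of 1764)
mallocs 0, drains 0 (success 0), waits 0, drops 15107
EOF
```

For the data above it reports only e11b (free=0, drops=15107), which is what made me suspicious in the first place.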
Could this be why we see strange hangs when accessing an NFS-mounted directory for the first time in a while?
And if that is indeed the case, is there anything I can do to improve the performance?
/ Mats