As I understand it, simply increasing maxdirsize doesn't hurt things nearly as much as stuffing millions of files into a single directory can, which is typically why one raises maxdirsize in the first place. Lots of files in a single directory hurts when you do things like ls -l or the equivalent, where each directory entry causes a lookup of the corresponding inode. WAFL handles this better than many other filesystems out there, but it is still possible to pound a filer with these types of ops under these conditions.
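For what it's worth, maxdirsize is a per-volume option, so checking it and bumping it looks roughly like this on a 7-mode console (the volume name and value below are only examples, not recommendations -- check the docs on NOW for the details on your release):

    filer> vol options vol1
    (maxdirsize shows up in the option list, in KB)
    filer> vol options vol1 maxdirsize 20480

Note the value is in KB of directory metadata, not a count of files, so it only maps loosely to the number of entries a directory can hold.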
-- Adam Fox adamfox@netapp.com
________________________________
From: Jeff Mery [mailto:jeff.mery@ni.com]
Sent: Tuesday, May 22, 2007 3:24 PM
To: Blake Golliher
Cc: Fox, Adam; Magnus Swenson; owner-toasters@mathworks.com; toasters@mathworks.com
Subject: Maxdirsize
On a related note to the conversation below: what's the impact of increasing maxdirsize on a given volume? We have a qtree approaching the limit for its volume. Does maxdirsize function like maxfiles/inodes?
Jeff Mery - MCSE, MCP
National Instruments
------------------------------------------------------------------------
"Allow me to extol the virtues of the Net Fairy, and of all the fantastic dorks that make the nice packets go from here to there. Amen."
TB - Penny Arcade
------------------------------------------------------------------------
"Blake Golliher" thelastman@gmail.com Sent by: owner-toasters@mathworks.com
05/22/2007 11:40 AM
To "Fox, Adam" Adam.Fox@netapp.com cc "Magnus Swenson" magnuss@cadence.com, toasters@mathworks.com Subject Re: running out of inodes problem
Just add more with maxfiles, and ask about NetApp's plan to adopt dynamic inode allocation, which there may not be, but one can hope. :) We have a data set that constantly runs out of inodes; we just keep a close eye on it and add more inodes when needed. We've not had any issue with mysterious loss of performance when adding inodes using maxfiles.
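The routine check is just watching inode usage and raising the limit when it gets close; on the console that's something like the following (volume name and new limit are made-up examples):

    filer> df -i /vol/vol1
    filer> maxfiles vol1
    filer> maxfiles vol1 4000000

The first two commands only report inodes used/free and the current maximum; the last one raises the maximum, and it's worth raising it only modestly past what you actually need.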
hope that helps,
-Blake
On 5/22/07, Fox, Adam Adam.Fox@netapp.com wrote:
It depends on why you are running out of inodes. If your dataset uses lots of little files, then increasing the disk space probably won't help much because you'll end up having a lot of space sitting idle. If there are just a few places in the data that have lots of inodes, but it's the exception not the rule, then adding space will probably do the trick.
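As a rough worked example (assuming the usual default density of about one inode per 32 KB of volume space, which can vary by release and by whether maxfiles has been changed by hand): growing a volume by 10 GB works out to roughly 10 * 1024 * 1024 KB / 32 KB = 327,680 additional inodes, and that same ratio is roughly where the ~3.4 million default on a 100 GB volume comes from.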
The only caveat with adding inodes is to add them as you need them. Don't massively over-add inodes, as you'll increase some structures in the filesystem that could slow down your performance unnecessarily. Also keep in mind that once you increase the inodes in a volume, they cannot be decreased.
Just some thoughts on the topic.
-- Adam Fox adamfox@netapp.com
-----Original Message-----
From: Magnus Swenson [mailto:magnuss@cadence.com]
Sent: Tuesday, May 22, 2007 10:38 AM
To: toasters@mathworks.com
Subject: running out of inodes problem
Hello Toasters,
Just wanted to do a quick check on what the standard practice is when running out of inodes on a volume.
I have several flex volumes in one aggregate. One of the volumes ran out of inodes at only 80% full.
df -i will show the number of inodes used and inodes free.
This is a 100G volume with 3458831 inodes.
According to now.netapp.com, there are two solutions: increase inodes with the 'maxfiles' command, or add more disk space to the volume.
Has anybody had experience with this and which way did you go?