Magnus Swenson (magnuss@cadence.com) wrote:

Hello Toasters,
Just wanted to do a quick check on what the standard practice is when running out of inodes on a volume.
I have several flex volumes in one aggregate. One of the volumes ran out of inodes at only 80% full.
df -i shows the number of inodes used and the number free.
This is a 100G volume with 3458831 inodes.
According to now.netapp.com, there are two solutions:
increase the inode count with the 'maxfiles' command, or add more disk space to the volume.
Has anybody had experience with this, and which way did you go?
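For the record, the check looks something like this on the console; the volume name here is just a stand-in, and the exact column layout varies a little by ONTAP release:

    filer> df -i projvol
    Filesystem               iused      ifree  %iused  Mounted on
    /vol/projvol/          3458831          0    100%  /vol/projvol/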
It depends on why you are running out of inodes. If your dataset uses lots of little files, then increasing the disk space probably won't help much because you'll end up with a lot of space sitting idle. If there are just a few places in the data that have lots of inodes, but that's the exception rather than the rule, then adding space will probably do the trick.
The only caveat with adding inodes is to add them as you need them. Don't massively over-add inodes, as you'll increase some structures in the filesystem that could slow down your performance unnecessarily. Also keep in mind that once you increase the inodes in a volume, they cannot be decreased.
Just some thoughts on the topic.
-- Adam Fox adamfox@netapp.com
Just add more with maxfiles, and ask about NetApp's plan to adopt dynamic inode allocation. Which there may not be, but one can hope. :) We have a data set that constantly runs out of inodes; we just keep a close eye on it and add more inodes when needed. We've not had an issue with mysterious loss of performance when adding inodes using maxfiles.
hope that helps,
-Blake
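For anyone who hasn't done it, the bump itself is a one-liner on the console. A rough sketch, with the volume name and new count as placeholders (maxfiles with no count just reports the current setting, and remember it can't be lowered again):

    filer> maxfiles projvol              (report the current limit)
    filer> maxfiles projvol 4500000      (raise the limit to 4.5 million files)

and the space-based alternative from the NOW article would be along the lines of:

    filer> vol size projvol +10g         (grow the flexvol; per the NOW article this raises the inode allocation too)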
On a related note to this conversation - what's the impact of increasing maxdirsize on a given volume? We have a qtree approaching the limit for its volume. Does maxdirsize function like maxfiles/inodes?
Jeff Mery - MCSE, MCP National Instruments
------------------------------------------------------------------------- "Allow me to extol the virtues of the Net Fairy, and of all the fantastic dorks that make the nice packets go from here to there. Amen." TB - Penny Arcade -------------------------------------------------------------------------
"Blake Golliher" thelastman@gmail.com Sent by: owner-toasters@mathworks.com 05/22/2007 11:40 AM
To "Fox, Adam" Adam.Fox@netapp.com cc "Magnus Swenson" magnuss@cadence.com, toasters@mathworks.com Subject Re: running out of inodes problem
Just add more with maxfiles, and ask about netapps plan to adopt dynamic inode allocation. Which there may not be, but one can hope. :) We have a data set that constantly runs out of inodes, we just keep a close eye on it, and add more inodes when needed. We've not had an issue with mysterious loss of performance when adding inodes using maxfiles.
hope that helps,
-Blake
On 5/22/07, Fox, Adam Adam.Fox@netapp.com wrote:
It depends on why you are running out of inodes. If your dataset uses lots of little files, then increasing the disk space probably won't help much because you'll end up having a lot of space sitting idle. If there are just a few places in the data that have lots of inodes, but it's the exception not the rule, then adding space will probably do the trick.
The only caveat with adding inodes is to add them as you need them. Don't massively over-add inodes as you'll increase some structures in the filesystem that could slow down your performance unecessarily. Also keep in mind that once you increase the inodes in a volume, they cannot be decreased.
Just some thoughts on the topic.
-- Adam Fox adamfox@netapp.com
-----Original Message----- From: Magnus Swenson [mailto:magnuss@cadence.com] Sent: Tuesday, May 22, 2007 10:38 AM To: toasters@mathworks.com Subject: running out of inodes problem
Hello Toasters,
Just wanted to do a quick check, what the standard practise is when running out of inodes on a volume.
I have several flex volumes in one aggregate. One of the volumes only at 80% full ran out of inodes.
df -i will show number of inodes used and inodes free.
This is a 100G volume with 3458831 inodes.
According to now.netapp.com, there are two solutions,
increase inodes with the 'maxfiles' command, or add more disk space to the volume.
Has anybody had experience with this and which way did you go?
As I understand it, simply increasing maxdirsize doesn't noticeably hurt things as much as actually stuffing millions of files into a single directory can, which is typically why one raises maxdirsize in the first place. Lots of files in a single directory hurts when you do things like ls -l or the equivalent, where each directory entry causes a lookup of the corresponding inode. WAFL handles this better than many other filesystems out there, but it is possible to pound a filer with these types of ops under these conditions.
-- Adam Fox adamfox@netapp.com
Chris Thompson (cet1@cus.cam.ac.uk) replies:

jeff.mery@ni.com (Jeff Mery) asks:
On a related note to this conversation - what's the impact of increasing maxdirsize on a given volume? We have a qtree approaching the limit for its volume. Does maxdirsize function like maxfiles/inodes?
Not really. maxdirsize can be altered up or down at any time. It's a safety limit on the size of any individual directory. What hurts the system (often quite badly) is actually having directories that big (maxdirsize defaults to 1% of the filer's memory size). Putting it back down will not cause an overlarge directory to shrink (although it will stop it getting any larger).
"A qtree approaching the limit for its volume" doesn't sound relevant here unless you have only one directory in the qtree, i.e. a completely flat naming scheme. That certainly isn't a good idea.
You're spot-on with the qtree comment; there is only one directory in the qtree. I also agree that the flat structure is a bad idea. Unfortunately, that's the way this particular application likes to store its files (boo!). Sounds like I'll need to beat up on the app owner to see what we can do about creating additional depot directories to thin things out a bit.
However, it's good to know that we can move this up and down as we need to (...and hopefully don't need to). Thanks for the info.
Jeff Mery - MCSE, MCP National Instruments
This has happened quite a few times to me. Coming from an EDA environment as well, it's not uncommon for relatively small volumes to have huge numbers of files (20 million files on a 450GB volume).
maxfiles is what I use, typically adding a million files at a time. I'm not exactly sure what algorithm NetApp uses to add inodes as you increase volume size, so I just take the more direct route. Plus I don't want to just throw space at engineers who will consume it "just because." Remember, after adding inodes, you can't decrease the number, and they consume space from the volume.
-- /* wes hardin */ UNIX System Admin Dallas Semiconductor/Maxim Integrated Products
Looks like by default you get 1 inode for every filesystem data block (4K block size). This would be plenty if each file consumed at least one 4K block. But files 64 bytes or smaller are stored entirely in the inode and therefore do not consume any data blocks. Rather than allocate an entire data block for so little data, WAFL places the data in the inode where the pointers to the file's data blocks are ordinarily stored.
So if you have a lot of files 64 bytes or smaller, then you need inodes for them, but no data blocks, so increase maxfiles. (Often symbolic links are short enough to fit in the inode.) You may still want to grow the volume a little to provide room for the new inodes. The inode table is stored in an invisible "meta file" within the volume. The root of the entire volume is the inode for the inode file. The location of everything else in the volume is stored in the inode file.
Steve Losen scl@virginia.edu phone: 434-924-0640
University of Virginia ITC Unix Support
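One rough way to tell whether this is your situation is to count the tiny files from an NFS client. A sketch, with the mount point made up; files of 64 bytes or less match -size -65c:

    client$ find /mnt/projvol -type f | wc -l              (total files)
    client$ find /mnt/projvol -type f -size -65c | wc -l   (files small enough to fit in the inode)

If the second number is a large fraction of the first, maxfiles is the right lever; growing the volume would mostly add data blocks you don't need.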
scl@sasha.acc.virginia.edu (Stephen C. Losen) writes:
Looks like by default you get 1 inode for every filesystem data block (4K block size).
That was the old scheme, where the default was 1 inode / 4KB and the minimum allowed 1 inode / 32KB (roughly). These days the default (and minimum) for a flexible volume is the old minimum. That's probably why people run out of inodes more often than they used to.
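The original poster's numbers are roughly consistent with that; a back-of-the-envelope check, not an exact formula:

    100 GB = 100 x 1024 x 1024 KB        = 104,857,600 KB
    104,857,600 KB / 32 KB per inode     ~ 3,276,800 inodes
    104,857,600 KB / 3,458,831 inodes    ~ 30 KB per inode

so the reported 3,458,831 is in the right ballpark for the 1-inode-per-32KB default, though whatever ONTAP actually computes evidently isn't a straight division.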
Adam.Fox@netapp.com (Adam Fox) writes:
The only caveat with adding inodes is to add them as you need them. Don't massively over-add inodes, as you'll increase some structures in the filesystem that could slow down your performance unnecessarily.
I think the word that should be emphasized there is "massively". It's no more sensible to have your inode metafile always nearly full than to have your aggregates/traditional-volumes in that state. There are overheads that increase if you do.
There is a hidden 5% reserve in the inode metafile (i.e. it is really 20/19 times the maxfiles value, as you can see by looking at the inode numbers actually used) which is meant to stop inode allocation going exponential on you (like the 10% space reserve in an aggregate). That doesn't mean that operating at the extreme limit allowed is ideal.
There was a major change to the inode allocation algorithm (sometime early in ONTAP 6.x, I think) which substantially improved the performance with the inode metafile nearly full. (It had some deleterious effects in other contexts, though, as I might get around to posting about one of these years.) But there's no point in stressing it unnecessarily.
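As an illustration of that 20/19 factor using the original poster's figure: 3,458,831 x 20/19 is about 3,640,875, i.e. the inode metafile would have room for roughly 182,000 inodes beyond the advertised maxfiles value.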