It depends on why you are running out of inodes. If your dataset consists mostly of small files, then increasing the disk space probably won't help much, because you'll run out of inodes again long before the space fills and you'll end up with a lot of capacity sitting idle. If only a few spots in the data are inode-heavy, and that's the exception rather than the rule, then adding space will probably do the trick, since growing a volume also raises its default inode count in proportion.
The only caveat with adding inodes is to add them as you need them. Don't massively over-provision inodes, as you'll grow some structures in the filesystem that could slow down your performance unnecessarily. Also keep in mind that once you increase the inode count on a volume, it cannot be decreased.
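As a rough sketch of what that looks like on the console (the volume name and numbers here are made up, and the exact df -i output format varies by ONTAP release):

    toaster> df -i /vol/vol1
    Filesystem               iused      ifree  %iused  Mounted on
    /vol/vol1/             3458831          0    100%   /vol/vol1/

    toaster> maxfiles vol1            (with no count, shows the current limit)
    toaster> maxfiles vol1 3800000    (raises the limit modestly)

A modest bump, say 10% or so over current usage, is usually plenty; you can always raise it again later, but per the above you can never lower it.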
Just some thoughts on the topic.
-- Adam Fox adamfox@netapp.com
-----Original Message-----
From: Magnus Swenson [mailto:magnuss@cadence.com]
Sent: Tuesday, May 22, 2007 10:38 AM
To: toasters@mathworks.com
Subject: running out of inodes problem
Hello Toasters,
Just wanted to do a quick check on what the standard practice is when running out of inodes on a volume.
I have several flex volumes in one aggregate. One of the volumes ran out of inodes at only 80% full.
df -i will show the number of inodes used and inodes free.
This is a 100G volume with 3458831 inodes.
According to now.netapp.com, there are two solutions: increase inodes with the 'maxfiles' command, or add more disk space to the volume.
Has anybody had experience with this and which way did you go?