This has happened to me quite a few times. I also come from an EDA environment, where it's not uncommon for relatively small volumes to hold huge numbers of files (20 million files on a 450GB volume).
maxfiles is what I use, typically adding a million files at a time. I'm not exactly sure what algorithm NetApp uses to add inodes as you increase volume size, so I just take the more direct route. Plus I don't want to just throw space at engineers who will consume it "just because." Remember, once you've added inodes you can't decrease the number, and they consume space from the volume.
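For anyone who hasn't done this before, it's roughly the following on a 7-mode console (the prompt and the volume name "eda_vol" are placeholders here; check the maxfiles man page on your ONTAP release for the exact syntax on your filer):

    filer> maxfiles eda_vol               (report the current limit and how many inodes are in use)
    filer> maxfiles eda_vol 21000000      (raise the limit, here by roughly a million)

Run it without the number first so you know where you're starting from, since the change is one-way.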
Looks like by default you get 1 inode for every filesystem data block (4K block size). This would be plenty if each file consumed at least one 4K block. But files 64 bytes or smaller are stored entirely in the inode and therefore do not consume any data blocks. Rather than allocate an entire data block for so little data, WAFL places the data in the inode where the pointers to the file's data blocks are ordinarily stored.
So if you have a lot of files that are 64 bytes or smaller, you need inodes for them but no data blocks, so increase maxfiles. (Symbolic links are often short enough to fit in the inode.) You may still want to grow the volume a little to make room for the new inodes, because the inode table is stored in an invisible "meta file" within the volume. The root of the entire volume is the inode for the inode file, and the location of everything else in the volume is stored in the inode file.
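A quick way to confirm you're inode-bound rather than space-bound (again assuming a 7-mode console and a placeholder volume name):

    filer> df eda_vol        (is there plenty of free space in the volume?)
    filer> df -i eda_vol     (is %iused at or near 100%?)

If df shows free space while df -i shows the inodes nearly exhausted, bumping maxfiles is the right fix; if both are tight, grow the volume as well.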
Steve Losen scl@virginia.edu phone: 434-924-0640
University of Virginia ITC Unix Support