Hi, We have an application where we need to store between 4 and 20 million small files on a large drive or a 3-drive RAID system.
Hi Maren
A) How small is "small"? Some current filesystems can store very small files without using any additional data blocks, by keeping the file content in the file's inode itself. This size limit varies between filesystems... Therefore we need to know the size of your "small" files. Less than 64 bytes? Less than x bytes?
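To answer that question, a quick scan of the existing files helps. A minimal sketch (the directory layout and the threshold values here are only examples, not anything from your setup):

```python
import os

def size_histogram(root, thresholds=(64, 256, 1024)):
    """Count how many files under 'root' fall below each size threshold."""
    counts = {t: 0 for t in thresholds}
    total = 0
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            size = os.path.getsize(os.path.join(dirpath, name))
            total += 1
            for t in thresholds:
                if size < t:
                    counts[t] += 1
    return total, counts
```

Run it on a representative sample directory and you know immediately which "inline in the inode" limits you could profit from.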
B) I now assume the worst: that your application produces "ugly" files of approximately 100 bytes each ...
Filesystems are hierarchical databases, usually meant to store big(!) amounts of data, using the filenames (including the path) as primary keys to locate the stored data. Another focus is multi-user management. Therefore I dare to declare that any filesystem you might choose will be the "wrong" kind of database for the type of data (many small records) your application produces.
=> Do you have the possibility to change the application?
If yes: 1) Is it possible to collect the data of multiple files into single files?
2) How about changing the application to use a "real" dedicated database for managing all these small data records? There are many databases that handle huge numbers of small entries with lower response times than a hierarchical filesystem. And those database files could be stored on a NetApp Filer... :-)
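Option (1) could look like this: append many small records into one container file and keep an index of offsets. A minimal sketch, assuming variable-length byte payloads with string keys (the function names are made up for illustration):

```python
def pack_records(records, path):
    """Append (key, payload) pairs into one container file.
    Returns an in-memory index: key -> (offset, length)."""
    index = {}
    with open(path, "wb") as f:
        for key, payload in records:
            index[key] = (f.tell(), len(payload))
            f.write(payload)
    return index

def read_record(path, index, key):
    """Read one payload back via seek, without a per-record file."""
    offset, length = index[key]
    with open(path, "rb") as f:
        f.seek(offset)
        return f.read(length)
```

With, say, 10,000 records per container you go from 20 million inodes down to 2,000 files; the index itself would of course have to be persisted as well.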
If no: 3) For Linux or IRIX, I would choose XFS. It's solid as a rock and flexible as a rubber band, including a very dynamic, flexible inode management. As long as there is some space left in the filesystem, it will automatically create new inodes when required. But there is no built-in version control like WAFL offers. :-(
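You can watch the inode headroom of a mounted filesystem yourself; on XFS the total grows as needed, while on fixed-inode filesystems it is set once at mkfs time. A small sketch (the function name is mine, not from any tool):

```python
import os

def inode_usage(mountpoint):
    """Return (total_inodes, free_inodes) for the filesystem at mountpoint."""
    st = os.statvfs(mountpoint)
    return st.f_files, st.f_ffree
```

Comparing these two numbers while the application fills the disk tells you early whether you are about to run out of inodes rather than bytes.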
Assuming the worst-case 100 bytes/file scenario, my personal choice would be (2). "Back to the filers." ;-)
Smile & regards Dirk