So the maxfiles table is like a sparse file (wow, haven't used
that term since my Apple II days) without an inode number?
Yep. Check out the WAFL paper under the "technology" / "architecture" section of our web page for details.
All of the meta-data in WAFL is stored in hidden files with inode numbers in the range 32-63, except for the inode file itself, whose inode is stored at a fixed location on disk where it can be found at boot time. (Multiple fixed locations, actually.)
I rezeroed all the filers again, and watched the df output after
each stage of setting up a filer. They all started out the same, but then the disk usage appears to jump when the first snapshot is created.
Oh -- another meta-data file that starts out sparse is the blkmap file, which keeps track of which blocks are used in the active filesystem and in the snapshots.
The first time you create a snapshot, WAFL marches through the whole blkmap file, copying the active filesystem bitplane into the snapshot bitplane, which faults in the whole file. (If you don't create a snapshot, then the blkmap file will be faulted in over time as WAFL scans through the disks allocating space for newly written data.)
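That bitplane copy can be sketched in a few lines. This is a hypothetical illustration, not NetApp's actual code: it assumes one 32-bit blkmap entry per disk block, with bit 0 meaning "in use by the active filesystem" and bit N meaning "in use by snapshot N", which is the layout the WAFL paper describes.

```python
# Hypothetical sketch of the first-snapshot blkmap walk (assumed
# layout: bit 0 = active filesystem, bit N = snapshot N).
ACTIVE_BIT = 1 << 0

def create_first_snapshot(blkmap, snap_id):
    """Copy the active-filesystem bitplane into a snapshot bitplane.

    Touching every entry is what faults the whole (previously
    sparse) blkmap file into allocated disk blocks.
    """
    snap_bit = 1 << snap_id
    for i, entry in enumerate(blkmap):
        if entry & ACTIVE_BIT:
            blkmap[i] = entry | snap_bit

# Toy example: three blocks, two in use by the active filesystem.
blkmap = [ACTIVE_BIT, 0, ACTIVE_BIT]
create_first_snapshot(blkmap, snap_id=1)
# Every block the active filesystem uses is now also marked as
# belonging to snapshot 1; the free block stays free.
```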
That file is one MB per GB of disk space, so after the first snapshot in a brand new 100 GB filesystem, you should lose about 100 MB.
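The 1 MB per GB figure checks out if you assume 4 KB blocks and a 4-byte (32-bit) blkmap entry per block; those two numbers are my assumptions, not stated above:

```python
# Back-of-the-envelope check of the "1 MB per GB" blkmap ratio.
BLOCK_SIZE = 4 * 1024   # assumed bytes per WAFL block
ENTRY_SIZE = 4          # assumed bytes per blkmap entry

def blkmap_bytes(disk_bytes):
    """Size of the blkmap file needed to cover disk_bytes of disk."""
    return (disk_bytes // BLOCK_SIZE) * ENTRY_SIZE

GB = 1024 ** 3
MB = 1024 ** 2
print(blkmap_bytes(100 * GB) // MB)  # -> 100 (MB for a 100 GB filesystem)
```

So 4 bytes of blkmap per 4096-byte block works out to exactly 1/1024 of the disk, i.e. 1 MB per GB.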
Anyway, just more of my curious musings, in my attempts to see how
this Netapp contraption works. ;-)
Black-box reverse engineering at its best. You remind me of some of the engineers here at NetApp. "Hey! Check out THIS new way to kill a filer..."
NetApp Engineering: Where sadism is job #1.
Dave