A few questions maybe someone could answer:
- Why does a NetApp scrub the disk when it is in degraded mode?
You've got me stumped. Sounds like a bug?
- Is the WAFL block size still 4k? I seem to remember that it had grown, but all of the online documentation says 4k.
It's still 4K.
- What is the chunk size in NetApp RAID (using the term "chunk" to mean the amount of data written on each disk in each stripe)?
This question is a little tricky, because of the way NetApp's WAFL and RAID are implemented together.
Each stripe is just 4 KB times however many disks you have.
However, WAFL understands the underlying RAID geometry, and as it is write-allocating a given file, it makes sure to put a certain number of blocks (32 KB worth) down one disk before moving on to the next disk.
That is, if you write a 96 KB file, WAFL will put the first 32 KB down one disk, the next 32 KB down a second disk, and the final 32 KB on a third disk. The idea here is to reduce seeking for small files by keeping the data on the same disk, but to increase bandwidth for large files by getting multiple heads working at once. Today's high-speed networks are faster than a single disk.
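To make that layout concrete, here is a minimal sketch of such a write-allocation policy. This is illustrative only, not NetApp's code: the function name wafl_style_layout, the disk count, and the constants are my own assumptions for the example (4 KB blocks, a 32 KB per-disk write chunk, 3 data disks).

    # Illustrative sketch only (not NetApp's code): lay out a file's 4 KB
    # blocks WAFL-style, filling a 32 KB run (8 blocks) on one data disk
    # before moving on to the next.

    BLOCK_SIZE = 4 * 1024                          # WAFL block size
    WRITE_CHUNK = 32 * 1024                        # contiguous bytes per disk
    BLOCKS_PER_CHUNK = WRITE_CHUNK // BLOCK_SIZE   # 8 blocks

    def wafl_style_layout(file_size, num_data_disks):
        """Return (block_index, disk) pairs for each 4 KB block of a file."""
        num_blocks = (file_size + BLOCK_SIZE - 1) // BLOCK_SIZE
        return [(blk, (blk // BLOCKS_PER_CHUNK) % num_data_disks)
                for blk in range(num_blocks)]

    # A 96 KB file on 3 data disks: blocks 0-7 land on disk 0,
    # blocks 8-15 on disk 1, blocks 16-23 on disk 2.
    for blk, disk in wafl_style_layout(96 * 1024, 3):
        print(f"block {blk:2d} -> disk {disk}")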
Most RAID subsystems have no concept of a file, and most filesystems have no concept of the RAID geometry, so they define chunks by altering the block numbering scheme on the disks so that a certain number of physical block numbers in a row land on one disk before moving to the next. This does tend to distribute the blocks of a file over several disks, but not with the precision of WAFL's technique.
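For contrast, a conventional chunked-striping layer might map block numbers to disks the way the sketch below does. Again, this is a generic illustration under assumed parameters (8-block chunks, 3 data disks), not any particular vendor's implementation.

    # Illustrative sketch of conventional chunked striping: the RAID layer
    # maps block numbers to disks in fixed-size chunks and knows nothing
    # about files. Chunk size and disk count are assumptions for the example.

    CHUNK_BLOCKS = 8        # assume an 8-block (32 KB) chunk

    def raid_chunk_mapping(block_number, num_data_disks):
        """Map a logical block number to (disk, block offset on that disk)."""
        chunk = block_number // CHUNK_BLOCKS
        disk = chunk % num_data_disks
        offset = ((chunk // num_data_disks) * CHUNK_BLOCKS
                  + block_number % CHUNK_BLOCKS)
        return disk, offset

    # Whether a file's consecutive blocks stay on one disk or spread across
    # several depends entirely on which block numbers the filesystem
    # happened to allocate for it.
    for bn in (0, 7, 8, 23, 24):
        print(f"block {bn:3d} -> disk/offset {raid_chunk_mapping(bn, 3)}")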
This is complicated. I hope I'm making sense.
In our most current internal releases, the 32 KB "write chunk" (the amount of a single file placed on one disk before moving to the next) has been increased to 96 KB. It may be that this is what you heard about that made you think WAFL's block size had increased.
Dave