I've been reading the 6.0.1R1 "Start Here" document (for advance planning: we're not about to move to it) and especially the "New features in 6.0.1" section.
I wonder whether someone can explain how "block checksum" disks (and volumes) differ from "zoned checksum" ones, at a slightly more technical level than "block checksum disks are the ones with yellow front bezel ID labels and the orange system requirements labels on the side"? :)
Chris Thompson University of Cambridge Computing Service, Email: cet1@ucs.cam.ac.uk New Museums Site, Cambridge CB2 3QG, Phone: +44 1223 334715 United Kingdom.
Here is a small right up. This should be on NOW tomorrow.
================
The RAID implementation in Data ONTAP 6.0 and later ensures reliability by allocating every 64th block to store checksum data for the preceding 63 blocks. This is referred to as 'zoned checksums'. Data ONTAP 6.0.1R1 can also use a new RAID data checksum type that improves the efficiency of disk access, known as 'block checksums'. The block checksum feature is available on Fibre Channel FC9 disk shelves, using disks formatted for block checksums.
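To make the zoned-checksum layout concrete, here is a small sketch (not Data ONTAP code; the function names are made up for illustration) showing which block indices the scheme reserves and what the overhead works out to:

```python
# Zoned-checksum layout as described above: every 64th block holds
# checksum data for the preceding 63 blocks. Illustrative only.
ZONE = 64  # 63 data blocks + 1 checksum block per zone

def is_checksum_block(block_no):
    """True if this block index is reserved for checksum data."""
    return block_no % ZONE == ZONE - 1

def zone_overhead(total_blocks):
    """Fraction of raw blocks consumed by checksum blocks."""
    return sum(is_checksum_block(b) for b in range(total_blocks)) / total_blocks

# One checksum block per 64 raw blocks -> 1/64, about 1.56% overhead.
print(zone_overhead(6400))  # 0.015625
```

The point of the contrast below is that this extra checksum block sits 63 blocks away from the data it protects, so reads and writes can incur an extra seek.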
Block checksums are supported at the volume level and require each disk to be formatted to 520 bytes per sector, as opposed to the standard 512 bytes per sector. The extra space is used as follows: each group of eight 520-byte sectors holds 4,096 bytes of file system data, and the remaining 64 bytes contain the checksum for those 4,096 bytes. In this manner the checksum is appended to each 4 KB block of data.
Block checksums are supported on pre-formatted 18GB, 36GB, and 72GB drives, only on FC9 shelves. Note that this format is different from that of the drives shipped with previous versions of Data ONTAP. While the new drives can be added to a zoned checksum volume, the reverse is not true.
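The sector arithmetic behind the 520-byte format can be checked directly. A minimal sketch, assuming (per the write-up) that eight sectors back each 4 KB file system block:

```python
# Sector math for block checksums: each 4 KB file-system block spans
# eight 520-byte sectors, so the checksum travels with its data block.
SECTOR = 520           # bytes per sector on a block-checksum disk
STD_SECTOR = 512       # bytes per sector on a zoned-checksum disk
SECTORS_PER_BLOCK = 8  # sectors backing one 4 KB block

raw = SECTOR * SECTORS_PER_BLOCK       # total bytes per group
data = STD_SECTOR * SECTORS_PER_BLOCK  # file system data per group
checksum = raw - data                  # checksum bytes per group

print(raw, data, checksum)  # 4160 4096 64
```

Because the 64 checksum bytes live in the same eight sectors as the data, verifying a block needs no extra seek, which is where the efficiency gain over zoned checksums comes from.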
=================
Thanks!
-Puneet ___________
Yes I did need to spell check this :-)
Apart from that, one clarification: it's "FC9 disk shelves", not "F9 disk shelves". There are no F9 shelves.
Thanks!
-Puneet
PS: Oh yeah, it's a "Write Up". :-) ___________
Puneet Anand wrote:
Block checksums are supported at the volume level and require each disk to be formatted to 520 bytes per sector, as opposed to the standard 512 bytes per sector. The extra space is used as follows: each group of eight 520-byte sectors holds 4,096 bytes of file system data, and the remaining 64 bytes contain the checksum for those 4,096 bytes. In this manner the checksum is appended to each 4 KB block of data.
Oh, MAN, I'm having a flashback...
21 years ago - and boy, does it seem *weird* to write that - my first programming experience was with a very funky old machine called the PERQ. (Some of you in England may have amusing stories of your own regarding the old beast -- think ICL, "Common Base Programme", RAL, etc.) Anyway, the PERQ was the first commercially available machine that we call a "workstation", i.e., the first thing outside of Xerox PARC to be sold to the public with a meg of memory, a MIP of computing power, and a million pixel display.
The PERQ also featured a very oddly hacked hard disk controller, which was custom-built to drive an old Shugart 14" Winchester (24 whole megabytes, baby! 85ms avg. access! Wooo!). The disk controller wrote filesystem data into a separate block header along with 512 bytes of data. This was the only way you could maintain any prayer of keeping your filesystem intact, given how flaky the hardware tended to be; but because of that extra header, you could blow away your free list, trash your partition info blocks, hell, practically drag a nail across the platters and the old "scavenger" program could still reconstruct your filesystem just by walking each block and patching it all back together from the headers. Pretty cool stuff, for 1980!
Nowadays I could emulate the PERQ - main memory, all the devices, and the entire filesystem - just in the L2 cache of my E4500. Wow.
Yeah, so it isn't exactly on-topic, but heck, it's Friday and I'm waxing nostalgic. :-)
Musing somewhat related to toasters: Like GigE pushing against the limits of a 1500 byte MTU, isn't it about time we start moving to 1K or larger block sizes on our SCSI drives? In the age of FibreChannel and Ultra160, shuffling all those wee blocks about seems silly, given that filesystems are using 1K, 2K, 4K and larger logical block sizes. A performance boost could be gained on the old NeXT boxes by using 1K physical blocks on drives that could support it, and that was... gads... a decade ago. Ahhh, stinky old hardware. I love it!
Cheers,
-- Chris
-- Chris Lamb, Unix Guy (and PERQ Fanatic) MeasureCast, Inc. 503-241-1469 x247 skeezics@measurecast.com