Okay, can someone help explain to me what happens to volumes between Data ONTAP 5.x and 6.x with respect to the upgrade steps and times.
In our situation, we're vol copy'ing data from 5.3.7R3 to 6.1R1. The first annoyance is that the destination filer (6.1R1) says the following:
Mon Jun 11 15:57:13 PDT [turbine: rshd_0:error]: Filesystem version mismatch for destination volume vol2_2, reverting the version and aborting transfer. A console message will be displayed when this revert is complete.
The fabled console message it describes never appears. And even if it did: please give me a message I can route via syslog.conf to whichever logging facility I please. Sending something to the console only, with no way to configure it to go somewhere else, makes me very sad, since I'm not going to be watching the console for hours at a time.
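For reference, filers do read /etc/syslog.conf off the root volume, so routing like this is at least possible for messages that go through syslog. A hypothetical fragment (which facility.severity a revert-complete message would actually use is my guess, not something documented):

```
# Hypothetical /etc/syslog.conf on the filer's root volume.
# The facility.severity selector for this particular message
# is an assumption on my part.
*.err                           /etc/messages
*.err                           @loghost.example.com
```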
Does anyone know how long the revert process takes?
Our next problem is that now that we've vol-copied, the destination volume wants to upgrade itself. Again, does anyone know what the steps are and how long they take to upgrade?
I found the following information in "vol status -c":
turbine*> vol status -c
        Volume Checksum style  Checksum status
        vol0_1 zoned           Checksums active
          root zoned           Checksums active
        vol0_2 zoned           Checksums active
        vol2_2 zoned           Checksums initializing: RAID upgrade phase 1 (of 2) in progress
        vol2_1 zoned           Checksums initializing: WAFL upgrade in progress
(BTW, wouldn't alphanumeric sorting be nice in output like this?)
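In the meantime it's easy enough to sort on the receiving end. A quick sketch in Bourne shell; the sample lines are pasted from the output above, and in real use you'd feed it something like "rsh turbine vol status -c" instead of the here-document:

```shell
# Sort "vol status -c" body lines by volume name, keeping the header first.
sort_volstatus() {
    IFS= read -r header          # first line is the column header
    printf '%s\n' "$header"
    sort                         # remaining lines, sorted by volume name
}

sort_volstatus <<'EOF'
Volume Checksum style  Checksum status
vol0_1 zoned           Checksums active
root   zoned           Checksums active
vol0_2 zoned           Checksums active
vol2_2 zoned           Checksums initializing: RAID upgrade phase 1 (of 2) in progress
vol2_1 zoned           Checksums initializing: WAFL upgrade in progress
EOF
```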
So is it the case that we have 2 RAID upgrade phases and then a WAFL upgrade? Are there more or fewer steps, and what order do they come in? How long does each one take?
Assuming the time is a question of the amount of data and beef in the server, we're talking about F840's working on volumes that are about 500GB big with 200-400GB of data on them.
Thanks fellow toasters!
-- Jeff
--
----------------------------------------------------------------------------
Jeff Krueger, NetApp CA               E-Mail: jeff@qualcomm.com
Senior Engineer                       Phone:  858-651-6709
NetApp Filers / UNIX Infrastructure   Fax:    858-651-6627
QUALCOMM, Inc. IT Engineering         Web:    www.qualcomm.com
jkrueger@qualcomm.com (Jeffrey Krueger) asks
[Questions about reverting which I can't help with as I've not done this]
> Our next problem is that now that we've vol-copied, the destination
> volume wants to upgrade itself. Again, does anyone know what the steps
> are and how long they take to upgrade?
>
> I found the following information in "vol status -c":
>
> turbine*> vol status -c
>         Volume Checksum style  Checksum status
>         vol0_1 zoned           Checksums active
>           root zoned           Checksums active
>         vol0_2 zoned           Checksums active
>         vol2_2 zoned           Checksums initializing: RAID upgrade phase 1 (of 2) in progress
>         vol2_1 zoned           Checksums initializing: WAFL upgrade in progress
>
> (BTW, wouldn't alphanumeric sorting be nice in output like this?)
>
> So is it the case that we have 2 RAID upgrade phases and then a WAFL
> upgrade? Are there more or less steps and what order do they come in?
> How long does each one take?
>
> Assuming the time is a question of the amount of data and beef in the
> server, we're talking about F840's working on volumes that are about
> 500GB big with 200-400GB of data on them.
The sequence of states shown by "vol status -c" during upgrading goes like this:
1. "Checksums initializing: WAFL upgrade in progress". This is while it is still moving blocks in the active filing system away from where the checksums are going to go. This is done as a lowish-priority process to avoid impacting performance too much. For us, it took about 4 hours on a 6+1 x 18GB volume on an F740.
2. "Checksums initializing: WAFL upgrade will complete when the following snapshot(s) are deleted: ...". This is while it is waiting for all snapshots created before the end of the phase above to go away (as they still have blocks allocated in the checksum positions, which can't be moved). How long this goes on depends on your snapshot schedule, obviously.
3. "Checksums initializing: RAID upgrade phase 1 (of 2) in progress". This part took under 40 minutes on the volume mentioned before.
4. "Checksums initializing: RAID upgrade phase 2 (of 2) will be performed during next disk scrub". Persists until the next scrub - either wait for the weekly one (unless you've disabled it), or start one off with "disk scrub start" at a quiet time.
5. "Checksums initializing: RAID upgrade phase 2 (of 2) in progress". The scrub is running - it doesn't take noticeably longer on this occasion than on any other (although there is more disk write activity, I think).
6. "Checksums active". Self-explanatory. :-)
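Incidentally, these states are easy to recognize mechanically if you'd rather track progress from a cron job than stare at the console. A sketch in Bourne shell - the status strings are the ones listed above, but everything else (the function name, the suggested rsh invocation) is only illustrative:

```shell
# Map a "Checksum status" string to the upgrade stage (1-6) described
# above.  Feed it status text from, e.g.: rsh turbine vol status -c
checksum_stage() {
    case "$1" in
        *'WAFL upgrade in progress'*)                echo 1 ;;
        *'WAFL upgrade will complete when'*)         echo 2 ;;
        *'RAID upgrade phase 1 (of 2) in progress'*) echo 3 ;;
        *'phase 2 (of 2) will be performed'*)        echo 4 ;;
        *'RAID upgrade phase 2 (of 2) in progress'*) echo 5 ;;
        *'Checksums active'*)                        echo 6 ;;
        *)                                           echo '?' ;;
    esac
}

checksum_stage 'Checksums initializing: RAID upgrade phase 1 (of 2) in progress'  # prints "3"
```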
The two "RAID upgrade" phases between them get the zone checksums into the right state. Technically speaking, I don't understand the division of labour between them.
All this is independent for each volume, of course.
Chris Thompson
University of Cambridge Computing Service,    Email: cet1@ucs.cam.ac.uk
New Museums Site, Cambridge CB2 3QG,          Phone: +44 1223 334715
United Kingdom.