That's correct; I did that yesterday (with help from NetApp). After you wipe the disk labels in maintenance mode, the disks will appear with status "label broken". Boot back into normal mode and set a new label with "disk unfail -s <disk_ID>". After that, the disks appear as spare disks and are ready to use.
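For anyone hitting this later, the whole round trip looks roughly like this (a sketch only; 6b.69 is a placeholder disk ID, and your prompts and IDs will differ):

*> label wipe 6b.69 (maintenance mode: clears the bad RAID label; the disk then shows "label broken")
*> halt (back to the firmware prompt, then boot into normal mode)
filer> disk unfail -s 6b.69 (writes a fresh label and turns the disk into a spare)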
-SF
Strickland, Michael wrote:
From ONTAP, "disk unfail -s <disk>" should work in this situation if the
disks are "label broken", I believe. This assumes there is no needed data on those disks.
- Michael NGS
-----Original Message-----
From: Stefan Funke [mailto:bundy@usage.de]
Sent: Thursday, September 18, 2008 11:26 AM
To: Michael Schipp
Cc: Christian Mikovits; toasters@mathworks.com
Subject: Re: bad raid label version
Hi! I've had the same problem. NetApp suggests:
- Look up the disks which show the error,
- Reboot the filer and press Ctrl-C,
- From the 1-5 menu, choose option 5 to boot into maintenance mode,
- Clear the label on those disks by typing: label wipe <disk_ID>
Note: For Data ONTAP 7.2, use label makespare instead of label wipe for new disks.
Example:
*> label makespare 6b.69
label makespare: Disk 6b.69 forced to be a SPARE disk
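Once the filer is back in normal mode, it's worth confirming that the disks really came back as spares; on 7-mode either of these should list them (a sketch, not verbatim output):

filer> vol status -s (lists spare disks)
filer> aggr status -s (same information from the aggregate side)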
Michael Schipp wrote:
"Label wipe" I think
From diag mode
-----Original Message-----
From: owner-toasters@mathworks.com [mailto:owner-toasters@mathworks.com] On Behalf Of Christian Mikovits
Sent: Wednesday, 17 September 2008 7:19 AM
To: toasters@mathworks.com
Subject: bad raid label version
hey guys.
I've set up a filer with 7.3, but now I want to revert to 7.2.
Everything's fine, but 2 disks remain with raid label version 9 and are unusable.
revert_to isn't working; disk fail/unfail is impossible ("is not a file
system disk");
scsi format isn't working either...
any suggestions?
greets,
chris