Rob,
I remember testing the process of failing disks on our test filer - an F87. When we did this, the filer wrote something somewhere (I can't remember offhand) and we had to trade in the disk. We called NetApp when we tried to re-use the disk and the system would not accept it. NetApp told us the only way for us to continue was to actually replace the disk.
Bryan
-----Original Message-----
From: owner-toasters@mathworks.com [mailto:owner-toasters@mathworks.com] On Behalf Of Robert Borowicz
Sent: Monday, October 20, 2003 9:42 AM
To: toasters@mathworks.com
Subject: Manually failing disks
Is there an issue with manually failing disks on, say, an unused volume for testing purposes? I.e., I want to test my spare disks and also observe volume rebuild time.
I remember that NetApp disks store their failure data. If I manually fail a disk several times, do I risk maxing out an internal "number of failures" register on the disk?
DOT version: 6.3.1
Disks: DS14 - 72 GB
Type/Firmware: X235_SCHT5073F10 NA04
TIA
-Rob
bryan_bahnmiller@agilent.com wrote:
> Rob,
>
> I remember testing the process of failing disks on our test filer - an F87. When we did this, the filer wrote something somewhere (I can't remember offhand) and we had to trade in the disk. We called NetApp when we tried to re-use the disk and the system would not accept it. NetApp told us the only way for us to continue was to actually replace the disk.
To see failed disks:

vol status -f

To bring back a "known good" disk (disk unfail is an advanced-mode command, so switch privilege levels first):

priv set advanced
disk unfail <DISK>
priv set
I manually fail disks, often on R100s, to alter which disks belong to which RAID groups (it matters on the R100, for performance and reliability).
Be sure to turn off autosupport before you "disk fail <DISK>", unless you want to open a case with NetApp ;-)
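Putting the above together, a minimal console sequence for a fail/rebuild test might look like the following. This is a sketch, not a tested procedure: it assumes the standard Data ONTAP 6.x console commands quoted in this thread, that "autosupport.enable" is the relevant option name on your release, and <DISK> is a placeholder for the actual disk ID (e.g. from "sysconfig -r"):

```
# Disable AutoSupport so the manual failure does not phone home to NetApp
options autosupport.enable off

# Fail the disk; a hot spare should be pulled in and the RAID group rebuilt
disk fail <DISK>

# Watch the failed-disk list and the rebuild progress
vol status -f
sysconfig -r

# When done, unfail the disk (advanced mode) and re-enable AutoSupport
priv set advanced
disk unfail <DISK>
priv set
options autosupport.enable on
```

Whether the unfail sticks may depend on what the filer wrote to the disk when it was failed, per Bryan's experience above, so test this on a non-production volume first.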
-skottie