Is there an issue with manually failing disks on, say, an unused volume for testing purposes? I.e., I want to test my spare disks and also observe volume rebuild time.
If I manually fail a disk, I remember that NetApp disks store their failure data. Do I risk maxing out an internal "number of failures" register on the disks if I manually fail a disk several times?
DOT version: 6.3.1
Disks: DS14, 72 GB
Type/firmware: X235_SCHT5073F10 NA04
TIA
-Rob
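
[For reference, the manual-fail test would look roughly like this on a 7-mode-era filer. This is a sketch only: the disk name 8a.16 is made up, and disk unfail is an advanced-privilege command whose exact behavior on 6.3.1 should be verified first.]

  # Manually fail a disk and watch reconstruction onto a spare
  disk fail 8a.16
  vol status -r        # shows RAID state and reconstruction progress
  sysconfig -r         # alternate view of the same RAID layout

  # Return the disk to the spare pool afterwards
  priv set advanced
  disk unfail 8a.16
  priv set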
Most of the time I don't use the disk fail command to test a spare. I just pull a data (or parity) disk from the filer; once the filer decides the disk is missing, it starts reconstructing onto the spare.
The nice part of this approach is that my disk is never marked failed, so I can reinsert it into the filer later.
The filer recognizes the disk as part of the volume but sees that its "timestamp" is stale, so it turns it into a spare disk, and everything keeps running without soft-failing a disk. A rough sequence for that pull-and-reinsert test is sketched below.
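
[A sketch of the pull-and-reinsert test, assuming the same 7-mode-style commands; disk names and shelf layout are illustrative.]

  vol status -s        # confirm a hot spare is available before pulling anything

  # Physically pull a data or parity disk from the shelf;
  # the filer notices it is missing and reconstructs onto the spare.
  vol status -r        # watch reconstruction progress

  # After the rebuild completes, reinsert the pulled disk.
  # Its RAID label timestamp is stale, so it comes back as a spare:
  vol status -s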