We had a disk fail in a shelf on a filer whose support had expired. A case was created at the time, but it was closed without a replacement because the filer was out of support. However, under the support agreement on a new cluster, that same shelf is covered once it's attached to one of the new filers. Now that the shelf is attached to a new filer, all I get is:
Mon Sep 22 17:18:22 EDT [hostname: disk.init.failureBytes:error]: Disk 0b.23 failed due to failure byte setting.

I tried reseating the disk and only got the same message in the log again. After that message, the filer acts as though the disk isn't present at all, as shown in the commands below. How can I trick it into triggering an autosupport so my bad disk gets replaced? Thanks.
> disk fail 0b.23
disk fail: Disk 0b.23 not found
> disk show 0b.23
DISK OWNER POOL SERIAL NUMBER
------------ ------------- ----- -------------
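
For reference, the only manual trigger I'm aware of is a user-generated autosupport, something like the line below (assuming autosupport is enabled and configured on this filer). I doubt that alone would get a replacement dispatched, though, since it isn't a genuine disk-failure event:

> options autosupport.doit "disk 0b.23 failed - failure byte set"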