Hey all,

We had a number of volume creates run against our 8.0.3 c-mode cluster (admittedly ancient, but not upgraded due to lack of vendor support) during a period of excessive load, while the target filer/aggregate was having some sort of problem.

This led to syslog entries like the following:

Feb 27 19:06:34 filer2a mgwd: mgmtgwd.jobmgr.jobcomplete.failure: Job "Vol Create" [id 208390] (Create v181771) completed unsuccessfully: Failed to create the volume on node. Reason: Node is not healthy. This may leave the volume record in the Volume Location Database. (1).

Whatever problem this node was having at the time corrected itself shortly thereafter.

The failed job appears not to have created the actual volume on disk, but it did leave an orphaned record of that volume in the VLDB.

If I do a 'vol show -volume v181771' it comes back with:

                    Virtual Server Name: dd
                            Volume Name: v181771
                         Aggregate Name: near3a2
                            Volume Size: -
                     Volume Data Set ID: 22798575
              Volume Master Data Set ID: 2170062839
                           Volume State: -
                            Volume Type: -
                           Volume Style: flex
                       Volume Ownership: cluster
                          Export Policy: default
                                User ID: -
                               Group ID: -
                         Security Style: -
                       Unix Permissions: -
                          Junction Path: -
                   Junction Path Source: -
                        Junction Active: -
                          Parent Volume: -
                                Comment:
                         Available Size: -
                             Total Size: -
                              Used Size: -
                        Used Percentage: -
    Total Files (for user-visible data): -
     Files Used (for user-visible data): -
                  Space Guarantee Style: -
              Space Guarantee In Effect: -
Percent of Space Reserved for Snapshots: -
       Used Percent of Snapshot Reserve: -
                        Snapshot Policy: daily
                          Creation Time: -
            Anti-Virus On-Access Policy: -
       Inconsistency in the file system: -



But if I do a 'node run local vol status v181771' on the node that houses the aggregate in question, it says no volume exists.

Running 'set diag; volume lost-found show' doesn't return any results either.

So it would appear we do indeed have a number of bogus volume entries in the VLDB.
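For reference, something like the quick script below can pull the affected volume names out of the syslog entries, so we know which VLDB records to chase. The log path and exact message format are just assumptions based on the entry shown above:

    import re

    # Match the failed "Vol Create" job messages and capture the volume name,
    # e.g. ... Job "Vol Create" [id 208390] (Create v181771) completed unsuccessfully ...
    pattern = re.compile(
        r'Job "Vol Create" \[id \d+\] \(Create (\S+)\) completed unsuccessfully')

    volumes = set()
    with open("/var/log/messages") as log:   # adjust to wherever syslog lands
        for line in log:
            match = pattern.search(line)
            if match:
                volumes.add(match.group(1))

    for name in sorted(volumes):
        print(name)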

Doing a 'vol delete v181771' yields:

ERROR: command failed on entry "dd v181771": Unable to lookup volume attributes for volume v181771. Reason: success

The question is: is there a sane/safe way to clear these entries out of the VLDB?

Thanks for any help!