The "substantial time increase" you mention is most likely caused by the creation of the snapshots before each individual backup. With large numbers of directories, this can really add up (as it has for us). We've been waiting on Netapp to fix bug #15998, which will allow us to directly dump the pre-existing snapshots *and* have /etc/dumpdates updated correctly :P You can work around it by manually editting /etc/dumpdates and stripping out the .snapshot portion of "/vol/<volume>/.snapshot/<identifier>" but we were hoping they'd have fixed it by now. Anyway, dumping the pre-existing snapshots speeds things up immensely with large numbers of dump requests. Hope that helps.
Bug #15998
TITLE: ndmpd updates the dumpdates table incorrectly when dumping pre-existing snapshots
DESCRIPTION: In 5.3, if a dump of a pre-existing snapshot (such as /vol/vol0/.snapshot/hourly.0) is taken, the dumpdates table should be updated without the ".snapshot/hourly.0" portion. It should only show:
/vol/vol0/ 0 Sun Jan 1 00:00:00 1999
However, ndmpd prints out
/vol/vol0/.snapshot/hourly.0 0 Sun Jan 1 00:00:00 1999
In message 37FA25BA.82769EDE@lucent.com, Dave Heiland writes:
We're using Netbackup to back up our F760 filer via NDMP to a directly connected STK robot. Full backups take about 8 hours - we tell Netbackup to back up the entire filer. That's just about OK, but the problem is that restores of user files can also take up to 8 hours, which is a bit long. The problem may get even worse as we might be moving from 70-80GB of user space to about 200GB, which would mean any restores would stop the night's backup and take far too long.
Does anyone have any suggestions as to how to rearrange the backups to reduce this time? We've considered backing up individual directories, but the backup time increases substantially.
Dave
David J. Heiland
Lucent Technologies, CIO Department
Malmesbury, England
Tel: +44 (0)1666 83-2504
dheiland@lucent.com