Hi!
Thanks for the info! Scary stuff, this backup! I have just checked the logs to find that the backup ran for 32 hours this weekend!
Anyway, how have you achieved the splitting of your file system? I think this is going to cause us problems due to the rather unintelligent autoloader we use to drive the backup... any advice on this?
Simon
-----Original Message-----
From: Marion Hakanson [mailto:hakanson@cse.ogi.edu]
Sent: 01 September 2000 18:24
To: Clawson, Simon
Cc: toasters@mathworks.com
Subject: Re: Budtool woes
. . . I am backing up a single-volume filer with nearly 200GB of data - my backup is constantly failing due to the process running out of time... anyone seen this? Is there a way round it? Can I increase the allowable time? Can I split the backup into several levels? . . .
Simon,
I see that you've probably solved your timeout problem, but it looks like you've got another one as well. I've just upgraded to bt-4.6.1a, and the Release Notes state that it does not support backups that get split across more than 3 tapes.
I'll spare you the details right now, since you can go look at my original posting to "toasters" on this topic: http://teaparty.mathworks.com:1999/toasters/6612.html
Apparently a large backup of this type will work fine, but you may not be able to do recoveries from it, under certain mysterious circumstances (which I've not yet experienced, thankfully).
The workaround is to break up that single backup into backups of subdirectories of that large volume (you don't have to break up the volume itself). Solaris ufsdump can do this, but doesn't support incremental (non-full) dumps of subdirectories. Fortunately (by the tests I've done here), NetApp's dump (both console/cmd-line and NDMP) can do both fulls and incrementals of subdirectories.
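For what it's worth, here's a rough sketch of how the per-subdirectory dumps might be scripted from an admin host, driving the filer's console dump over rsh, one dump per subdirectory onto the same no-rewind device. The filer name, volume, subdirectory names, tape device, and the dump option letters are all made-up placeholders for illustration (not our actual setup), and the BSD-style option string is an assumption, so check it against the ONTAP dump docs before trusting any of this:

#!/usr/bin/env python3
# Hypothetical sketch only: filer name, volume, subdirectory list, tape
# device, and dump option letters are assumptions for illustration --
# verify against the ONTAP documentation before using anything like this.

import subprocess

FILER = "filer1"                      # hypothetical filer hostname
VOLUME = "/vol/projects"              # hypothetical large volume
SUBDIRS = ["dir1", "dir2", "dir3"]    # top-level dirs/qtrees to split on
TAPE = "nrst0a"                       # no-rewind device, so dumps stack on one tape
LEVEL = 0                             # 0 = full; 1-9 for incrementals

for subdir in SUBDIRS:
    path = "%s/%s" % (VOLUME, subdir)
    # Assumed BSD-style option string: level digit, 'u' to update the
    # filer's /etc/dumpdates, 'f' to name the destination device.
    cmd = ["rsh", FILER, "dump", "%duf" % LEVEL, TAPE, path]
    print("running:", " ".join(cmd))
    if subprocess.call(cmd) != 0:
        print("dump of %s failed, stopping" % path)
        break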
I've since noticed that NetApp themselves recommend against backing up large volumes (>100GB) in a single "dump" run. Here's where:
http://now.netapp.com/NOW/knowledge/docs/ontap/rel536r2/html/sag/dump7.htm
Luckily for us, our large volume (185GB, currently) is broken up into about ten qtrees & a couple other directories. I think the 10-12 dumps actually complete slightly faster than the previous single large dump.
BTW, how's it going in your search for a BudTool replacement? A PDC engineer told me that the new Legato Networker 6.0 NDMP support will only do backups to either a NetApp-attached library or to another NDMP-enabled system (and _not_ to a Networker media server). It doesn't support a split-library configuration at all (which is what we have our BudTool/NetApp setup doing). Because of that, they want to sell me Veritas NetBackup instead.
I've got a few other leads in the works, and I'd be happy to compare notes on what you've found so far.
Good luck and regards,
. . . Anyway, how have you achieved the splitting of your file system? I think this is going to cause us problems due to the rather unintelligent autoloader we use to drive the backup... any advice on this?
Simon
Our largish volume has about a dozen directories at its top level; most of them are qtrees, and the space is split relatively evenly across those 12 directories. So rather than backing up just /vol/projects/, I tell BudTool to back up /vol/projects/dir1/, then /vol/projects/dir2/, and so on.
It's a slightly ugly workaround, with the biggest risk being that someone will add another subdirectory without telling BudTool to back it up. That's not a big concern here, since those qtree definitions are pretty static.
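If it ever does become a concern, a quick-and-dirty check could catch it: something like the Python sketch below, run from a host that NFS-mounts the volume, comparing the actual top-level directories against the list BudTool has been told about. The mount point and the configured list are placeholders, not our real configuration:

#!/usr/bin/env python3
# Hypothetical sanity check: warn about top-level directories of the
# volume that no BudTool backup definition covers.  MOUNT_POINT and
# CONFIGURED are placeholders, not our real setup.

import os
import sys

MOUNT_POINT = "/mnt/projects"          # volume NFS-mounted on an admin host
CONFIGURED = {"dir1", "dir2", "dir3"}  # subdirectories BudTool backs up today

actual = set(
    name for name in os.listdir(MOUNT_POINT)
    if os.path.isdir(os.path.join(MOUNT_POINT, name))
)

missing = sorted(actual - CONFIGURED)
if missing:
    print("WARNING: not covered by any backup definition:", ", ".join(missing))
    sys.exit(1)

print("all top-level directories are covered")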
Note that most Unix-based "dump" commands won't be able to do incremental backups on a subdirectory of a filesystem. But the tests here show that NetApp's "dump" (and also the NDMP dump) appears to handle incrementals of subdirectories just fine. The "right thing" even shows up in the NetApp's /etc/dumpdates file.
Regards,