Hi!
We have Budtool 4.6.1, and are currently looking for a replacement. However...
I am backing up a single-volume filer with nearly 200GB of data, and my backup is constantly failing because the process runs out of time. Has anyone seen this? Is there a way round it? Can I increase the allowable time? Can I split the backup into several levels?
HELP!
Simon
Simon Clawson, Renoir Group Systems Administrator, Mentor Graphics UK, Rivergate, London Road, Newbury, Berkshire RG14 2QB
Simon,
Please look at the file
$BTHOME/bud/goserver.end.filter
and look for the string BT_REQ_CMDTIMEOUT
Changing it to the number of seconds that you need will solve your problem.
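For what it's worth, here's that edit sketched as a shell session. It operates on a stand-in copy of the file, since the real filter file's exact syntax may vary by install; the NAME=VALUE form and the 28800-second value are assumptions for illustration only.

```shell
# Sketch only: edit a stand-in copy of goserver.end.filter. The NAME=VALUE
# syntax assumed here may differ from your BudTool install -- check first.
FILTER=/tmp/goserver.end.filter               # stand-in for $BTHOME/bud/goserver.end.filter
echo 'BT_REQ_CMDTIMEOUT=3600' > "$FILTER"     # pretend the current timeout is 1 hour

# Raise the timeout to 8 hours (28800 seconds -- a value chosen for illustration):
sed 's/^BT_REQ_CMDTIMEOUT=.*/BT_REQ_CMDTIMEOUT=28800/' "$FILTER" > "$FILTER.new" \
  && mv "$FILTER.new" "$FILTER"

grep BT_REQ_CMDTIMEOUT "$FILTER"              # prints the new setting
```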
Paul Lupa
"Clawson, Simon" wrote:
. . . I am backing up a single-volume filer with nearly 200GB of data, and my backup is constantly failing because the process runs out of time . . .
. . . I am backing up a single-volume filer with nearly 200GB of data, and my backup is constantly failing because the process runs out of time. Has anyone seen this? Is there a way round it? Can I increase the allowable time? Can I split the backup into several levels? . . .
Simon,
I see that you've probably solved your timeout problem, but you've probably got another problem as well. I've just upgraded to bt-4.6.1a, and the Release Notes state that it does not support backups that get split across more than 3 tapes.
I'll spare you the details right now, since you can go look at my original posting to "toasters" on this topic: http://teaparty.mathworks.com:1999/toasters/6612.html
Apparently a large backup of this type will work fine, but you may not be able to do recoveries from it, under certain mysterious circumstances (which I've not yet experienced, thankfully).
The workaround is to break up that single backup into backups of subdirectories of that large volume (you don't have to break up the volume itself). Solaris ufsdump can do this, but it doesn't support incremental (non-full) dumps of subdirectories. Fortunately (by the tests I've done here), NetApp's dump (both console/command-line and NDMP) can do both fulls and incrementals of subdirectories.
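To make that concrete, per-subtree dumps from the filer console might look something like this (the tape device and path names here are hypothetical, not from anyone's actual setup):

```
toaster> dump 0f rst0a /vol/bigvol/qtree1    # level 0 (full) of one subtree
toaster> dump 1f rst0a /vol/bigvol/qtree1    # level 1 (incremental) of the same subtree
```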
I've since noticed that NetApp themselves recommend against backing up large volumes (>100GB) in a single "dump" run. Here's where:
http://now.netapp.com/NOW/knowledge/docs/ontap/rel536r2/html/sag/dump7.htm
Luckily for us, our large volume (185GB, currently) is broken up into about ten qtrees and a couple of other directories. I think the 10-12 dumps actually complete slightly faster than the previous single large dump.
BTW, how's it going in your search for a BudTool replacement? A PDC engineer told me that the new Legato Networker 6.0 NDMP support will only do backups to either a NetApp-attached library, or to another NDMP-enabled system (and _not_ to a Networker media server). It doesn't support a split-library configuration at all (which is what we have our BudTool/NetApp setup doing). Because of that, they want to sell me Veritas NetBackup instead.
I've got a few other leads in the works, and I'd be happy to compare notes on what you've found so far.
Good luck and regards,
Marion Hakanson wrote:
. . . the Release Notes state that it does not support backups that get split across more than 3 tapes. . . . The workaround is to break up that single backup into backups of subdirectories of that large volume . . .
-- Marion Hakanson hakanson@cse.ogi.edu CSE Computing Facilities
Marion,
We're running BudTool461-2Solaris on an E450 with an L11000, and one of our schedules comprises a single NDMP request from one of our F740s, which is 190+GB in size. A normal backup for this request spans 8-10 tapes, appending to and overwriting DLT7000s as necessary, and spans 6-7 tapes on tape copies.
We haven't had any problems dumping this request. However, if I'm reading your mail correctly, are you suggesting we would have problems restoring this request because of the number of tapes it spanned? Or was your message specific to 4.6.1a?
Also, as far as breaking up large filesystems goes: on our other F740 we have a home directory tree comprising 180-200 user dirs. We used to dump the home filesystem as one request, then we changed it within BudTool to individual requests for the user dirs. Since the set of home directories changes over time, we constructed a script to discover new home subdirs within this filesystem and automatically update the .buddb as necessary. We haven't had any problems with level 0s or level 1s.
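The discovery part of such a script might be sketched like this. The .buddb update step is BudTool-specific, so it's only marked by a comment; all paths and directory names below are stand-ins for illustration.

```shell
# Discovery sketch only: find home subdirectories not yet in a "known" list.
# The actual .buddb request update is BudTool-specific and not shown here.
HOME_MOUNT=/tmp/home_demo                  # stand-in for the mounted home volume
KNOWN=/tmp/known_home_dirs                 # dirs already in .buddb, one per line
rm -rf "$HOME_MOUNT"; mkdir -p "$HOME_MOUNT/alice" "$HOME_MOUNT/bob" "$HOME_MOUNT/carol"
printf 'alice\nbob\n' > "$KNOWN"

for d in "$HOME_MOUNT"/*/; do
  name=$(basename "$d")
  if ! grep -qx "$name" "$KNOWN"; then
    echo "new home dir: $name"             # here: add a BudTool request to .buddb
    echo "$name" >> "$KNOWN"
  fi
done
```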
Also, what helped to speed up all our NDMP requests was to back up the hourly.0 snapshots.
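Dumping from a snapshot reads a frozen, self-consistent image of the filesystem rather than the live tree, which avoids contention with active clients. A hypothetical example from the filer console (volume, qtree, and device names assumed):

```
toaster> dump 0f rst0a /vol/home/.snapshot/hourly.0/qtree1
```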
We're running BudTool461-2Solaris on an E450 with an L11000, and one of our schedules comprises a single NDMP request from one of our F740s, which is 190+GB in size. A normal backup for this request spans 8-10 tapes, appending to and overwriting DLT7000s as necessary, and spans 6-7 tapes on tape copies.
We haven't had any problems dumping this request. However, if I'm reading your mail correctly, are you suggesting we would have problems restoring this request because of the number of tapes it spanned? Or was your message specific to 4.6.1a?
Yes, it's evidently the restore which will fail if the backup image spans more than three tapes. Coincidentally, I just heard some more details from PDC today. They say that the problem ("limitation") was discovered when testing for the 4.6.1 release, and that it has to do with the Tapes DB or File History DB not handling the large quantities properly somehow. They didn't say so, but it sounds to me like if that's the case, it might be possible to do a manual restore, bypassing BudTool, if you knew on which tape(s) to find the data. Let's see Legato NetWorker do that!
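For the record, a manual filer-side restore of that kind (bypassing BudTool entirely) might look roughly like this, loading the spanned tapes in order when prompted; the device name is an assumption:

```
toaster> restore rf rst0a
```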
As far as I can tell, 4.6.1a is what was released, but it gets called "4.6.1" here and there.
Thanks for the other info; we seem to be doing OK with dumping the individual qtrees, both before and after upgrading to both the latest BudTool and the recommended ONTAP release (5.3.6R2). That oughta hold us 'til we replace BudTool.
Regards,