A few mails condensed here:
On Tue 15 Feb, 2000, Jay Orr orrjl@stl.nexen.com wrote:
My understanding of Budtool and Quick Restore was that you could pull tar files off of the tape directly. Generally, the desire for this ability is:
1) if the machine with the backup software goes down, you can still do recoveries, and 2) you can restore to differing systems.
That was what used to be my line on tape formats too. Then I investigated how Legato could be used in the case of the backup server itself dying.
The key is to know (have written for yourself) a procedure for rebuilding a backup server efficiently. I used to think Legato was a pain at this, but it turns out that if you set up a server reasonably you can follow their Disaster Recovery Guide and be back in business pretty quickly - in about as much time as it would take you to find the right tapes somehow and trawl them by hand using tar or dump/restore. I'm assured ADSM and Veritas and presumably everyone else have their own Way.
I will grant that the Veritas solution of interleaving streams does sound well thought out and probably more efficient, but from an admin's perspective, you have to figure some day the worst will happen.
AFAIK the interleaving thing was born of expediency, for all the backup product vendors: for backing up from slow individual clients to tapes that only work efficiently when streaming. Tape lifetimes are also reduced sharply when tapes are constantly stopped, rewound a bit and set to streaming again. The fact that most admins like to get their backups done in the same windows, using the same policies, and relying on the backup software to work at least some of it out for them, means many backups get scheduled for the same time anyway. On a more banal note, isn't it tempting to set client parallelism up a notch? It sounds sexy and, you never know, it *might* help.
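To put some (entirely made-up) numbers on that streaming argument, here's a quick sketch - not any vendor's algorithm, just an illustration of why one slow client can't keep a drive streaming but a handful interleaved together can:

    # Toy model, invented numbers: a drive "shoe-shines" (stops and repositions)
    # whenever its feed drops below some minimum streaming rate.
    DRIVE_MIN_STREAM_MB_S = 5.0                 # assumed minimum rate to keep streaming
    clients_mb_s = [0.8, 1.2, 0.6, 1.5, 0.9]    # hypothetical per-client backup rates

    def can_stream(rates):
        """True if the combined feed keeps the tape moving."""
        return sum(rates) >= DRIVE_MIN_STREAM_MB_S

    for n in range(1, len(clients_mb_s) + 1):
        subset = clients_mb_s[:n]
        state = "streams" if can_stream(subset) else "shoe-shines"
        print(f"{n} client(s) interleaved: {sum(subset):.1f} MB/s -> {state}")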
Reminds me of Microsoft's argument against Solaris: "Sun is trying to resurrect the 1960s-era mainframe paradigm and return computing to old-style timesharing." (www.dot-truth.com) Amazing how marketing appeals to feelings, not empirical data.
Always the way. Often works too - like when vendors sell to managers who don't then involve their technical staff in product evaluation and selection cycles. Or worse, they do but it's all a sham exercise, which means the time was wasted on top of the bogus decision. I'm not bitter, really. 8)
Until the system with the backup-product fails....
Ah, well I refer to my point about having a good procedure for dealing with that.
IMHO it's more important for business continuity to have the whole backup service back than it is to get the first restore done - because Murphy will most likely arrange for more restore requests than you've ever seen in exactly that window, just after you've decided that doing that one restore first and *then* getting the backup system going again would be the best policy.
Been there. Got bitten.
But all that said, your comments are appreciated and give me food for thought. I guess it weighs in as one of the older Unix paradoxes - ease vs. utility (can't think of a better way to phrase that). Sure, who wouldn't want the easiest-to-use backup software? But from an admin perspective, we have to be able to pick up the pieces if it all explodes and put it back together ASAP.
Yup it's a balancing act - the two aren't necessarily mutually exclusive goals though.
-- End of excerpt from Jay Orr
On Tue 15 Feb, 2000, Steve Kappel steve.kappel@raistlin.min.ov.com wrote:
Interleaving (multiplexing is the term in NetBackup) is optional in NetBackup. You can turn it off for any class.
This is good to know - though I guess you have to be confident of the ability of a given client to supply data to keep the tapes fed.
NetBackup NDMP does NOT use multiplexing. Also note that for NDMP the tape format is determined by the NDMP server vendor.
NDMP uses dump format as well, doesn't it? So all the propositions above get a little messed around here anyhoo.
-- End of excerpt from Steve Kappel
On Tue 15 Feb, 2000, "Eyal Traitel" eyal.traitel@motorola.com wrote:
Are there any plans / ideas from Veritas on improving backup/restore times of filers?
Here's hoping.
It seems that the tape/backup s/w is not keeping up well with the filers' huge sizes?
Oh the number of times I've bewailed this too.
What is the best configuration that can give us best performance ?
Anyone fancy a quick poll on how fast they can back up how much? (A rough back-of-envelope of the window arithmetic follows this excerpt.)
-- End of excerpt from "Eyal Traitel"
On Tue, 15 Feb 2000, mark wrote:
AFAIK The interleaving thing was born of expediency, for all the backup product vendors: for backing up from slow individual clients to tapes that only work efficiently when streaming.
What are people's experiences with non-streaming performance of various kinds of tape drives? Do some mechanisms handle lack of streaming better than others? For example, I would think that a DLT7000 would suffer horribly if you don't feed it data fast enough, because it takes a relatively long time to stop the tape, rewind and reposition, and bring the tape back up to speed. A helical-scan drive would be less affected, since the tape speed is very low (and it can thus stop/start the tape very quickly). The tape speed difference is something on the order of 100-fold (about 4 m/s for a DLT7000, and 4 cm/s for a Mammoth, IIRC). Have people found helical-scan drives to be much less susceptible to streaming-related performance degradation?
Brian Tao wrote:
What are people's experiences with non-streaming performance of various kinds of tape drives? Do some mechanisms handle lack of streaming better than others?
Prior to our current Networker/DLT7000 backup system we had an Epoch/Exabyte system, and my feeling is that it was both more reliable and faster; of course, we backed up considerably less data then.
On a related note, has anyone any comments on VXA tapes and drives? Their web site (http://www.vxatape.com) makes a good argument for them, but then again DLT sounded good before I started to use it.
/Michael
On Thu, 17 Feb 2000, Michael Salmon wrote:
On a related note, has anyone any comments on VXA tapes and drives? Their web site (http://www.vxatape.com) makes a good argument for them, but then again DLT sounded good before I started to use it.
Looks good on paper, but I don't know of anyone actually using that mechanism. I'll let someone else dive in before I risk my data with it.
Brian Tao wrote:
I would think that a DLT7000 would suffer horribly if you don't feed it data fast enough because it takes a relatively long time to stop the tape, rewind and reposition, and bring the tape back up to speed.
It's all buffer dependent. Early 4000s didn't have enough buffer to do this. If you're draining the buffer, then you're not sending data fast enough, so the drive needs to be able to buffer data at tape speed for long enough to go through a stop, rewind, start cycle. I've never noticed any real penalty on a 7000. Their buffers seem to be large enough that the tape doesn't slow the machine down.
I am having a problem on one of my machines and I was wondering if anyone has encountered anything similar.
I have an F740 running 5.3.4R2, with a directly attached AIT-2 tape library. When I first tested the tape drive it performed wonderfully. Once I was happy with the 740 and with the tape drive, I moved the bulk of the data from a pair of old F230's over to the F740. The problem I then encountered is that when I am dumping to the tape it writes 50 gig and then aborts. I am using the "36C" AIT-2 tapes and I am accessing the tape drive using device "nrst0a", so I should be able to write approximately 72 gig of data to the tape. I have tried several different tapes and get the same results. I also tried setting the tape drive into DLT7000 emulation and I still get the same results.
The question now is: is there something wrong with the tape drive, or is this a problem with 5.3.4R2? Should I try upgrading to 5.3.4R3? I have another F740 with a DLT7000 library on it and it seems to work just fine. Did I blunder badly by switching to AIT-2?
Please help!!!!!
The problem I then encountered is that when I am dumping to the tape it writes 50 gig and then aborts. [...] I should be able to write approximately 72 gig of data to the tape.
Ah! Unless your data is very well formed (almost pure ASCII), you're getting about what we expect on a drive. Remember that some files are already compressed (zip files, jpg files, some movie formats); you can't meaningfully compress them again. In fact, as far as we can tell, a file system full of .gz files seems to grow when written to a compressing tape.
A 13.2 GB filesystem writes 13.5 GB to the drive and uses 15.1 GB of tape; a 13.6 GB filesystem writes 13.8 GB to the drive and uses 15.4 GB of tape.
I'd allow 50 GB per drive; you may actually get a little more than that, but running out of tape is a royal pain.
-- David H. Brierley Raytheon Electronic Systems, Naval & Maritime Integrated Systems Engineering Technology, Operating Systems Support Group
----- Stephen C. Woods; UCLA SEASnet; 2567 Boelter Hall; LA CA 90095; (310)-825-8614 Finger for public key: scw@cirrus.seas.ucla.edu; Internet mail: scw@SEAS.UCLA.EDU
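A rough sketch of the arithmetic in that reply - the data mix and compression ratios below are invented for illustration; only the 36 GB native capacity of a 36C tape and the "already-compressed data expands slightly" observation come from the messages above:

    # Rough capacity estimate for a "36/72 GB" AIT-2 tape.
    NATIVE_GB = 36.0

    data_gb = {"text":    20.0,    # hypothetical mix: compresses well
               "binary":  30.0,    # compresses a little
               "zip/jpg": 20.0}    # already compressed
    ratio = {"text": 2.0, "binary": 1.3, "zip/jpg": 0.9}   # assumed ratios; 0.9 = slight expansion

    native_needed = sum(size / ratio[kind] for kind, size in data_gb.items())
    per_tape = sum(data_gb.values()) * NATIVE_GB / native_needed
    print(f"{sum(data_gb.values()):.0f} GB of data needs {native_needed:.1f} GB of native tape,")
    print(f"so one tape holds roughly {per_tape:.0f} GB of this mix")
    # ~46 GB here: much nearer the ~50 GB seen above than the 72 GB on the label.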
Arthur wrote:
So the drive needs to be able to buffer data at tape speed for long enough to go through a stop, rewind, start cycle. I've never noticed any real penalty on a 7000.
The early DLT 7000s also had this problem, since they came with a 4 MB buffer on the drive. At full speed data rates, this wasn't enough to keep the drive from throttling the sender in certain cases.
The current 7k drives all have 8 MB buffers, which lets them hold enough data if they have to rewind and get back up to speed. That takes about 1.5 seconds to do.
My reference is the "DLT University Handbook".
John John Stoffel - Senior Unix Systems Administrator - Lucent Technologies stoffel@lucent.com - http://www.lucent.com - 978-952-7548 john.stoffel@ascend.com - http://www.ascend.com
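Putting the buffer argument into one worked figure: assuming the commonly quoted ~5 MB/s native DLT7000 rate (my assumption, not from the thread) and the ~1.5 second reposition time John mentions, the buffer has to absorb roughly 7.5 MB while the tape repositions - which is why the 4 MB drives throttle the sender and the 8 MB drives ride it out. A minimal sketch:

    # Buffer needed = host data rate x time spent repositioning.
    host_rate_mb_s = 5.0    # assumed DLT7000 native rate
    reposition_s = 1.5      # reposition time quoted above

    needed_mb = host_rate_mb_s * reposition_s
    print(f"buffer needed: about {needed_mb:.1f} MB")
    # ~7.5 MB: a 4 MB drive has to throttle the sender, an 8 MB drive rides it out.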
John Stoffel writes:
The current 7k drives all have 8 MB buffers, which lets them hold enough data if they have to rewind and get back up to speed.
Any idea how to determine how large the buffers are?
Luke> Any idea how to determine how large the buffers are?
I'm not sure; you'll probably have to go and beat on your vendor. There might be a SCSI PROM page on the drive which gives those specifics, but more likely just giving them the serial number of the drive will let them tell you what it's got.
John John Stoffel - Senior Unix Systems Administrator - Lucent Technologies stoffel@lucent.com - http://www.lucent.com - 978-952-7548 john.stoffel@ascend.com - http://www.ascend.com
Any idea how to determine how large the buffers are?
From the Networker user archives (where I see that John Stoffel and I had a very similar exchange 8 months ago) :-)
[...] Model numbers for the DLT7000 have the form TH6xy-zz. If "y" in TH6xy-zz is either an A or B, it's a 4MB drive.
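If you wanted to script that check, a tiny hypothetical helper - the TH6xy-zz form and the A/B rule are as quoted above; the model strings and the 8 MB fallback are assumptions:

    # Hypothetical helper: applies the TH6xy-zz rule quoted above.
    def dlt7000_buffer_mb(model: str) -> int:
        """Return 4 for the small-buffer drives ("y" is A or B), else assume 8."""
        return 4 if model[4].upper() in ("A", "B") else 8

    print(dlt7000_buffer_mb("TH6AB-YF"))   # invented model string -> 4
    print(dlt7000_buffer_mb("TH6XC-YF"))   # invented model string -> 8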