We use DLT7000 drives -- the compression devices -- attached to the NetApp. Right now the master for these backups is a Sun Ultra 2. It really does not do much other than schedule jobs, perform database updates, etc. It has to build the list of files to back up, which requires a fair amount of temporary disk space and memory. During the backups, most (all?) of the CPU cycles come from the filers -- all 760s.
Here is some info on our sun:
Sun Ultra 2 UPA/SBus (UltraSPARC-II 296MHz) with 384MB of memory. We have a 70GB RAID array for /usr/openv to allow room for database growth. I thought it had more memory; some may have been "borrowed" for an unexpected project.
We do stagger our backups so that all backup jobs are not competing for resources at the same time. Each filer volume is its own class, for a number of reasons. Originally we tried to limit our volumes to a size that could be restored within a 4-hour window... in reality, it is probably closer to 6 hours on some volumes, and we have a couple of "legacy" volumes from pre-multi-volume days that are about 150GB. We have tested restores and see around 15-17GB/hour, depending on the data.
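The restore-window math above can be sketched quickly. This is just a hypothetical back-of-the-envelope calculation using the throughput quoted in this email (15-17GB/hour); the function name and volume sizes are illustrative, not anything from our actual tooling.

```python
def restore_hours(volume_gb, rate_gb_per_hour):
    """Estimated hours to restore a volume at a given throughput."""
    return volume_gb / rate_gb_per_hour

# Largest volume that fits a 4-hour window at the low end of the rate:
max_volume_gb = 4 * 15          # 60 GB

# One of the ~150GB "legacy" volumes at the same 15GB/hour rate:
legacy_hours = restore_hours(150, 15)   # 10 hours

print(max_volume_gb, legacy_hours)
```

At the observed rates, the 150GB legacy volumes blow well past the 4-hour target, which is exactly why we moved to smaller per-volume classes.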
Kelly
--
Kelly Wyatt, Kelly.Wyatt@SAS.com
Senior Systems Programmer, Strategic Recovery
SAS Institute Inc. / SAS Campus Drive / Cary, NC 27513
http://www.sas.com
SAS... The Power to Know
-----Original Message-----
From: Jay Orr [mailto:orrjl@stl.nexen.com]
Sent: Tuesday, July 25, 2000 9:16 AM
To: Kelly Wyatt
Cc: 'Brian Hostetter'; toasters@mathworks.com
Subject: RE: Ndmp vs. Nfs backups.
Thanks for the data; it is quite useful! What kind of storage medium are you using (e.g. DLT? AIT?) and what is your backup hardware (i.e. what are the specs of the system you run the backup software on)?
This is the kind of info I can't even get the sales 'droids to get for me...
-----------
Jay Orr
Systems Administrator
Fujitsu Nexion Inc.
St. Louis, MO