Hi Ben,
> Tape, to me, seems the more prudent solution, because it allows for offsite storage of the data (in case of fire, WW3, etc.) and we can maintain multiple independent copies of the data to guard against logical corruption, if it occurred. If logical corruption occurs we need to catch the problem before the snap to the NearStore happens or we're screwed, right?
Have you considered SnapVault?
http://www.netapp.com/products/filer/snapvault.html
It's a sibling to SnapMirror and provides online backups, normally resident on the NearStore. You could SnapVault from either filer (the production filer or the SnapMirror destination). It lets you keep onsite or offsite backup versions online in a readable (non-proprietary) format, so single-file restores can be performed quickly and easily, either by mounting the NearStore via NFS and/or CIFS, or by using one of the many SnapVault-compliant applications on the market today (it's controlled by an NDMP v4 extension).
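To make the single-file restore point concrete, here's a rough sketch of what it looks like over NFS. The hostname, qtree, snapshot name and file paths below are all made up for illustration; the restore is just a copy out of a Snapshot directory on the SnapVault destination:

```shell
# Mount the backup volume on the NearStore (hypothetical names),
# then copy the file you need out of the backup-version Snapshot.
mount nearstore:/vol/backups /mnt/backups
cp /mnt/backups/qtree1/.snapshot/sv_hourly.0/home/ben/report.doc \
   /restore/report.doc
```

No restore application, no tape mount, no proprietary format to unpack.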
Not that I want to put you off tape, but we're seeing a strong movement towards disk-based backup, with tape used for archival. SnapVault lets you set up a retention policy on your backups just as you would within your tape library, and the best bit is that after the baseline (level 0) backup, you only ever perform incrementals after that. These can be run every hour, send only changed data blocks over the network, and therefore store only changed blocks per backup version on the NearStore. Since each backup version is a NetApp Snapshot and you can have 250 Snapshots per volume, every backup is effectively a full, so no more incrementals to tape and no more incremental restores.
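For a flavour of the setup, here's a rough sketch of the CLI from memory; the hostnames, volume/qtree names and schedule name are invented, so check the Data ONTAP Data Protection guide for the exact syntax before relying on it:

```shell
# On the NearStore (secondary): enable SnapVault, then pull a
# baseline (level 0) of a qtree from the production filer.
options snapvault.enable on
snapvault start -S prodfiler:/vol/vol1/qtree1 /vol/backups/qtree1

# Retention policy: keep 22 hourly backup versions. The -x flag
# tells the secondary to fetch changed blocks from the primary
# before taking its own Snapshot, so every version is a usable full.
snapvault snap sched -x backups sv_hourly 22@0-22
```

After the baseline, each scheduled transfer moves only changed blocks, exactly as described above.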
Customers who've deployed SnapVault have often still upgraded their tape drives and libraries to the latest technology, such as the drives you suggest (LTO-2, 9940B and S-AIT), but with SnapVault in place they have been able to make a smaller investment in tape infrastructure and tape media.
I for one am not scared of S-AIT, but my previous position was at Ampex and we invented helical scan!
SnapVault, in the form of an Open Systems SnapVault agent (also controlled via the same NDMP v4 extension), also allows the NearStore to act as an online backup target for Windows (NT, W2K & 2003), Solaris, AIX, HP-UX, Linux and SGI IRIX.
Sorry if this comes across as too much of a plug, but I'd rather have you examine your alternatives to tape as backup. SnapVault (just like tape) would allow you to maintain multiple independent copies of the data to guard against logical corruption of the data, if it occurred.
Cheers, Grant
For information on NetApp's Data Protection Solutions, see http://www.netapp.com/solutions/data_protection.html.
=========== grant@netapp.com ===========
Grant Melvin
Data Protection Specialist
Network Appliance
475 East Java Drive
Sunnyvale, California 94089, USA
Tel: (408) 822-6761   Fax: (408) 822-4611
=== The evolution of storage.(tm) ===
-----Original Message-----
From: Ben Rockwood [mailto:BRockwood@homestead-inc.com]
Sent: Thursday, April 15, 2004 2:53 PM
To: toasters@mathworks.com
Subject: NetApp DR Tape Solutions
Hello Toasters.
I've got 2 940 filers with 5TB of capacity each currently, with the expectation that it will grow significantly over time. We're using SnapMirror from one filer to the other to provide redundancy, and this also serves as our "backup". I think it's time to move to a more substantial solution, namely for DR purposes. The options, due to budget, are NearStore or tape. Obviously the best solution would be to SnapMirror to a NearStore and then do tape backups from the spinning copy, but we can't afford that sort of solution right now.
Tape, to me, seems the more prudent solution, because it allows for offsite storage of the data (in case of fire, WW3, etc.) and we can maintain multiple independent copies of the data to guard against logical corruption, if it occurred. If logical corruption occurs we need to catch the problem before the snap to the NearStore happens or we're screwed, right?
So tape has the advantage given our constraints, and the question becomes which media to use. 9940B is out immediately due to cost ($40K/drive is insane), which leaves LTO Gen2 at 200G native and S-AIT at 500G native. Naturally I'm liking the S-AIT option because I can do offsites with a significantly lower media count, despite the fact that the media cost is higher. And because my data is primarily web content (HTML flat text and GIF/JPG images), I should see a pretty decent compression ratio.
This brings me to the big question: has anyone managing filers used S-AIT yet? It's really an argument between helical-scan tape technology (wear rate, failures, etc.) and the stable, proven linear technology of LTO Gen2. Despite the drawbacks inherent to helical scan, the 500G native capacity and possible 2.66:1 compression ratio (I expect that with my data 2:1 is adequate) make the advantages of S-AIT for filer backup really hard to ignore.
My estimates right now in the pro-con area between the two formats look like this:
Total Capacity of each Filer as currently configured: 5.2TB
Media Consumption per Filer:
  LTO Gen2: backup of full capacity without compression: 26 tapes (approx)
  S-AIT:    backup of full capacity without compression: 10 tapes (approx)
So assuming I get around 2:1 compression, which is reasonable, I can do full-capacity backups in 5 tapes per filer with S-AIT vs 13 with LTO! That means if I want to keep 4 complete backups at any given time, I need a library with an insane media count: with LTO we're looking at 26x4x2=208 slots, which compression brings down to 13x4x2=104. With S-AIT I need around 80 slots without compression, and roughly 40 with it. Add to that, with compression I might be able to cram all 5TB onto 4 tapes, which means I can buy a 4-drive library and stream to all 4 tapes simultaneously without a media change in the middle.
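For what it's worth, the slot arithmetic above can be sketched in a few lines. The capacities and counts are the estimates from this message, not vendor specs, and rounding up partial tapes gives slightly higher S-AIT figures than the "10 tapes (approx)" estimate:

```python
import math

CAPACITY_GB = 5200        # usable data per filer (approx 5.2 TB)
FILERS = 2
RETAINED_BACKUPS = 4      # complete backup sets kept in the library

def tapes_per_full(native_gb, compression=1.0):
    """Tapes needed for one full backup of a single filer."""
    return math.ceil(CAPACITY_GB / (native_gb * compression))

def slots_needed(native_gb, compression=1.0):
    """Library slots to hold all retained backups for both filers."""
    return tapes_per_full(native_gb, compression) * RETAINED_BACKUPS * FILERS

# LTO Gen2: 200 GB native
print(slots_needed(200))        # 26 * 4 * 2 = 208 slots, native
print(slots_needed(200, 2.0))   # 13 * 4 * 2 = 104 slots at 2:1

# S-AIT: 500 GB native (5200/500 rounds up to 11 tapes, not 10)
print(slots_needed(500))        # 11 * 4 * 2 = 88 slots, native
print(slots_needed(500, 2.0))   # 6 * 4 * 2 = 48 slots at 2:1
```

Either way, the S-AIT slot count is well under half the LTO figure.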
Anyway, obviously I'm feeling scared about S-AIT but cannot ignore the awesome potential it has for bulk data.
Can anyone please share their insights or opinions? Is anyone currently using S-AIT for filers?
Thanx.
benr.