We killed those backups. After review, we determined that, if we had to, we could request and reconstitute the data from another source. We ended up just backing up our intellectual property, which took minutes and only a few tapes... until we retired an older system. Then we made it the "backup" and snapmirrored the important stuff to it... got rid of tapes!
--tmac
Tim McCarthy, Principal Consultant
Proud Member of the #NetAppATeam https://twitter.com/NetAppATeam
I Blog at TMACsRack https://tmacsrack.wordpress.com/
On Mon, May 15, 2017 at 8:51 PM, John Stoffel <john@stoffel.org> wrote:
Tim> Yepper. On volumes with lots of files, that ndmp history/catalog
Tim> pass can be crazy!
Hah! We've got a volume with 50 million files and it sucks for NDMP backups. Fulls are actually not terrible... but the index pass just takes forever.
Tim> I once tried to use ndmp backup on a volume with 500,000+
Tim> files. After 8 hours and a huge load (FAS6080) the ndmp gave out
Tim> and quit. Eight hours of building the index and it didn't finish.
This is the problem with *any* file-level backup. Unfortunately, there's no good solution, price-wise, for backing up Netapps that I'm aware of.
You could buy a cheaper pair of heads with lots and lots of cheap(ish) SATA storage, but it's still god-awful expensive. And the snapmirror licenses aren't cheap either.
In my engineering environments, I've been trying to encourage people to only back up what they need and to work in scratch areas instead, but it's hard to get people to change.
The Netapp is just so reliable that it really does keep data for years and years without problems. In most cases.
Heh.
But some $WORK orgs have off-site requirements, and they're not willing to pay the price for a remote co-lo Netapp and storage and the bandwidth to make backups work reliably and quickly enough. So off to tape it is. Makes JSOX happy too...