Mostly it's the amount of data that needs to be transmitted. Of course, you'll also have to 'revisit' deduped blocks during this file-level transfer, but chances are they're still in cache.
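For a rough sense of what that means in wall-clock time, here's a back-of-envelope sketch (plain Python, sizes taken from the original question; the effective-throughput figures are assumptions for illustration, not measurements):

```python
# Back-of-envelope: an NDMP (file-level) dump moves the logical, hydrated
# data, not the deduplicated on-disk footprint.
TB = 1000**4
GB = 1000**3

hydrated_bytes = 1.58 * TB   # logical size from the original question
deduped_bytes  = 813 * GB    # on-disk footprint after dedupe

# Assumed effective throughputs (illustrative): 10GbE line rate is ~1.25 GB/s,
# but NDMP dumps are usually limited by dump-phase overhead, disk reads, and
# the backup target well before the wire is saturated.
for label, gbps in [("10GbE line rate", 10.0), ("assumed NDMP stream", 2.0)]:
    bytes_per_sec = gbps * 1e9 / 8
    hours_hydrated = hydrated_bytes / bytes_per_sec / 3600
    hours_deduped  = deduped_bytes / bytes_per_sec / 3600
    print(f"{label}: hydrated {hours_hydrated:.2f} h vs deduped {hours_deduped:.2f} h")
```

The point is just that the dump has to move the full 1.58TB of file data, so the 813GB deduped figure isn't the right denominator when judging how fast the dump "should" go.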
Sebastian
On 21.12.2012 15:37, Scott Eno wrote:
Thanks for the response.
This may be a silly question, but where/what is the bottleneck in the re-hydration process? The CPU on the controller? The disks?
I don't really see extra CPU activity that matches the time of the dump.
On Dec 21, 2012, at 9:30 AM, Fred Grieco <fredgrieco@yahoo.com> wrote:
Yes. Dedupe is at the block level, but NDMP is a file-level backup, so an NDMP-based backup is backing up "hydrated" file data. A block-level backup (really only SnapVault/SnapMirror are available for NetApp) would be faster. But then again, there are efficiencies with compression during most backups...
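To make the block-level vs. file-level distinction concrete, here's a minimal sketch (hypothetical file/block layout, plain Python) of why a file-level walk sends shared blocks again and again while a block-level transfer would send each unique block once:

```python
# Minimal model: files are lists of block IDs; deduplication means several
# files can reference the same physical block on disk.
BLOCK_SIZE = 4096  # bytes, a typical WAFL block size

# Hypothetical layout: three files sharing many identical blocks.
files = {
    "vm1.vmdk": [1, 2, 3, 4, 5, 6],
    "vm2.vmdk": [1, 2, 3, 4, 7, 8],   # mostly the same blocks as vm1
    "vm3.vmdk": [1, 2, 3, 4, 7, 9],
}

# Block-level transfer (snapmirror/snapvault style): each unique block once.
unique_blocks = {b for blocks in files.values() for b in blocks}
block_level_bytes = len(unique_blocks) * BLOCK_SIZE

# File-level transfer (NDMP/dump style): every block of every file, hydrated.
logical_blocks = sum(len(blocks) for blocks in files.values())
file_level_bytes = logical_blocks * BLOCK_SIZE

print(f"block-level sends {block_level_bytes} bytes ({len(unique_blocks)} unique blocks)")
print(f"file-level sends  {file_level_bytes} bytes ({logical_blocks} logical blocks)")
```

The repeated reads of blocks 1-4 are the 'revisits' Sebastian mentions above; they'll often come from cache, but the bytes still have to cross the wire.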
*From:* Scott Eno <s.eno@me.com>
*To:* Toasters <toasters@teaparty.net>
*Sent:* Friday, December 21, 2012 9:12 AM
*Subject:* question about NDMP dumps
In investigating why an NDMP dump over 10GbE isn't going as fast as it seems it should, a question arose. Does the data being dumped via NetBackup to a Data Domain device, 1.58TB deduped down to 813GB, have to get re-hydrated as the dump proceeds? And, if so, would that impact the speed of the dump?