Hi all,
Does anyone else here run SnapVault over a long, fat WAN link (dual T3, 90+ ms delay) and get decent performance, especially for the initial copy of the volume(s) and/or qtree(s)?
We've gone the route of doing an lrep_reader dump of a 3.8 TB qtree (don't ask...) to local disks, then using a nice tool called 'bbcp' to push it all across the WAN so that I could actually *use* all my bandwidth.
Regular SnapVault sucks for performance: it just can't push more than 15 Mbit/s, which stinks when you have dual T3 (~90 Mbit/s) of bandwidth. Using bbcp I can fill that pipe for days on end.
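For what it's worth, that ~15 Mbit/s ceiling smells like a single TCP stream limited by window size over a 90 ms path (best case throughput is roughly window / RTT), which is exactly why a multi-stream tool like bbcp can fill the pipe. A quick back-of-the-envelope sketch (the 64 KB window is an illustrative assumption, not a measured value):

```python
# Rough single-stream TCP throughput limit: window / round-trip time.
# The window sizes below are illustrative assumptions, not measured values.

RTT = 0.090  # seconds (the 90 ms delay on the link above)

def max_throughput_mbit(window_bytes, rtt=RTT):
    """Best-case single-stream throughput in Mbit/s for a given TCP window."""
    return window_bytes * 8 / rtt / 1e6

# A classic 64 KB window caps a single stream far below the pipe:
print(round(max_throughput_mbit(64 * 1024), 1))  # ~5.8 Mbit/s

# To fill ~90 Mbit/s at 90 ms you need window ~= bandwidth * RTT in flight:
bdp = 90e6 / 8 * RTT  # bandwidth-delay product in bytes
print(round(bdp / 1024))  # ~989 KB, i.e. about 1 MB of data in flight
```

So one stream with default-ish windows simply can't go faster no matter how big the pipe is; N parallel streams multiply the effective window by N.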
So once it's all across the WAN, I used lrep_writer to dump the data to its destination qtree. All well and good, but then I have to start a regular SnapVault to catch up with all the data written during the 8+ days the first stage took. Sigh...
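The arithmetic on why the first stage takes so long (pure back-of-the-envelope; real transfers add protocol and disk overhead on top):

```python
# Ballpark transfer times for the 3.8 TB initial copy at different rates.
# Pure arithmetic; real-world transfers add protocol and disk overhead.

def days_to_copy(tb, mbit_per_s):
    """Days to move `tb` terabytes at a sustained rate of `mbit_per_s`."""
    bits = tb * 1e12 * 8
    return bits / (mbit_per_s * 1e6) / 86400

print(round(days_to_copy(3.8, 15), 1))  # SnapVault at ~15 Mbit/s: ~23.5 days
print(round(days_to_copy(3.8, 90), 1))  # full dual-T3 pipe:      ~3.9 days
```

Even the lrep+bbcp route taking 8+ days is plausible against the ~3.9 day ideal once you fold in overhead, but it still beats three-plus weeks of SnapVault.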
Anyone know if I can lrep_reader/lrep_writer the next SnapVault snapshot across as well, so I can push that data faster too? Each time the link goes down between the sites, I have to start the catch-up SnapVault transfer all over again, and it's killing me.
Also, I'd love to be able to *know* how much data is left to copy from the source to the destination, but "snapvault status -l ..." doesn't give that sort of information. Or do I need to start using the 'snap delta' command and do my own math to figure things out?
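For the "my own math" route, here's the sort of thing I mean: pull the KB-changed column out of 'snap delta' output and sum it to estimate how much the next transfer would move. The sample text below is a hypothetical sketch of the output format from memory, so adjust the parsing to whatever your ONTAP version actually prints:

```python
# Sum the "KB changed" column from `snap delta` output to estimate how much
# data the next SnapVault transfer would move.  The sample below is a
# hypothetical sketch of the output format; adjust parsing to your ONTAP.

sample = """\
From Snapshot   To                   KB changed  Time      Rate (KB/hour)
hourly.0        Active File System   149812      0d 03:43  40223.9
hourly.1        hourly.0             93640       0d 08:00  11705.0
"""

def total_kb_changed(text):
    """Add up the first all-digit field on each row (the KB-changed column)."""
    total = 0
    for line in text.splitlines():
        for field in line.split():
            if field.isdigit():
                total += int(field)
                break  # only the KB-changed column, not Time/Rate
    return total

print(total_kb_changed(sample))  # 243452 KB, i.e. roughly 238 MB to move
```

Crude, but it would at least give a delta-per-snapshot number to watch until something like "snapvault status" grows a bytes-remaining field.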
Thanks,
John

John Stoffel - Senior Staff Systems Administrator - System LSI Group
Toshiba America Electronic Components, Inc. - http://www.toshiba.com/taec
john.stoffel@taec.toshiba.com - 508-486-1087