John, We have used lrep repeatedly to push out a SnapMirror copy of our tools repository (2 TB and growing) to 8 sites so far, with more on the way. We are not using it for SnapVault, but the premise is the same.
We break it up, compress it, and write the files to 400 GB PC SATA drives. We ship the drives to the new location, copy and uncompress the files back into a contiguous area, and then use lrep_writer to rebuild it. We have not shipped those files across the WAN; it would take too long. We are generally out of sync for 3 weeks by the time it is all said and done, and the resync takes us several days. Still, it is better than the estimated months it was going to take us to do this across the WAN.
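Roughly, the source side looks something like the following (a sketch with placeholder paths, assuming GNU-style split and gzip; the lrep_reader options themselves vary by version, so check the tool's own usage):

    # dump the baseline with lrep_reader into a local staging area first,
    # then cut the image into drive-sized, compressed chunks:
    split -b 10240m /staging/lrep_image chunk_
    for f in chunk_*; do gzip "$f"; done
    # copy the chunk_*.gz files onto the SATA drives and ship them

At the destination it is just the reverse: copy the chunks back into one contiguous area, gunzip them, cat them back together in order, and feed the result to lrep_writer.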
As I understand it, you cannot use the lrep reader/writer for anything beyond the initial baseline transfer, unless they have come up with new tools. Also, an lrep transfer cannot die in the middle; if it does, you have to start it over (unlike a SnapMirror initialize, which can be restarted).
As for figuring out how much data you have to move, that comes down to manual 'snap delta' calculations as far as I know. C-
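For example, something along these lines on the source filer (placeholder volume and snapshot names; check your ONTAP version for the exact 'snap delta' syntax):

    filer> snap delta vol1 sv_hourly.1 sv_hourly.0
    # reports the KB changed between the two snapshots; adding up the
    # deltas from the last transferred snapshot to the newest one gives
    # a rough figure for how much data the next update has to move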
John Stoffel wrote:
Hi all,
Does anyone else here do SnapVault over a long, fat WAN link (dual T3, 90+ ms delay) and get decent performance, especially for the initial copy of the volume(s) and/or qtree(s)?
We've gone the route where I did an lrep_reader dump of a 3.8 TB qtree (don't ask...) to local disks. Then I used a nice tool called 'bbcp' to push it all across the WAN so that I could actually *use* all my bandwidth.
Regular SnapVault sucks for performance; it just can't push more than about 15 Mbit/s, which stinks when you have dual T3 (~90 Mbit/s) of bandwidth. Using bbcp I can fill that pipe for days on end.
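The bbcp invocation was along these lines (a sketch with placeholder paths; the stream count and window size are illustrative and need tuning for the link, so check your bbcp build's help for the exact options):

    bbcp -P 10 -s 8 -w 4m /local/lrep_dump/* user@remote-host:/staging/
    # -s sets the number of parallel TCP streams and -w the per-stream
    # window size; on a ~90 ms path the larger windows and multiple
    # streams are what let you actually fill the pipe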
So then once it's all across the WAN, I used lrep_writer to dump the data to its destination qtree. All fine and good, but then I have to start a regular SnapVault to catch up with all the data written during the 8+ days the first stage took. Sigh...
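The catch-up itself is just the normal SnapVault relationship commands on the secondary, something like the following (placeholder filer and path names; the exact options for picking up an lrep-seeded baseline are in the lrep docs):

    secondary> snapvault start -S primary:/vol/src/qtree /vol/dst/qtree
    secondary> snapvault update /vol/dst/qtree
    secondary> snapvault status -l /vol/dst/qtree    # watch the transfer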
Does anyone know if I can lrep_reader/lrep_writer the next SnapVault snapshot across as well, so I can push the data faster? Each time the link goes down between the sites, I have to start the catch-up SnapVault transfer all over again, and it's killing me.
Also, I'd love to be able to *know* how much data is left to copy from the source to the destination, but "snapvault status -l ..." doesn't give that sort of information. Or do I need to start using the 'snap delta' command and my own math to figure it out?
Thanks,
John

John Stoffel - Senior Staff Systems Administrator - System LSI Group
Toshiba America Electronic Components, Inc. - http://www.toshiba.com/taec
john.stoffel@taec.toshiba.com - 508-486-1087