When I need to do this kind of copying, I use "rdist".
Simply mount both servers on the same host, and have rdist copy to the localhost. (The transfer host must have root privileges on both filer filesystems.)
    mkdir /a /b
    mount fa:/ /a
    mount fb:/ /b
    rdist -R -c /a localhost:/b
Keep re-running rdist until you are ready to do the final sync.
Of course, this only moves the files. If you are dealing with access control lists or non-unix filesystems, this won't capture everything. Another thing you may wish to consider is that the simple rdist command above will also transfer the /etc directory, which is probably NOT what you want. Consider creating a Distfile breaking things up into manageable chunks.
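As a sketch, a Distfile along these lines copies only the directories you name (so /etc on the source is never touched) and splits the tree into separately runnable chunks. The /a/home and /a/data paths here are just examples; substitute your own top-level directories:

    # Hypothetical Distfile -- paths are illustrative.
    HOSTS = ( localhost )
    HOME_DIRS = ( /a/home )
    DATA_DIRS = ( /a/data )

    home:
    ${HOME_DIRS} -> ${HOSTS}
    	install -R /b/home ;

    data:
    ${DATA_DIRS} -> ${HOSTS}
    	install -R /b/data ;

Run the whole thing with "rdist -f Distfile", or re-sync a single chunk by its label, e.g. "rdist -f Distfile home".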
Anthony Fiarito wrote:
Here is the scenario:
I have two identically configured F630 filers. Filer A is active and contains about 150 gigs of data (all in one volume). Filer B is brand new and empty. I need to replicate the data from Filer A to Filer B with the minimal amount of downtime for Filer A. I could do a vol copy to move the data, but in the time that it would take to complete, much of the data would have changed and would be out of date.

I was thinking of some sort of solution where I could do a vol copy (or something similar) of the data from Filer A to Filer B while Filer A was up and running. Once that was done, I would shut down Filer A and resync the changes that were made to Filer A during the copy to Filer B (thus giving me an exact copy of the data on each netapp). It would be similar to doing a level 0 dump while everything was active, and then a level 1 while they were shut down. The problem, though, is that I need them to be synched exactly (i.e. files/dirs that get deleted from Filer A during the initial copy would also get deleted from Filer B during the sync phase).

Another option might be to do an rdist from Filer A to Filer B of the data while they are active, isolate them from the network once that is done, and re-run rdist to sync any changes that took place during the initial move. With 150 gigs of data, though, this would probably take a ReallyLongTime[tm], thus not being a viable solution.
Has anyone tackled a situation like this before? What methods did you use, and in what sort of time frame were you able to perform the replication?
Thanks in advance for any feedback.
-alf
Anthony Fiarito alf@cp.net Critical Path Operations