I've got a f230 with 4 shelves of 4GB disks and its replacement, an f740. I talked to NetApp support, and they recommended that the best method to move the data from the f230 to the f740 would be to go to tape with dump and restore. I can't believe that there isn't a more efficient way to do this. We are an ISP and all mail/ftp/http is served off of the NetApp; any downtime will be painful, so we'd like to minimize it as best we can.
What choices to migrate the data do we have? We've considered tar/cpio over NFS, but that's going to take some time even if we migrate one service at a time. dump/restore seems clumsy, and plain ole 'cp' would take forever. Apparently there are issues with volcopy going from SCSI to FCAL; is that true?
On Wed, 24 May 2000, Kelsey Cummings wrote:
I've got a f230 with 4 shelves of 4GB disks and its replacement, an f740. I talked to netapp support and they recommended that the best method to move the data from the f230 to the f740 would be to tapes with dump and restore. I can't believe that there isn't a more efficient way
I just went through this 6 months ago with a similar setup, an F230 with 2 shelves of 9GB drives moving to an F740. What I ended up doing was to take a level 0 dump of the original machine and load it onto the new one. I did this across the network so that I didn't have to worry about changing tapes. Once the level 0 dump/restore was finished, I kicked everybody off of the old machine (shut down NFS and unexported everything) and ran a level 1 dump/restore over to the new machine. As soon as the level 1 finished, I completely shut down both machines, renamed the new one to have the same name as the original, and rebooted it. As far as the users were concerned, total downtime was less than an hour.

However, the entire operation took most of a weekend, primarily because of doing the dump/restore across the network. If you have compatible tape drives on both filers, you can probably speed up the level 0 dump/restore by using tapes.

You probably will also be able to move your data a little faster than I did, because you probably don't have as much data on your 230 as I had on mine. I had over 95GB of active data on the thing, which, if you do the math, means I had 13 active data drives, no hot spare, and no snapshot area. In fact, the filer was so full that the first time I attempted to run the dump, it aborted because there was not enough space left to create the mapping files that dump uses.
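The two-phase procedure above can be sketched roughly as follows. The hostnames, volume path, and the RSH wrapper are placeholders, not the exact commands I ran; RSH defaults to "echo rsh" so the script is a dry run that only prints what it would do:

```shell
#!/bin/sh
# Dry-run sketch of the two-phase network dump/restore described above.
# "f230", "f740", and /vol/vol0 are placeholder names. RSH defaults to
# "echo rsh", so running this only prints the commands it would issue.
RSH=${RSH:-echo rsh}
SRC_FILER=f230
DST_FILER=f740
VOL=/vol/vol0

migrate_level() {
    # dump at the given level on the source filer, piped straight into
    # restore on the destination filer (no tapes involved)
    $RSH "$SRC_FILER" "dump ${1}f - $VOL" | $RSH "$DST_FILER" "restore rf -"
}

migrate_level 0   # phase 1: full copy while users stay online
# ...shut down NFS and unexport everything on the F230 here...
migrate_level 1   # phase 2: small incremental during the outage window
# then halt both filers, rename the F740 to the old hostname, and reboot.
```

The point of the split is that the slow level 0 runs with users still online; only the small level 1 has to fit inside the outage window.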
On Wed, 24 May 2000, Kelsey Cummings wrote:
What choices to migrate the data do we have? We've considered tar/cpio over nfs but that's going to take some time even if we migrate one service at a time.
I had to do exactly that just a couple of weeks ago, from two F230's with two shelves of 4GB drives (about 35GB of data on each) to two F740's. It helps if you have multiple quota trees and exports, which allows you to move one filesystem at a time, rather than an entire filer at a time.
I have a script that does an initial ndmpcopy from the F230 filesystem to populate the F740. Then the script goes into an endless loop updating the target filesystem with rsync (http://rsync.samba.org/) until I signal it to stop. This is where dividing up the work into filesystems helps a lot, otherwise rsync would have spent hours building up its file tables for an entire filer. The F230's were providing mail spool storage, so I definitely wanted to suspend service for a couple of hours during our maintenance window so I could do a read-only export, and issue a final rsync to ensure both old and new filesystems were exactly identical.
The work was split over two weekends, with some filesystems mounted from the F740's and some from the F230's during the transition. I would have used incremental dump/restore via ndmpcopy, but I don't believe that accounts for files deleted off the source filesystem, whereas rsync does. rsync has a lot of options to play with. The ones I used are:
rsync --archive --delete --exclude ".snapshot/" --links --recursive --stats --verbose
I deliberately left out --update to ensure all files are copied over, not trusting file timestamps on the source.
The work was split over two weekends, with some filesystems
mounted from the F740's and some from the F230's during the transition. I would have used incremental dump/restore via ndmpcopy, but I don't believe that accounts for files deleted off the source filesystem, whereas rsync does.
Incremental NDMPcopies will track file renames and deletes. The destination file system will look EXACTLY like the source file system at the time of the last incremental copy.
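For completeness, an incremental ndmpcopy pass looks roughly like this on that era of ONTAP. The hostnames, paths, and credentials are placeholders, and the option set should be checked against your release's ndmpcopy documentation:

```
ndmpcopy -sa root:password -da root:password -l 0 \
    f230:/vol/vol0/home f740:/vol/vol2/home
# later, during the outage window, a level 1 pass carries the changes:
ndmpcopy -sa root:password -da root:password -l 1 \
    f230:/vol/vol0/home f740:/vol/vol2/home
```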
All of our machines are on a three-year lease. That means that I must migrate my data every three years. I used the following type of command to migrate data directories. I have cut this information from my documentation. In this case, I am attempting to move everything called "maverick". Average speed is about 18GB/hr across a 100Mb connection, and a little less if you are 10Mb only. My migrations were from an F330, with 10Mb connectivity only, to an F760 with quad 100Mb connections.
13.) Begin data migration.

# cd /mounts/toaster/toast14/home/db
# for dn in `ls -d maverick*`
> do
>     rsh toast14 "dump 0fb - 63 /home/db/$dn" | rsh halifax-c "restore rfD - /vol/vol2/$dn"
>     echo "."; echo "."; echo "."; echo "."; echo "."
>     echo $dn
>     sleep 60
> done
DUMP: creating "snapshot_for_dump.1" snapshot. creating...
RESTORE: No terminal available for input; using null device
........................
DUMP: Date of this level 0 dump: Mon Nov 9 17:18:24 1998
DUMP: Date of last level 0 dump: the epoch
DUMP: Dumping /home/db/maverick to standard output
DUMP: mapping (Pass I) [regular files]
DUMP: mapping (Pass II) [directories]
DUMP: estimated 2110299 tape blocks.
DUMP: Dumping volume 1 on standard output
DUMP: dumping (Pass III) [directories]
DUMP: dumping (Pass IV) [regular files]
DUMP: 16% done, finished in 0:26
DUMP: 38% done, finished in 0:16
DUMP: 60% done, finished in 0:09
DUMP: 84% done, finished in 0:03
DUMP: 2016247 tape blocks
a.) Change to the source directory.
b.) Create a list of directories to move and use that as input to a for loop.
c.) remsh to the source filer and dump to stdout; pipe that into another remsh to the destination filer and restore from stdin.
d.) This is just an echo of the directory name.
e.) Add a sleep statement to allow the snapshot time to delete.
f.) Close the for loop.
14.) Edit the auto.db file to direct the automount point to the new location.
maverick       halifax-c:/vol/vol2/maverick
maverickSim    halifax-c:/vol/vol2/maverickSim
--
---------------------------------------------------------------
G D Geen                       mailto:geen@ti.com
Texas Instruments              Phone : (214)480.7896
System Administrator           FAX   : (214)480.7676
---------------------------------------------------------------
Life is what happens while you're busy making other plans. -J. Lennon