I need to move about 56GB of data from one F520 to another. Does anyone have experience using ndmpcopy for that? It looks like it might fit the bill, but I'd be interested in hearing of any good-bad-indifferent experiences with it, and particularly what rate it can transfer data from one filer to another.
On Tue, Jul 21, 1998 at 01:37:58PM -0700, Jim Davis wrote:
I need to move about 56GB of data from one F520 to another. Does anyone have experience using ndmpcopy for that? It looks like it might fit the bill, but I'd be interested in hearing of any good-bad-indifferent experiences with it, and particularly what rate it can transfer data from one filer to another.
An answer here will help me out as well...
Gotta move 20GB from a little F210 to a new box...
On Tue, 21 Jul 1998, Jim Davis wrote:
I need to move about 56GB of data from one F520 to another. Does anyone have experience using ndmpcopy for that? It looks like it might fit the bill, but I'd be interested in hearing of any good-bad-indifferent experiences with it, and particularly what rate it can transfer data from one filer to another.
A few weeks ago I moved a similar amount from a 230 to a 630, and it worked beautifully. On small segments it even supports level-based dumps, so if the tree you're copying is live, you can freshen it with newer data afterwards. On large file systems (we had trouble with anything over a few gig) the level 1 tends to reboot the source filer. Level 0s are always fine.
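From memory, the two passes looked roughly like this (hostnames and paths here are made up, and I'm recalling the -level flag name from an old README, so check ndmpcopy's usage output before trusting it):

  # full copy of the tree (level 0)
  ./ndmpcopy -sa root:xxxx -da root:xxxx -level 0 oldfiler:/home newfiler:/vol/vol0/home

  # once that finishes, freshen the live tree with just the changes (level 1)
  ./ndmpcopy -sa root:xxxx -da root:xxxx -level 1 oldfiler:/home newfiler:/vol/vol0/home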
ymmv.
On Tue, Jul 21, 1998 at 01:37:58PM -0700, Jim Davis wrote:
I need to move about 56GB of data from one F520 to another. Does anyone have experience using ndmpcopy for that? It looks like it might fit the bill, but I'd be interested in hearing of any good-bad-indifferent experiences with it, and particularly what rate it can transfer data from one filer to another.
Well I have but 4GB to move and am having a terrible time...
mrz@nimba [~myl/ndmpcopy/src/] 306> ./ndmpcopy rogue:/satools miela:/vol/vol0/satools -sa root:something -da root:something
Connecting to rogue.
Connecting to miela.
rogue: CONNECT: Connection established.
miela: CONNECT: Connection established.
rogue: LOG: DUMP: creating "snapshot_for_dump.28" snapshot.
rogue: LOG: DUMP: Date of this level 0 dump: Tue Jul 21 16:35:58 1998
rogue: LOG: DUMP: Date of last level 0 dump: the epoch
rogue: LOG: DUMP: Dumping /satools/ to NDMP connection
rogue: LOG: DUMP: mapping (Pass I) [regular files]
rogue: LOG: DUMP: mapping (Pass II) [directories]
rogue: LOG: DUMP: estimated 3012888 tape blocks.
rogue: LOG: DUMP: dumping (Pass III) [directories]
rogue: LOG: DUMP: dumping (Pass IV) [regular files]
rogue: LOG: DUMP: 8% done, finished in 0:55
rogue: LOG: DUMP: 17% done, finished in 0:47
rogue: LOG: DUMP: 26% done, finished in 0:42
rogue: LOG: DUMP: 34% done, finished in 0:39
Which is all nice and stuff, but:
sunflower# pwd
/net/miela/vol/vol0
sunflower# ls -l satools/
total 0
sunflower# ls
cswtools/  dtools/  etc/  home/  nmtools/  satools/  swinfo/
sunflower# ls -R cswtools/ dtools/ home/ nmtools/ satools/ swinfo/
cswtools/:
dtools/:
home/:
nmtools/:
satools/:
swinfo/:
WHERE'S MY @#(*&& DATA?!?!?
- mz
Bah. 83% into things, my directories and files appeared. Guess I was looking for a bit more feedback than a df showing my disk space going away.
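If anyone else wants a poor man's progress meter, the best I came up with was watching the destination's disk usage tick up from an admin host - something like this, assuming rsh administrative access to the filer is already set up:

  # crude progress check: poll the destination filer's df once a minute
  while true; do
      rsh miela df
      sleep 60
  done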
- mz
On Tue, 21 Jul 1998, matthew zeier wrote:
Bah. 83% into things, my directories and files appeared. Guess I was looking for a bit more feedback than a df showing my disk space going away.
Actually, you've stumbled across a bug/feature that I can't get an answer out of NetApp on. When doing a restore (which is what the destination filer sees this as), you don't see the data you are restoring until one of two things happens:
A. It finishes.
B. You unmount and remount the filesystem.
Bizarre? Yes. Explainable...I'm sure.
This doesn't concern me as much in this scenario as it does for the day I restore my 200 gig of data onto a filer, and can't see any of the data until the restore is done. From a disaster recovery standpoint, I'd assumed that a restore would get some file structures in place right away, and things would continue to build in the background while I went on with other things.
Looks like that won't happen.
On Tue, Jul 21, 1998 at 09:56:28PM -0400, Matt Stein wrote:
On Tue, 21 Jul 1998, matthew zeier wrote:
This doesn't concern me as much in this scenario as it does for the day I restore my 200 gig of data onto a filer, and can't see any of the data until the restore is done. From a disaster recovery standpoint, I'd assumed that a restore would get some file structures in place right away, and things would continue to build in the background while I went on with other things.
Looks like that won't happen.
Yep. Aside from my 'df' output I had no feedback that this was actually working. Since this was a non-critical move, I just let it run itself out. But without any real feedback I might have just killed it, figuring it wasn't working.
- mz
On Tue, 21 Jul 1998, matthew zeier wrote:
Bah. 83% into things, my directories and files appeared. Guess I was looking for a bit more feedback than a df showing my disk space going away.
Actually, you've stumbled across a bug/feature that I can't get an answer out of NetApp on. When doing a restore (which is what the destination filer sees this as), you don't see the data you are restoring until one of two things happens:
A. It finishes.
B. You unmount and remount the filesystem.
Well, here's a Netapp answer. I've seen this behavior. But, by no means does it happen every time. Furthermore, the Netapp box _knows_ the data is there because I see the result of 'df' and 'df -i' (on the box) changing.
I also know the system finds the files if I do a dump of the directory being restored. By that I mean that dump sees the files. (No, this is not useful, but it's how I have fun).
So, the filer sure knows about these things. Also, when this happens, and I mount the filer on a different machine, everything shows up.
I don't want to point fingers at the NFS client, necessarily... but maybe some interaction?
Bizarre? Yes. Explainable...I'm sure.
This doesn't concern me as much in this scenario as it does for the day I restore my 200 gig of data onto a filer, and can't see any of the data until the restore is done. From a disaster recovery standpoint, I'd assumed that a restore would get some file structures in place right away, and things would continue to build in the background while I went on with other things.
I agree that there is a shortcoming. Restore should let you know what's happening.
For now, though, you could always do a "sysstat" on the filer console, and see that the filer is doing something. Of course, that's better for tape than NDMPcopy. After all, if you see the tape #'s doing something, you can say - "Yeah, that's restore." Not so easy when the data is coming in off the net...
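In case it helps, what I mean is just this on the destination filer's console (the argument is the refresh interval in seconds; Ctrl-C stops it):

  toaster> sysstat 1

For an NDMPcopy or network restore you'd be watching the network and disk-write columns rather than the tape ones.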
Also, as a caveat, there are stages of restore where sysstat shows no tape activity. So, my idea isn't 100% foolproof.
So, my last comment would be : Restore does an awful lot of complaining when it isn't happy. So if it isn't saying anything, you're doing OK...
Stephen Manley
FS Recovery Engineer
On Wed, 22 Jul 1998, Stephen Manley wrote:
I've seen this behavior. But, by no means does it happen every time. Furthermore, the Netapp box _knows_ the data is there because I see the result of 'df' and 'df -i' (on the box) changing.
Fair enough. I've ndmpcopied a tonne of stuff, and this behaviour has never looked any different. For me it happens every time... but that's just me.
As for the netapp knowing the data's there, yes, absolutely. If I mount on another machine, the data is definitely there.
I don't want to point fingers at the NFS client, necessarily... but maybe some interaction?
FreeBSD, HPUX, and Solaris checked so far. Remounting causes the data to show up, as does the completion of the backup. That doesn't sound like the client to me. What does the netapp do at the end of the restore that would otherwise happen when remounting?
On Wed, 22 Jul 1998, Stephen Manley wrote:
For now, though, you could always do a "sysstat" on the filer console, and see that the filer is doing something. Of course, that's better for tape than NDMPcopy. After all, if you see the tape #'s doing something, you can say - "Yeah, that's restore." Not so easy when the data is coming in off the net...
Sorry, I should have been more clear. My filer contains a tree of data that a web server serves up to the net. In a mass restore situation, I want to put the web server up right away on that small amount of data, and let the rest fill in in the background.
The idea here was that once we kicked off the restore, some data would be available right away, rendering the system 'up'. Proper balancing and placement of files on the filesystem would even ensure that the most important data came back first.
+--- In a previous state of mind, matthew zeier mrz@3com.com wrote:
|
| Well I have but 4GB to move and am having a terrible time...
I had meant to send email earlier today about all this ndmpcopy stuff. Since I last touched it several months ago, the details of my experience are a bit sketchy.
The toasters archive should have all the details of my journey (it was not all that successful).
rsync may be quicker.
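Something along these lines, run from a client that automounts both filers (the paths here are just placeholders):

  # -a preserves permissions/times/links; trailing slashes copy contents into contents
  rsync -av /net/oldfiler/vol/vol0/home/ /net/newfiler/vol/vol0/home/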
Alexei
On Tue, Jul 21, 1998 at 10:02:15PM -0400, Alexei Rodriguez wrote:
+--- In a previous state of mind, matthew zeier mrz@3com.com wrote:
|
| Well I have but 4GB to move and am having a terrible time...
I had meant to send email earlier today about all this ndmpcopy stuff. Since I last touched it several months ago, the details of my experience are a bit sketchy.
The toasters archive should have all the details of my journey (it was not all that successful).
rsync may be quicker.
My machine was dying under a tar cuz I had to yank 4GB through my 10Mbps pipe. ndmpcopy took about 30 minutes from filer -> filer over their 100Mbps connections.
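For what it's worth, 4GB in about 30 minutes works out to a bit over 2MB/s (call it 18-19Mbit/s sustained), so Jim's 56GB ought to be on the order of 7 hours at that rate.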
- mz
I was able to use ndmpcopy to dump /home from F520-old to F520-new in about 6 hours. (Not too bad.) I'm not quite sure this makes sense, but can .snapshot be ndmpcopied too?
Naively trying ndmpcopy oldfiler:/.snapshot newfiler:/vol/vol0/.snapshot generated a
subtree path must refer to a specific snapshot name in ".snapshot".
error message. But trying weekly.1, specifically, then generated a
newfiler: /vol/vol0/.snapshot/weekly.1 - cannot create directory: Read-only file system
error message (after spending 36 minutes mapping files and directories).
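My guess, from those two errors, is that the source has to name a specific snapshot and the destination has to be an ordinary writable directory, i.e. something like the following (untried, and the destination directory name is just made up):

  ./ndmpcopy -sa root:xxxx -da root:xxxx oldfiler:/.snapshot/weekly.1 newfiler:/vol/vol0/weekly.1.restored

but I'd be happy to hear from anyone who's actually done it.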