Ever since late 6.4 releases of ndmpcopy, the qtree is recreated just like the source. I have done many data migrations this way and it works well.

Volcopy also uses the loopback interface, and if you have wide/deep directories it is significantly faster than any file copy operation, including ndmpcopy.

Also, if you are on the same filer you do not need -sa and -da. If you are on the source filer you do not need -sa, and if you are on the destination filer you do not need -da.

--tmac

======================
Tim McCarthy
Professional Services/Systems Engineer
NetApp Federal Systems, Inc.
410-551-3970 (o)
443-363-0208 (f)
tmac@netapp.com
tmac-pager@netapp.com
======================

________________________________________
From: Stephane Bentebba [mailto:stephane.bentebba@fps.fr]
Sent: Tuesday, January 18, 2005 11:52 AM
To: toasters@mathworks.com
Subject: Re: vol copy

I would advise you to use the ndmpcopy command. It copies without passing through the network if it is a local copy. Note that ndmpcopy doesn't remember the "qtree state" of a directory, so you have to create the qtree before copying into it. For example:

    filer> qtree create /vol/vol1/myqtree
    filer> ndmpcopy -sa root:mypass -da root:mypass -st text -dt text -l0 /vol/vol0/myqtree/* /vol/vol1/

With [-l { 0 | 1 | 2 }] you can make a full copy, then incrementals at two different levels if you like. We manage some data migrations (from old SCSI technology to newer FC) this way.

devnull@adc.idt.com wrote:

What would be really cool is if you would let us copy/move qtrees between volumes. I find myself having to do that more often than I copy entire volumes. Right now I end up doing it over the network, which kinda seems unnecessary.
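[Putting tmac's notes together: on a purely local copy, run from the filer that holds both volumes, the auth flags can be dropped entirely, and on 6.4+ releases the qtree pre-create is no longer needed since the qtree is recreated like the source. A minimal sketch, with hypothetical volume and qtree names — verify flag behavior against your ONTAP release:

    filer> ndmpcopy -st text -dt text -l 0 /vol/vol0/myqtree /vol/vol1/myqtree

A later -l 1 or -l 2 pass can then pick up changes made since the level-0 full, per Stephane's note above.]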
Thanks

On Thu, 6 Jan 2005, Jerry wrote:

Date: Thu, 6 Jan 2005 07:14:10 -0800 (PST)
From: Jerry <juanino@yahoo.com>
To: "McCarthy, Tim" <timothy.mccarthy@netapp.com>, Paul Galjan <galjan@gmail.com>, Ben Rockwood <brockwood@homestead-inc.com>
Cc: toasters@mathworks.com
Subject: vol copy

Whoa. That's really cool.

--- "McCarthy, Tim" <timothy.mccarthy@netapp.com> wrote:

How "wide" are your directories? They play a very important role in the backup process. The more files in any given directory, the longer it takes to parse.

How about this:

VOLCOPY

This only works at a volume level; it is all or nothing, and no incrementals are available without a snapmirror license. You could do this:

Source=vol1
Dest=dvol

From the Dest:

    vol restrict dvol

From the source:

    vol copy -S start vol1 dest:dvol    (that is a capital S)

This will copy the source volume, including all snapshots, to the destination volume. (If you do not want the snapshots, drop the -S.)

The catch: you must copy the whole volume, and you must use a whole volume on the destination to copy to. Any information on the destination volume will be eradicated.

This will work in 6.x and 7.x, although in 7.x you can only copy between like volumes: Flex to Flex and Traditional to Traditional.

--tmac

-----Original Message-----
From: Paul Galjan [mailto:galjan@gmail.com]
Sent: Wednesday, January 05, 2005 10:32 PM
To: Ben Rockwood
Cc: toasters@mathworks.com
Subject: Re: NDMP Tuning

Hey, you've always got cygwin ;-). Seriously, though. 2.3M files is a serious number of files. I used to have a 180G home directory partition for about 300 users with only about half that number of inodes. Even with rsync, it took about 4 hours to move that guy over to the destination, even when less than 500 MB had changed. It underscores the point that block level replication will have better performance than file level replication, whether you end up using QSM or ndmpcopy.
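[Laid out end to end, tmac's vol copy procedure looks like the following; the volume names are from his example, and the status check and final online step are my additions — worth confirming against your ONTAP release:

    dest>   vol restrict dvol
    source> vol copy -S start vol1 dest:dvol
    source> vol copy status
    dest>   vol online dvol

The destination must stay restricted for the duration of the copy; it is brought online only after the copy completes.]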
It really is worth it to bang on your rep to get snapmirror for an amount that you can afford. I almost guarantee it would save you orders of magnitude in replication time.

--paul

On Wed, 05 Jan 2005 18:19:48 -0800, Ben Rockwood <brockwood@homestead-inc.com> wrote:

Paul Galjan wrote:

Cool then. In that case I would look at rsync and/or robocopy (in a Windows-only env). Not that rsync is a block level protocol (it evaluates at the file level), but perhaps it would provide better performance with smaller backup windows?

Rsync is certainly a possibility. I'm afraid I'd have some problems, as in this environment the filers are being used CIFS-only, which makes file level interaction for an old UNIX zealot like me less than entertaining.

To put a better point on it: NDMP is just a wrapper around the UNIX dump command. It's no better nor worse than it, and that's the reason I asked. The dump command (and NDMP by extension) is for backup, not DR. It is a clunky protocol in terms of straight replication, and that's why NetApp and others offer alternatives for replication...

In the end though, we should get to your problem: how many inodes are we looking at? And what happens in Pass 4, Stage 1? The inode number would be my first suspect.

Right. I haven't looked at the code itself to see exactly what it's doing (I probably should at some point), but Stage 1 of Pass IV seems to be all about inode creation prior to copying in all the data. The source volume has 2.3 million inodes in use. That doesn't seem like an outrageous number, and this is a pretty small filer all things considered. How creation of 2.3 million inodes can consume 3 hours is beyond my understanding. During that time the destination filer's CPU is nearly idle. The only explanation I can dream up is that the process of creating inodes is happening so quickly that the bulk of system time is spent in context switches, not in execution, and hence a false sense of idleness...
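[For the rsync route Paul mentions, the usual shape is to mount both filers over NFS on an intermediate host and let rsync compute the deltas; the mount points and export paths here are hypothetical:

    host# mount filer1:/vol/vol1 /mnt/src
    host# mount filer2:/vol/dvol /mnt/dst
    host# rsync -a --delete /mnt/src/ /mnt/dst/

The trailing slashes matter to rsync: "/mnt/src/" copies the contents of the directory rather than the directory itself. As Paul's numbers suggest, the per-file scan still dominates on wide directory trees even when little data has changed, which is why block level replication wins here.]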
but that's a pretty BS explanation, since even if that were the case it still wouldn't take 3 hours. All the evidence I've seen thus far with NDMP suggests that I just need to turn up the flow. Is there an idle loop in the dump code? That's exactly what it feels like. Anyone know if there is an ONTAP equivalent to truss?

benr.

--paul

On Wed, 5 Jan 2005 17:12:25 -0800, Ben Rockwood <BRockwood@homestead-inc.com> wrote:

Hey Paul. Because I never said I was building a "quick disaster recovery" solution. :) The recovery system I'm building is more of an "at least we've got another copy" solution. We don't have cash for a nearline, which leaves us in a hole. I'm looking to temporarily fill that hole by leveraging old 840s to at least keep a copy of the data on until we can one day cough up the cash for a proper nearline.

I'm wanting to use NDMP perhaps predominantly because this is what it was intended to do. SnapMirror and SnapVault are undoubtedly the better solutions, but I'd like to try and utilize NDMP rather than just give up on it as a slow, useless system of backup/recovery. If NDMP would just run at the speeds that the filers are capable of, I'd be doing OK. I'm leaving SnapMirror/SnapVault off the table for now.

benr.

-----Original Message-----
From: Paul Galjan [mailto:galjan@gmail.com]
Sent: Wed 1/5/2005 4:52 PM
To: Ben Rockwood
Cc: toasters@mathworks.com
Subject: Re: NDMP Tuning

Hi Ben,

I'll be the first to say that this doesn't answer your question, but

=== message truncated ===

/dev/null
devnull@adc.idt.com