How "wide" are your directories? They play a very important role in the backup process. The more files in any given directory, the longer it takes to parse.
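A quick way to gauge this from any host that can see the files (a generic sketch, not an ONTAP command) is to count direct entries per directory, since scan time grows with entry count:

```python
import os

def widest_dirs(root, top=5):
    """Count direct entries in each directory under root and return
    the widest ones -- the likely hot spots during a backup scan."""
    counts = []
    for dirpath, dirnames, filenames in os.walk(root):
        counts.append((len(dirnames) + len(filenames), dirpath))
    return sorted(counts, reverse=True)[:top]
```

Running this over a home-directory tree before the backup window will tell you whether a few very wide directories dominate.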
How about this: VOLCOPY
This only works at the volume level; it is all or nothing, and no incrementals are available...without a SnapMirror license.
You could do this:
Source = vol1, Dest = dvol
From the destination:
vol restrict dvol
From the source:
vol copy -S start vol1 dest:dvol (note: that is a capital S)
This will copy the source volume, including all snapshots, to the destination volume. (If you do not want the snapshots, drop the -S.)
The catch: you must copy the whole volume, and you must dedicate a whole volume on the destination as the target. Any information on the destination volume will be eradicated.
This works in 6.x and 7.x, although in 7.x you can only copy between like volume types: flex to flex and traditional to traditional.
--tmac
-----Original Message-----
From: Paul Galjan [mailto:galjan@gmail.com]
Sent: Wednesday, January 05, 2005 10:32 PM
To: Ben Rockwood
Cc: toasters@mathworks.com
Subject: Re: NDMP Tuning
Hey, you've always got cygwin ;-).
Seriously, though: 2.3M files is a serious number of files. I used to have a 180 GB home directory partition for about 300 users with only about half that number of inodes. Even with rsync, it took about 4 hours to move that guy over to the destination, even when less than 500 MB had changed.
It underscores the point that block level replication will have better performance than file level replication, whether you end up using QSM or ndmpcopy. It really is worth it to bang on your rep to get snapmirror for an amount that you can afford. I almost guarantee it would save you orders of magnitude in replication time.
--paul

On Wed, 05 Jan 2005 18:19:48 -0800, Ben Rockwood <brockwood@homestead-inc.com> wrote:
[...] is nearly idle. The only explanation I can dream up is that the process of creating inodes is happening so quickly that the bulk of system time [...] need to turn up the flow. Is there an idle loop in the dump code? That's exactly what it feels like. Anyone know if there is an OnTap equivalent solution. :)
The recovery system I'm building is more of an "at least we've got another copy" solution. We don't have cash for a nearline, which leaves us in a hole. I'm looking to temporarily fill that hole by leveraging old 840s to at least keep a copy of the data on until we can one day cough up the cash for a proper nearline. I want to use NDMP perhaps predominantly because this is what it was intended to do. SnapMirror and SnapVault are undoubtedly the better solutions, but I'd like to try to utilize NDMP rather than just give up on it as a slow, useless system of backup/recovery. If NDMP would just run at the speeds the filers are capable of, I'd be doing OK. I'm leaving SnapMirror/SnapVault off the table for now.
[...] much tuning is possible, but I'm trying to work out some serious slowness in NDMP level 0s.
Plenty of people have had these issues before, but I'm not finding solutions on NOW or in forums. Here is a time breakdown of an L0 I did:
[...] itself. The total transfer is about 58 GB from one 760 to another. It's the first stage of Pass IV that really bothers me. During this first part of the pass there is very low CPU utilization and little I/O. I need to speed up the process. Since the destination is a recovery filer and not serving data, I don't care if its CPU gets slammed or I/O is pushed through the roof; I just need it done quicker.
Is it throttling, or can I somehow speed it up? I'm using gig as the interconnect, but as I understand it, Pass IV Stage 1 is all about inode creation, whereas Stage 2 is the actual data transfer. The data transfer rate is averaging roughly 11 MB/s between the two filers, which is less than I'd like to see as well; the filer should be capable of handling a transfer rate of 30 MB/s pretty easily.
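Back of the envelope, the rate gap matters a lot at scale. A rough sketch of the transfer times at the observed versus hoped-for rates (plain arithmetic, no filer specifics assumed):

```python
def transfer_hours(size_gb, rate_mb_s):
    """Hours to move size_gb at a sustained rate_mb_s (1 GB = 1024 MB)."""
    return size_gb * 1024 / rate_mb_s / 3600

# the 58 GB test volume
observed = transfer_hours(58, 11)   # ~1.5 hours
target = transfer_hours(58, 30)    # ~0.55 hours

# the planned ~7 TB production move at the same rates
prod_observed = transfer_hours(7 * 1024, 11)  # ~185 hours -- over a week
prod_target = transfer_hours(7 * 1024, 30)    # ~68 hours
```

At 11 MB/s the 7 TB production migration would not even fit in a week of continuous transfer, which is why the Stage 1 stall is worth chasing down.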
Any hints or tips from the experienced? This is effectively a test setup before implementing a recovery system on our production 940s, in which we'll be moving nearly 7 TB of data. Given my findings so far, it's going to be pretty nasty.
benr.
Whoa. That's really cool.
--- "McCarthy, Tim" timothy.mccarthy@netapp.com wrote:
=== message truncated ===
What would be really cool is if you would let us copy/move qtrees between volumes. I find myself having to do that more often than I copy entire volumes.
Right now I end up doing it over the network, which seems unnecessary.
Thanks

On Thu, 6 Jan 2005, Jerry wrote:
/dev/null
devnull@adc.idt.com
Qtree SnapMirror is very useful for that.
On Thu, 6 Jan 2005 10:31:49 -0500 (EST), devnull@adc.idt.com wrote:
I would advise you to use the ndmpcopy command; it copies without passing through the network if it is a local copy. Note that ndmpcopy doesn't remember the "qtree state" of a directory, so you have to create the qtree before copying into it.
For example:
filer> qtree create /vol/vol1/myqtree
filer> ndmpcopy -sa root:mypass -da root:mypass -st text -dt text -l0 /vol/vol0/myqtree/* /vol/vol1/
With [-l { 0 | 1 | 2 }] you can make a full backup (level 0), then incrementals at two different levels if you like.
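The level semantics follow classic dump levels: a level-N pass copies what changed since the most recent pass at a strictly lower level. A small sketch of that selection rule (generic dump-level logic, not ndmpcopy's actual internals):

```python
def files_for_level(level, history, mtimes):
    """Pick files for an incremental pass at `level`.

    history: dict of previous-backup level -> completion timestamp
    mtimes:  dict of filename -> last-modified timestamp
    A level-N pass includes files modified since the most recent
    backup at a level below N; level 0 takes everything.
    """
    baselines = [ts for lvl, ts in history.items() if lvl < level]
    since = max(baselines) if baselines else 0
    return sorted(f for f, m in mtimes.items() if m > since)
```

So a weekly -l0 plus a daily -l1 always captures changes since the full, while adding -l2 captures only changes since the last -l1.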
We manage some data migrations (from old SCSI technology to newer FC) this way.
devnull@adc.idt.com wrote: