Not if you have 2.3M files...
-----Original Message----- From: owner-toasters@mathworks.com [mailto:owner-toasters@mathworks.com] On Behalf Of Linux Admin Sent: Thursday, January 06, 2005 8:13 AM To: devnull@adc.idt.com Cc: McCarthy, Tim; NetApp Toasters List Subject: Re: vol copy
qtree snapmirror is very useful for that
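For example, a one-time qtree-level pull might look like this (a rough sketch: the filer names src and dst, the volume names, and the qtree name "users" are all placeholders, and it assumes a SnapMirror license):

    dst> snapmirror initialize -S src:/vol/vol1/users dst:/vol/vol2/users

(Run on the destination; the destination qtree must not already exist.)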
On Thu, 6 Jan 2005 10:31:49 -0500 (EST), devnull@adc.idt.com devnull@adc.idt.com wrote:
What would be really cool is if you would let us copy/move qtrees between volumes. I find myself having to do that more often than I copy entire volumes.
Right now, I end up doing it over the network, which kind of seems unnecessary.
Thanks.

On Thu, 6 Jan 2005, Jerry wrote:
Date: Thu, 6 Jan 2005 07:14:10 -0800 (PST) From: Jerry juanino@yahoo.com To: "McCarthy, Tim" timothy.mccarthy@netapp.com, Paul Galjan galjan@gmail.com, Ben Rockwood
Cc: toasters@mathworks.com Subject: vol copy
Whoa. That's really cool.
--- "McCarthy, Tim" timothy.mccarthy@netapp.com wrote:
How "wide" are your directories? They play a very important role in the backup process. The more files in any given directory, the longer it takes to parse.
How about this: VOLCOPY
This only works at the volume level; it is all or nothing, and no incrementals are available... without a SnapMirror license.
You could do this (source = vol1, destination = dvol):

From the destination:

    vol restrict dvol

From the source:

    vol copy start -S vol1 dest:dvol

(Note: that is a capital -S.) This will copy the source volume, including all snapshots, to the destination volume. (If you do not want the snapshots, drop the -S.)
The catch: You must copy the whole volume and you must use a whole volume on the destination to copy to. Any information on the destination volume will be eradicated.
This will work in 6.x and 7.x, although in 7.x you can only copy between like volumes: flex to flex and traditional to traditional.
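To round out the recipe, the tail end might look like this (a sketch; vol copy status and vol online are standard ONTAP commands, and, if memory serves, filer-to-filer vol copy also requires rsh trust between the two heads):

    src>  vol copy status          watch the copy's progress
    dest> vol online dvol          the destination stays restricted after the copy; bring it online when done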
--tmac
-----Original Message----- From: Paul Galjan [mailto:galjan@gmail.com] Sent: Wednesday, January 05, 2005 10:32 PM To: Ben Rockwood Cc: toasters@mathworks.com Subject: Re: NDMP Tuning
Hey, you've always got cygwin ;-).
Seriously, though. 2.3M files is a serious number of files. I used to have a 180G home directory partition for about 300 users with only about half that number of inodes. Even with rsync, it took about 4 hours to move that guy over to the destination, even when less than 500 MB had changed.
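(For reference, the kind of rsync mirroring I mean is roughly this, with made-up host and paths:

    rsync -aH --delete /export/home/ backuphost:/export/home/

-a preserves permissions, times, and ownership, -H preserves hard links, and --delete makes the destination a true mirror. Even with under 500 MB changed, rsync still has to walk and compare the whole tree, which is where the hours go.)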
It underscores the point that block-level replication will have better performance than file-level replication, whether you end up using QSM or ndmpcopy. It really is worth it to bang on your rep to get SnapMirror for an amount you can afford. I almost guarantee it would save you orders of magnitude in replication time.
--paul

On Wed, 05 Jan 2005 18:19:48 -0800, Ben Rockwood brockwood@homestead-inc.com wrote:
Paul Galjan wrote:
Cool then.
In that case I would look at rsync and/or robocopy (in a Windows-only env). Not that rsync is a block-level protocol (it evaluates on the file level), but perhaps it would provide better performance with smaller backup windows?
Rsync is certainly a possibility. I'm afraid I'd have some problems, seeing as in this environment the filers are used CIFS-only, which makes file-level interaction for an old UNIX zealot like me less than entertaining.
To put a finer point on it: NDMP is just a wrapper around the UNIX dump command. It's no better, nor worse than it, and that's the reason I asked. The dump command (and NDMP by extension) is for backup, not DR. It is a clunky protocol in terms of straight replication, and that's why NetApp and others offer alternatives for replication...
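(For anyone following along, the straight-replication use of NDMP under discussion is ndmpcopy. A sketch of a filer-to-filer invocation, with made-up names, paths, and credentials:

    src> ndmpcopy -da root:password src:/vol/vol1 dest:/vol/vol1copy

-da supplies login credentials for the destination filer; add -sa if the source is remote as well.)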
In the end, though, we should get to your problem: how many inodes are we looking at? And what happens in Pass 4, Stage 1? The inode number would be my first suspect.
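(On the filer, something like the following answers the inode question; vol1 is a stand-in for the real volume name:

    filer> df -i /vol/vol1       inodes used and free on the volume
    filer> maxfiles vol1         the volume's current inode ceiling

Both are standard ONTAP commands.)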
Right. I haven't looked at the code itself to see exactly what it's doing (I probably should at some point), but Stage 1 of Pass IV seems to be all about inode creation prior to copying in all the data. The source volume has 2.3 million inodes in use. That doesn't seem like an outrageous number, and this is a pretty small filer all things considered. How creation of 2.3 million inodes can consume 3 hours is beyond my understanding. During that time the destination filer's CPU is nearly idle. The only explanation I can dream up is that the process of creating inodes is happening so quickly that the bulk of system time is spent in context switches, not in execution, and hence a false sense of idleness... but that's a pretty BS explanation, since even if that were the case it still wouldn't take 3 hours.
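(One way to sanity-check the idle-CPU theory during that Stage 1 window is to watch the destination with sysstat, which is standard ONTAP; the one-second interval here is arbitrary:

    dest> sysstat -x 1

The extended output shows per-second CPU, ops, and disk/network throughput, so you can at least tell whether the box is truly idle or quietly moving data.)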
All the evidence I've seen thus far with NDMP suggests that I just need to turn up the flow. Is there an idle loop in the dump code? That's exactly what it feels like. Anyone know if there is an ONTAP equivalent to truss?
benr.
--paul
On Wed, 5 Jan 2005 17:12:25 -0800, Ben Rockwood BRockwood@homestead-inc.com wrote:
Hey Paul.
Because I never said I was building a "quick disaster recovery" solution. :)
The recovery system I'm building is more of an "at least we've got another copy" solution. We don't have cash for a nearline, which leaves us in a hole. I'm looking to temporarily fill that hole by leveraging old 840s to at least keep a copy of the data on until we can one day cough up the cash for a proper nearline.

I want to use NDMP perhaps predominantly because this is what it was intended to do. SnapMirror and SnapVault are undoubtedly the better solutions, but I'd like to try to utilize NDMP rather than just give up on it as a slow, useless system of backup/recovery. If NDMP would just run at the speeds the filers are capable of, I'd be doing OK. I'm leaving SnapMirror/SnapVault off the table for now.
benr.
-----Original Message----- From: Paul Galjan [mailto:galjan@gmail.com] Sent: Wed 1/5/2005 4:52 PM To: Ben Rockwood Cc: toasters@mathworks.com Subject: Re: NDMP Tuning

Hi Ben,
I'll be the first to say that this doesn't answer your question, but
=== message truncated ===
/dev/null
devnull@adc.idt.com