What would be really cool is if you would let us copy/move qtrees between volumes. I find myself having to do that more often than I copy entire volumes.
Right now, I end up doing it over the network, which kinda seems unnecessary.
Thanks
On Thu, 6 Jan 2005, Jerry wrote:
Date: Thu, 6 Jan 2005 07:14:10 -0800 (PST)
From: Jerry <juanino@yahoo.com>
To: "McCarthy, Tim" <timothy.mccarthy@netapp.com>,
Paul Galjan <galjan@gmail.com>, Ben Rockwood <brockwood@homestead-inc.com>
Cc: toasters@mathworks.com
Subject: vol copy
Whoa. That's really cool.
--- "McCarthy, Tim" <timothy.mccarthy@netapp.com>
wrote:
How "wide" are your directories? They play a very important role in the backup process. The more files in any given directory, the longer it takes to parse.
How about this: VOLCOPY
This only works at a volume level, it is all or none, and no incrementals available... without a snapmirror license.
You could do this:
Source = vol1, Dest = dvol
From the Dest: vol restrict dvol
From the source: vol copy start -S vol1 dest:dvol (--> that is a capital -S)
This will copy the source volume, including all snapshots, to the destination volume. (If you do not want the snapshots, drop the -S.)
The catch: you must copy the whole volume, and you must use a whole volume on the destination to copy to. Any information on the destination volume will be eradicated.
This will work in 6.x and 7.x, although in 7.x you can only copy between like volumes: Flex to Flex and Traditional to Traditional.
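[A sketch of the whole sequence as it might be run from an admin host. The hostnames "src-filer" and "dst-filer" and the rsh access are placeholders/assumptions, not from the original message:]

```shell
# Sketch of the vol copy procedure described above, run from an admin host.
# Hostnames and rsh trust are assumptions; volume names follow the example.

# On the destination: restrict the target volume (its contents will be lost).
rsh dst-filer vol restrict dvol

# On the source: start the copy; -S carries all snapshots along.
rsh src-filer vol copy start -S vol1 dst-filer:dvol

# Watch progress, then bring the copy online when it completes.
rsh src-filer vol copy status
rsh dst-filer vol online dvol
```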
--tmac
-----Original Message-----
From: Paul Galjan [mailto:galjan@gmail.com]
Sent: Wednesday, January 05, 2005 10:32 PM
To: Ben Rockwood
Cc: toasters@mathworks.com
Subject: Re: NDMP Tuning
Hey, you've always got cygwin ;-).
Seriously, though. 2.3M files is a serious number of files. I used to have a 180G home directory partition for about 300 users with only about half that number of inodes. Even with rsync, it took about 4 hours to move that guy over to the destination, even when less than 500 MB had changed.
It underscores the point that block level replication will have better performance than file level replication, whether you end up using QSM or ndmpcopy. It really is worth it to bang on your rep to get snapmirror for an amount that you can afford. I almost guarantee it would save you orders of magnitude in replication time.
--paul
On Wed, 05 Jan 2005 18:19:48 -0800, Ben Rockwood
<brockwood@homestead-inc.com> wrote:
Paul Galjan wrote:
Cool then.
In that case I would look at rsync and/or robocopy (in a windows only env). Not that rsync is a block level protocol (it evaluates on the file level), but perhaps it would provide better performance with smaller backup windows?
Rsync is certainly a possibility. I'm afraid I'd have some problems, being as in this environment the filers are being used CIFS only, which makes file level interaction for an old UNIX zealot like me less than entertaining.
To put a better point on it: NDMP is just a wrapper around the UNIX dump command. It's no better, nor worse than it, and that's the reason I asked. The dump command (and NDMP by extension) is for backup, not DR. It is a clunky protocol in terms of straight replication, and that's why Netapp and others offer alternatives for replication...
In the end though, we should get to your problem: how many inodes are we looking at? And what happens in Pass 4, Stage 1? The inode number would be my first suspect.
Right. I haven't looked at the code itself to see exactly what it's doing (I probably should at some point), but Stage 1 of Pass IV seems to be all about inode creation prior to copying in all the data. The source volume has 2.3 million inodes in use. That doesn't seem like an outrageous number, and this is a pretty small filer all things considered. How creation of 2.3 million inodes can consume 3 hours is beyond my understanding. During that time the destination filer's CPU is nearly idle. The only explanation I can dream up is that the process of creating inodes is happening so quickly that the bulk of system time is spent in context switches, not in execution, and hence a false sense of idleness... but that's a pretty BS explanation, since even if that were the case it still wouldn't take 3 hours. All the evidence I've seen thus far with NDMP suggests that I just need to turn up the flow. Is there an idle loop in the dump code? That's exactly what it feels like. Anyone know if there is an OnTap equivalent to truss?
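[For scale, a quick back-of-the-envelope check of the rate those figures imply, using only the numbers quoted above:]

```shell
# Rate implied by the figures above: 2.3 million inodes in roughly 3 hours.
inodes=2300000
seconds=$((3 * 60 * 60))                  # 10800 seconds
echo "$((inodes / seconds)) inodes/sec"   # -> 212 (integer division)
```

A couple hundred inode creations per second on a nearly idle destination does look more like waiting than working.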
benr.
--paul
On Wed, 5 Jan 2005 17:12:25 -0800, Ben Rockwood
<BRockwood@homestead-inc.com> wrote:
Hey Paul.
Because I never said I was building a "quick disaster recovery" solution. :)
The recovery system I'm building is more of an "at least we've got another copy" solution. We don't have cash for a nearline, which leaves us in a hole. I'm looking to temporarily fill that hole by leveraging old 840's to at least keep a copy of the data on until we can one day cough up the cash for a proper nearline. I want to use NDMP perhaps predominantly because this is what it was intended to do. SnapMirror and SnapVault are undoubtedly the better solutions, but I'd like to try and utilize NDMP rather than just give up on it as a slow, useless system of backup/recovery. If NDMP would just run at the speeds that the filers are capable of, I'd be doing OK. I'm leaving Snapmirror/Snapvault off the table for now.
benr.
-----Original Message-----
From: Paul Galjan [mailto:galjan@gmail.com]
Sent: Wed 1/5/2005 4:52 PM
To: Ben Rockwood
Cc: toasters@mathworks.com
Subject: Re: NDMP Tuning
Hi Ben,
I'll be the first to say that this doesn't answer your question, but
/dev/null
devnull@adc.idt.com