What are your recommendations for copying data from one filer to another?
I have an estimated time of 28 hours to transfer 70 GB via a copy command on the host machine.
I have tried copying via CIFS from NT servers using drag and drop, but encountered errors that halted the copy: some files were linked to invalid locations. Unix files on the users' desktops were symlinked to other machines, and the paths were no longer valid. I did a grep to locate these linked files in the user folders and found too many to change one by one.
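Not NetApp-specific, but for the broken-link problem a bulk sweep is usually easier than changing files one by one. A minimal sketch, assuming the user volume is NFS-mounted on a Unix client at a hypothetical /mnt/users:

```python
#!/usr/bin/env python
# List broken symlinks under a tree so they can be fixed or excluded in bulk.
# /mnt/users is a hypothetical mount point for the filer's user volume.
import os

def broken_symlinks(root):
    """Yield paths of symlinks whose targets no longer resolve."""
    for dirpath, dirnames, filenames in os.walk(root):
        for name in dirnames + filenames:
            path = os.path.join(dirpath, name)
            # os.path.islink() does not follow the link; os.path.exists() does,
            # so a link is broken when the first is true and the second false.
            if os.path.islink(path) and not os.path.exists(path):
                yield path

if __name__ == "__main__":
    for link in broken_symlinks("/mnt/users"):
        print(link)
```

Redirect the output to a file and you have the exclusion (or repair) list in one pass instead of chasing grep hits by hand.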
The vol copy command is only useful when copying from volume to volume on the same filer, correct?
I am looking for the most efficient way to move data.
Any suggestions would be great!
Jessica
JESSICA A. S. FERNANDEZ
ESA-FM Facility Management
TA-16-661-101, MS-C933
Los Alamos National Laboratory
Los Alamos, New Mexico 87545
E-mail: jasf@lanl.gov
Voice: 505-665-8051  Pager: 104-6707  FAX: 505-665-9490
I would imagine the most efficient method is to use snapmirror. However, if you have the problem I do, which is wanting to copy specific data from one volume to another, you use something like ndmpcopy. My particular case is special, and ndmpcopy doesn't work for me either due to filesystem fragmentation, so rsync is my only option.
~JK
Hi Jessica,
I'm curious why you would think filesystem fragmentation would prevent the use of NDMPcopy?
NDMPcopy, by its nature, acts as a natural file system defragmenter -- that is, the system that is the destination of the NDMPcopy has pretty much an ideal file system layout for performance.
And because the NDMPcopy data stream generator is integrated into the filer, it should be able to read a fragmented file system better than rsync...
Is your experience different than my theory, or am I missing something blindingly obvious?
Stephen Manley DAM and NDMP Circus Clown
Stephen,
Actually it was me that had the problem with ndmpcopy....
Problem: The file system was fragmented due to the manner in which disks were added: we would add 1 or 2 at a time over the last 2 years. It got to the point where backups were kicked off but it would be 3+ hours before they would write to tape.
ndmpcopy: When I attempted to use ndmpcopy to split this volume up, I got the same write delay and a transfer rate of 1 GB/hr. Yes, that's ONE GB per HOUR. So, 156 hours later... Not a chance.
A NetApp SE was onsite to witness this and had no idea why it was doing this. Finally settled on rsync. Do you have any idea why it would do this?
~JK
--
Jeff Kennedy Unix Administrator AMCC jlkennedy@amcc.com
Hmmm... What version of Ontap were you using?
Specifically, was it pre-5.3.4? Was it pre-6.0?
Second, I'm guessing you had the checksum code enabled?
Third, I assume the "time before writing" was Pass I and II of dump -- mapping phases?
I may have a theory, depending on your answers to the questions above.
Sorry I confused you with Jessica. I'd make an excuse about your names starting with the same letter, but I should know better. After all, I expect people to keep all million Stephen/Steven/Steves running around NetApp straight. :)
Sorry, Stephen Manley DAM and NDMP Stephen
NP about the name. Just keeping you informed. :)
The Ontap version was/is 5.3.6R2. Not sure about the checksum code; how do I tell? Not sure about mapping phases either; the software (NetBackup 3.4) is new and I don't know enough about it yet to figure out how to watch detailed progress. But we had this problem with the old BudTool software too.
~JK
First, I think you want Ontap 6.0 or later... Steve Fong optimized code paths that I think you especially will really notice in the newer releases.
Second, I think you are also encountering the "downhill dump stream" performance issue.
Basically, the way that the BSD dump stream format works, we need to:
1) Figure out what to put on tape (no data is written out, but we do a lot of work).
2) Write the directories to tape (data goes to tape, but we need to do work on the data as it is written out, so MB/s is increasing but not at our peak).
3) Write the data out to tape (and here, we just read like crazy and write it all out to media).
So, in general, if you've got a 20 GB/hr dump, it means that the data phase (#3) is actually running faster than 20 GB/hr to compensate for the slower earlier stages.
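To make the "faster than the average" point concrete, here's the arithmetic with Jeff-like numbers (the 3-hour mapping delay is from Jeff's report; the 20 GB/hr streaming rate is an assumption for illustration, not a measured figure):

```python
# Effective dump rate when fixed-cost mapping passes precede the streaming phase.
def average_rate(volume_gb, mapping_hours, stream_rate_gb_per_hr):
    """Overall GB/hr for a dump: the mapping passes produce no output,
    then the data phase streams at stream_rate_gb_per_hr."""
    total_hours = mapping_hours + volume_gb / stream_rate_gb_per_hr
    return volume_gb / total_hours

# 70 GB volume, 3 hours of mapping before anything is written,
# data phase streaming at 20 GB/hr:
print(round(average_rate(70.0, 3.0, 20.0), 1))  # 10.8 -- the job averages barely half the streaming rate
```

So a dump that *streams* at 20 GB/hr can still *average* around 11 GB/hr once the silent mapping passes are folded in, which is why the front of the job looks so slow.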
I call it downhill dump stream performance because we start slowly but end up barrelling down the hill with no brakes. Sort of like my parents' old Hyundai Excel... ;)
A third thing to check would be to determine if you have File History enabled on NetBackup. If you do, you might try disabling it for one backup -- to see if it improves your performance. If it does, you could talk to your SE for tips on backing up with File History.
Regardless, I think you'd definitely benefit from 6.0 for dump performance, not to mention the other benefits.
Stephen Manley DAM and NDMP Saturn Owner
One other nice feature of the ndmpcopy utility is that it allows you to run a level zero and then a level 1 once services have been shut off on the source filer. This makes for a nice, almost-no-interruption transition from one system to another. One last feature to brag about is that you can ndmpcopy by tree, directory, or subdirectory.
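A sketch of that two-pass flow, with hypothetical filer names and paths; the argument order and flags follow the jndmpcopy usage quoted elsewhere in this thread, so check your own ndmpcopy's usage output before trusting them:

```
# Pass 1: level 0 baseline while users are still on the source filer.
ndmpcopy oldfiler:/vol/vol0/home newfiler:/vol/vol0/home \
         -sa root:password -da root:password -level 0

# ...shut off CIFS/NFS service on the source so nothing changes underneath...

# Pass 2: level 1 picks up only what changed since the level 0.
ndmpcopy oldfiler:/vol/vol0/home newfiler:/vol/vol0/home \
         -sa root:password -da root:password -level 1
```

The downtime window is only as long as pass 2, which copies just the churn since the baseline.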
Mike Smith Escalations Jerk
----- Original Message -----
From: "Stephen Manley" stephen@netapp.com
To: "Jeff Kennedy" jlkennedy@amcc.com
Cc: jasf@lanl.gov; toasters@mathworks.com
Sent: Friday, April 06, 2001 4:44 PM
Subject: Re: Data Copy from Filer to Filer
The last time we tried incremental ndmpcopy, it corrupted files on the destination. We were running 5.3.4R3P2 on both filers. We did one level 0 and then a level 1. When complete, the engineers showed us many files that were trashed. They appeared to be "concatenated into themselves" in random locations (my best description). We ended up rsync'ing everything to fix it.
Has anyone else seen that behavior or is it a registered bug? If so has it been fixed or is there a work-around? We love using ndmpcopy and would really like to use it incrementally.
Also, are there any plans to make the C source version of ndmpcopy have "infinite incremental" capabilities as the jndmpcopy version does? By this I mean it allows for more than 9 incremental copies.
-- Jeff
On Fri, Apr 06, 2001 at 05:57:19PM -0700, Mike Smith wrote:
Sounds like a case for Support at Netapp.
I haven't used the jndmpcopy version yet so I'm not familiar with what you describe as "infinite incremental capabilities".
Mike Smith Escalations jerk.
----- Original Message -----
From: "Jeffrey Krueger" jkrueger@qualcomm.com
To: "Mike Smith" mikesmit@netapp.com
Cc: "Stephen Manley" stephen@netapp.com; "Jeff Kennedy" jlkennedy@amcc.com; jasf@lanl.gov; toasters@mathworks.com
Sent: Friday, April 06, 2001 7:07 PM
Subject: Re: Data Copy from Filer to Filer
On Sat, 7 Apr 2001, Mike Smith wrote:
I haven't used the jndmpcopy version yet so I'm not familiar with what you describe as "infinite incremental capabilities".
I'd like this as well... do a "level 0" dump today, and then *only* do incremental dumps after that. This is "incremental" as in "what-changed-since-the-last-dump-of-any-type" vs. "differential" as in "what-changed-since-the-last-full-dump".
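A toy model of dump-level selection makes the two terms concrete (this is an illustration of the selection rule, not NetApp code; the 'i' behavior follows what Bruce and Stephen describe later in the thread, i.e. relative only to the previous level 'i'):

```python
# Toy model of BSD-dump-style level selection.
# A file is copied when its mtime is newer than the reference time
# the chosen dump level implies.

def files_to_copy(mtimes, history, level):
    """mtimes: {path: mtime}; history: list of (level, time) of past dumps.
    A numeric level-n dump copies files changed since the most recent
    prior dump at a level lower than n (or everything if there is none).
    Level 'i' copies files changed since the last level-'i' dump only."""
    if level == "i":
        prior = [t for (lv, t) in history if lv == "i"]
    else:
        prior = [t for (lv, t) in history if lv != "i" and lv < level]
    since = max(prior) if prior else -1
    return sorted(p for p, m in mtimes.items() if m > since)

history = [(0, 0)]                       # full dump at t=0
mtimes = {"a": 1}
print(files_to_copy(mtimes, history, 1))    # ['a'] -- level 1 vs the full
history.append((1, 1.5)); mtimes["b"] = 2
print(files_to_copy(mtimes, history, 1))    # ['a', 'b'] -- repeated level 1 is a differential (still vs the full)
print(files_to_copy(mtimes, history, "i"))  # ['a', 'b'] -- no prior 'i', so the first 'i' acts as a full
history.append(("i", 2.5)); mtimes["c"] = 3
print(files_to_copy(mtimes, history, "i"))  # ['c'] -- each later 'i' is an incremental vs the last 'i'
```

Repeating level 1 gives "what changed since the last full" (differential); the 'i' chain gives "what changed since the last 'i'" forever, with no 9-level ceiling.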
On Sat, Apr 07, 2001 at 11:08:31AM -0400, Brian Tao wrote:
Great clarification of terms there, Brian. =)
Here's the jndmpcopy usage:
% ./jndmpcopy test
usage: JNdmpcopy src_filer:/src/dir dst_filer:/dst/dir
       [-o options_file] [-sa none | user:password]
       [-da none | user:password] [-sport ndmp_src_port]
       [-daddr dest_ip_addr] [-dport ndmp_dest_port]
       [-level 0-9|i] [-exclude exclude_string]
       [-version ndmp_version] [-v] [-q] [-h]
Here's a snippet from README2 in the package:
-level 0-9|i The level of the dump to use. 0 is a dump of the full tree, 1 is the incremental since the level 0, level 2 is the incremental since the level 1, ...
For ONTAP 5.3 and later, level 'i' is the incremental since the last level 'i' (effectively incrementals forever).
You can get jndmpcopy at:
ftp://ftp.ndmp.org/pub/version.3/contrib/jndmpcopy.tar.gz
-- Jeff
Here are the diffs to add the "-level i" to ndmpcopy. No guarantees.
The "-level i" seems to copy the changes since the last level i, rather than since the last dump of any level (i.e., the first level i is always a level 0).
- Bruce
--
Bruce Arden  arden@nortelnetworks.com
Nortel Networks, London Rd, Harlow, England  +44 1279 40 2877
diff -c .snapshot/weekly.0/dump.c ./dump.c
*** .snapshot/weekly.0/dump.c   Tue Jun 22 19:41:05 1999
--- ./dump.c    Wed Apr 18 19:03:14 2001
***************
*** 186,193 ****
  environment[3].value = "n";
  environment[4].name = "FILESYSTEM";
  environment[4].value = opts.src_dir;
! environment[5].name = "LEVEL";
! environment[5].value = opts.level;
  environment[6].name = "EXTRACT";
  environment[6].value = opts.extract;
--- 186,199 ----
  environment[3].value = "n";
  environment[4].name = "FILESYSTEM";
  environment[4].value = opts.src_dir;
! if (strcmp(opts.level, "i") == 0) {
!     environment[5].name = "REPLICATE";
!     environment[5].value = "Y";
! }
! else {
!     environment[5].name = "LEVEL";
!     environment[5].value = opts.level;
! }
  environment[6].name = "EXTRACT";
  environment[6].value = opts.extract;

diff -c .snapshot/weekly.0/main.c ./main.c
*** .snapshot/weekly.0/main.c   Tue Jun 22 19:41:39 1999
--- ./main.c    Thu Apr 19 10:18:36 2001
***************
*** 93,99 ****
  " dest_auth_password = %s\n"
  " ndmp_src_port = %d (0 means NDMP default, usually 10000)\n"
  " ndmp_dest_port = %d (0 means NDMP default, usually 10000)\n"
! " ndmp_dump_level = %s (valid range: 0 - 9)\n"
  " ndmp_dest_ip_addr = %s (no default: user needs to override dest_filer value)\n"
  " verbosity = %s\n"
  " different_passwords = %s\n\n",
--- 93,99 ----
  " dest_auth_password = %s\n"
  " ndmp_src_port = %d (0 means NDMP default, usually 10000)\n"
  " ndmp_dest_port = %d (0 means NDMP default, usually 10000)\n"
! " ndmp_dump_level = %s (valid range: 0 - 9 or i)\n"
  " ndmp_dest_ip_addr = %s (no default: user needs to override dest_filer value)\n"
  " verbosity = %s\n"
  " different_passwords = %s\n\n",
***************
*** 166,172 ****
  opts.level[0] = **curarg; /* only recognize first digit */
  opts.level[1] = '\0';

! if (opts.level[0] < '0' || opts.level[0] > '9') {
      fprintf(stderr,"Error: Invalid level %s.\n",opts.level);
      usage();
--- 166,173 ----
  opts.level[0] = **curarg; /* only recognize first digit */
  opts.level[1] = '\0';

! if ((opts.level[0] < '0' || opts.level[0] > '9') &&
!     opts.level[0] != 'i') {
      fprintf(stderr,"Error: Invalid level %s.\n",opts.level);
      usage();
The "-level i" seems to copy the changes since the last level i, rather than since the last dump of any level (ie the first level i is always a level 0).
Exactly true. The level i NDMPcopies run completely independent of any numbered level.
The primary benefit of this: suppose you are trying to run NDMPcopies at the same time you run backups. It would be unfortunate if your regularly scheduled backup entries interacted with your NDMPcopy entries.
Of course, you can also "name" the particular version of the dump/NDMPcopy you are running, but not all NDMP tools (including NDMPcopy) support this option, yet.
Stephen Manley DAM and NDMP weakest link
On Fri, Apr 06, 2001 at 07:07:55PM -0700, Jeffrey Krueger wrote:
The last time we tried incremental ndmpcopy, it corrupted files on the destination. We were running 5.3.4R3P2 on both filers. We did one level 0 and then a level 1. When complete, the engineers showed us many files that were trashed. They appeared to be "concatenated into themselves" in random locations (my best description). We ended up rsync'ing everything to fix it.
Has anyone else seen that behavior or is it a registered bug? If so has it been fixed or is there a work-around? We love using ndmpcopy and would really like to use it incrementally.
Been there. It sounds like bug 20564, which is supposedly fixed in 5.3.5 and later.
-- Deron Johnson djohnson@amgen.com
+-- Jessica Fernandez jasf@lanl.gov once said:
| The vol copy command is only useful when copying from volume to volume on
| the same filer, correct?
It can be used from one filer to another. I have used this in the past to migrate data between filers we were upgrading, like this (this was NFS; I can't speak to CIFS since I don't do Windows):
1. Re-export the old filer's directories read-only.
2. Start a vol copy from the old filer to the new filer.
3. Export the new filer's volume rw.
4. Update client mount points to the new filer, then unmount and remount client partitions (it was easier to reboot in our case).
5. When no clients are still accessing the old filer, you're done.
I found the vol copy went pretty fast. I can't recall exactly, but I think we migrated about 100 GB in a couple of hours (like 3ish). This was over a gigabit link, however. I believe this is a very efficient way of copying data if the volume is pretty full, because it's doing a low-level transfer. I'm not sure if it's as efficient if the volume is sparsely populated. Someone on this list will know better than I.
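For reference, the filer-console side of step 2 looks roughly like this (filer and volume names are hypothetical, and I'm going from memory of the vol copy requirements, so verify against the vol copy man page for your Data ONTAP release; in particular, filer-to-filer vol copy needs rsh trust between the two filers, and the destination volume must be restricted and at least as large as the source):

```
newfiler> vol create vol1 14          # destination volume, sized >= the source
newfiler> vol restrict vol1           # vol copy writes only to a restricted volume
newfiler> vol copy start oldfiler:vol1 vol1
newfiler> vol copy status             # watch the transfer
newfiler> vol online vol1             # bring it up when the copy completes
```

If memory serves, vol copy start can also carry the source's snapshots along (a -S flag), which no file-level copy can do; again, check the man page before relying on it.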
Good luck.
Oz