Aleksandr Rainchik wrote:
I have a question: what would be the best way to transfer 20-25 GB of data (lots of small files) from UNIX to NetApp?
Can I do ufsdump on UNIX, pipe it through rsh to NetApp and do restore there?
The answer is: yes, you can. And despite all the other suggestions, this would be my preferred method, both for performance and for transparency.
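Something along these lines should do it, assuming a Solaris source host and a filer named toaster (the name and paths are only examples); the filer-side restore options can vary with the Data ONTAP release, so check the filer's restore documentation for the exact flags:

    # level-0 ufsdump of the source filesystem, streamed over rsh to
    # the filer's restore command (rsh access for this host must be
    # enabled on the filer, e.g. via its /etc/hosts.equiv)
    ufsdump 0f - /export/home | rsh toaster restore rf -

The data moves as one sequential stream instead of millions of individual file operations, which is where the performance win comes from.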
My second choice would be ufsdump piped to ufsrestore running on an NFS client.
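Roughly like this, assuming the filer is already NFS-mounted at /mnt/netapp on the client (mount point and paths are examples):

    # run ufsrestore in the destination directory on the NFS mount;
    # "rf -" restores the whole dump image read from standard input
    ufsdump 0f - /export/home | ( cd /mnt/netapp/home && ufsrestore rf - )
    # ufsrestore's r function leaves a restoresymtable file behind;
    # it can be removed once the copy has been verified
    rm /mnt/netapp/home/restoresymtable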
Some folks have mentioned using tar, find|cpio, and cp. Here are the problems with these methods:
tar -- has a limit on pathname length (100 characters in the original header format, roughly 255 with the ustar extension). You can't copy an arbitrarily deep directory tree or one with very long filenames.
find|cpio -- Doesn't work for filenames with embedded newlines (yes, a newline can appear in a filename). There is also a limit on pathname length, though a more generous one than tar's. GNU find has -print0 and GNU cpio has -0 to get around the newline problem; see the find|cpio sketch below.
cp -r -- follows symlinks instead of preserving them. GNU's cp -a appears to solve this problem, though I don't know whether cp -a preserves special files, or whether it has a pathname length limitation of its own. Unix itself limits a single pathname argument to 1024 characters (PATH_MAX), so unless cp takes care to avoid this limit (by changing directory and using relative pathnames), it cannot copy arbitrarily deep directory trees.
Note the pathname limit in Unix does not prevent you from creating extremely deep directory trees. It's actually quite simple:
    i=0
    while [ $i -lt 5000 ]
    do
        mkdir x
        cd x
        i=`expr $i + 1`
    done
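If you do take the GNU find|cpio route mentioned above, a newline-safe copy in cpio's pass-through mode looks roughly like this (source and destination paths are examples):

    # -print0 and -0 pass NUL-terminated names, so embedded newlines
    # are handled; -p is cpio's pass-through (copy) mode, -d creates
    # leading directories, -m preserves modification times
    cd /export/home
    find . -depth -print0 | cpio -0 -p -d -m /mnt/netapp/home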
So dump|restore is preferable because (as far as I know) it has no pathname length limitation, preserves symlinks and special files, and handles any legal filename.
Of course the other methods only fail in unusual circumstances. You can still use them if you check your source directory for problems first.
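For example, a quick pre-flight pass along these lines (the source path and the 100-character threshold are only illustrative) will flag the names that give tar, cpio, and cp -r trouble:

    cd /export/home
    # pathnames long enough to worry older tar formats
    find . -print | awk 'length($0) > 100'
    # symlinks, which cp -r would follow rather than preserve
    find . -type l -print
    # filenames containing an embedded newline (printf builds a
    # pattern with a literal newline in the middle)
    find . -name "`printf '*\n*'`" -print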
Steve Losen scl@virginia.edu phone: 804-924-0640
University of Virginia ITC Unix Support