Aleksandr Rainchik wrote:
I have a question: what would be the best way to transfer 20-25 GB of data (lots of small files) from UNIX to a NetApp?
Can I do a ufsdump on UNIX, pipe it through rsh to the NetApp, and do a restore there?
The answer is: yes, you can. And despite all the other suggestions, this would be my preferred method, both for performance and for transparency.
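Something along these lines (the filer name and paths are just placeholders, and the restore option letters are from memory - check the restore man page for your ONTAP release):

    # Level-0 ufsdump of the Solaris filesystem to stdout, piped over rsh
    # into ONTAP restore, which reads the dump stream from stdin (f -)
    # and unpacks it under the named directory on the filer (D).
    ufsdump 0f - /export/data | rsh toaster restore rfD - /vol/vol0/data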
My second choice would be ufsdump piped to ufsrestore running on an NFS client.
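That would look something like this, run on a Solaris client with the filer's destination volume NFS-mounted (again, names and paths are only illustrative):

    # Mount the destination volume from the filer, cd into it, then
    # restore the ufsdump stream into the current directory.
    mount -F nfs toaster:/vol/vol0/data /mnt/toaster
    cd /mnt/toaster
    ufsdump 0f - /export/data | ufsrestore rf -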
The formats used by Solaris ufsdump/ufsrestore and ONTAP dump/restore are intended to be compatible. I've had some trouble in the past feeding ONTAP dumps to Solaris ufsrestore (I got spurious errors when restoring certain large files with multiple holes, some of which were at odd multiples of 4K), but never with feeding Solaris dumps to ONTAP restore.
The preservation of holes is one of the advantages of using this method. Of course, one has to allow for holes being at 8K granularity on Solaris ufs and 4K granularity in WAFL.
Maybe you haven't got any symbolic links to worry about, but if you have:
1. Solaris ufsrestore restores the owner and group of symlinks [this behaviour is only a year or two old: other programs descended from BSD dump may well not do this]. It doesn't restore their timestamps, nor can any copying method based on front-door use of NFS to write the files.
2. ONTAP restore restores the times as well as the owner and group!
Oh, and if you use ONTAP restore into a volume with quota control on, the inode counts can end up wrong (symlinks are counted twice). This is bugid 23326. "quota off" then "quota on" will fix it.
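That is, once the restore has finished (the volume name is just a placeholder; on older single-volume releases the argument may not be needed):

    # Cycle quotas on the affected volume so the inode counts are recomputed.
    rsh toaster quota off vol0
    rsh toaster quota on vol0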
Chris Thompson
Email: cet1@ucs.cam.ac.uk
Phone: +44 1223 334715
University of Cambridge Computing Service,
New Museums Site, Cambridge CB2 3QG, United Kingdom.