Jordan,

For quite some time now, if you clone with the 'clone' command, you create sis-clones that do not depend on a snapshot (as opposed to 'lun clone' or 'vol clone'...). Therefore no splitting is necessary, nor will you run into the 255-snapshot limit.
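
For example, on a 7-mode filer a per-file clone looks roughly like this (the paths are made up; check 'clone help' on your ONTAP version for the exact options):

    filer> clone start /vol/vol1/src/data.vmdk /vol/vol1/copy/data.vmdk
    filer> clone status vol1

The clone shares its blocks with the source file, so only blocks that later get modified consume new space.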

OTOH, since you'd have to clone file by file (there's no directory cloning...), it could get tedious, and you might run into the whole WAN traffic problem.
It might just be faster to 'ndmpcopy' the directory (one command, executed on the filer, negligible WAN traffic), perhaps followed by a dedupe run to reclaim the space. It would have (at least) the same space effect as the file cloning, but with far less network traffic, and might therefore be not only simpler but also faster.
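
A rough sketch of that approach (volume and directory names are made up; ndmpcopy needs NDMP enabled on the filer, and the dedupe pass assumes sis is licensed/enabled on the volume):

    filer> ndmpd on
    filer> ndmpcopy /vol/vol1/projects/src /vol/vol1/projects/dst
    filer> sis on /vol/vol1
    filer> sis start -s /vol/vol1

The '-s' flag makes the dedupe run scan the existing data on the volume, so it picks up the duplicate blocks the copy just created.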

My 2c


On 10/21/2013 8:15 PM, Jordan Slingerland wrote:

It sounds to me like that is the best tool for the job.

The only gotcha I can think of is that these clones either need to be deleted or split at some point, or they will just grow indefinitely with changes. Also, you are limited to 255 snapshots per volume, and each FlexClone is going to use a snapshot.

From: toasters-bounces@teaparty.net [mailto:toasters-bounces@teaparty.net] On Behalf Of Arnold de Leon
Sent: Friday, October 18, 2013 8:30 PM
To: toasters
Subject: Re: Slow copy of a directory full of files via an NFS client across a WAN

Summary

It appears that our best bet is to use the NetApp APIs and the "clone" commands. This would be most efficient, since the data never leaves the filer. The "copies" don't take any additional space (except for the metadata) until they get modified.

We need to do a little more research and testing.

Thanks.



_______________________________________________
Toasters mailing list
Toasters@teaparty.net
http://www.teaparty.net/mailman/listinfo/toasters