I used to replicate ~10 million files (a 10TB volume) for a roaming profile repository via SnapMirror about 5 years ago. Worked like a charm, and that was 7-Mode. We would SnapMirror to another filer, then do an NDMP dump of the secondary volume to a VTL.
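For anyone wanting the concrete 7-Mode syntax, that workflow looks roughly like the sketch below (filer and volume names are hypothetical; the tape step would normally be driven by NDMP backup software, with the local dump command as the CLI analogue):

    # On the destination filer: restrict the target volume, then seed the mirror
    vol restrict mirror_vol
    snapmirror initialize -S srcfiler:profiles_vol dstfiler:mirror_vol

    # Incremental updates thereafter (or schedule via /etc/snapmirror.conf)
    snapmirror update -S srcfiler:profiles_vol dstfiler:mirror_vol

    # Level-0 dump of the (read-only) secondary volume to a tape/VTL device
    dump 0uf rst0a /vol/mirror_vol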
On 8/22/16, 9:59 AM, "Sebastian Goetze" <spgoetze@gmail.com> wrote:
In 7-Mode, SnapVault logically walks the whole filesystem to find changed data, so dedupe, for example, isn't 'seen' by SnapVault at all. SnapMirror, on the other hand, just looks at the blocks and doesn't care how many files the filesystem contains.
In High File Count (HFC) situations (or with highly dedupeable data), I always advise using SnapMirror if at all possible.
It transfers the data deduped (and compressed, if the source is compressed) and can also compress on the wire (don't do this if your source data is already compressed...).
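In 7-Mode, on-the-wire compression is enabled per relationship in /etc/snapmirror.conf, and if I remember the syntax right, it requires a named connection entry rather than a bare hostname. A hypothetical example (schedule fields are minute, hour, day-of-month, day-of-week):

    # /etc/snapmirror.conf on the destination filer
    # Named connection between source and destination (required for compression)
    conn1=multi(srcfiler,dstfiler)
    # connection:src_vol   destination:vol       options              min hr dom dow
    conn1:profiles_vol     dstfiler:mirror_vol   compression=enable   0 23 * *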
And yes, you could continue the replication chain with SnapVault (e.g. locally on the secondary, keeping only weeklies, but going further back). This could offset the extra storage you might need on the primary (e.g. if you don't keep weeklies on the primary at all at the moment).
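A rough 7-Mode sketch of that cascade (all names hypothetical; note that SnapVault operates on qtrees, so you'd vault qtrees inside the mirrored volume):

    # On the vault system: baseline a qtree from the SnapMirror destination
    snapvault start -S dstfiler:/vol/mirror_vol/profiles /vol/vault_vol/profiles

    # Keep 52 weekly snapshots on the vault, taken Sundays at 23:00
    snapvault snap sched -x vault_vol sv_weekly 52@sun@23

Extra retention on the primary itself would just be the regular snapshot schedule (snap sched volname weeks days hours).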
My 2c
Sebastian
On 8/22/2016 6:35 PM, Ray Van Dolson wrote:
> We wanted to use SnapVault to protect a volume containing 70+ million
> files (probably also around 30TB of data, though it dedupes down to
> less than 6TB). However, it appears that with SnapVault a full file
> scan is performed prior to the block-based replication, and that scan
> can take around 24 hours. I'm assuming it will do this on subsequent
> differential vaults too; even though the block transfer part should be
> much shorter, we'll still need to wait for the file scan to complete.
>
> As we'd like to "back up" this data at least once a day, would we be
> better positioned by using SnapMirror? My belief is that it does *not*
> scan all of the files first and simply replicates changed blocks.
>
> We'd need to keep more snapshots on the source storage to meet our
> retention requirements (or maybe further replicate the volume on the
> destination side?).
>
> Thanks,
> Ray
_______________________________________________
Toasters mailing list
Toasters@teaparty.net
http://www.teaparty.net/mailman/listinfo/toasters