Once the "start" command is run, do I basically have a working snapmirror set up between the source and target volumes? Can it be tuned for sync time like a regular snapmirror?
Once the "complete" command is run, is the source vfiler destroyed? If so, is there a way to not do that? Ideally I'd like the source to stay where it is but remain stopped.
OK, I did some reading in the man pages. It looks like "vfiler dr -s" is closer to what I'm picturing, where the target is kept current with the source in real time. Does anyone know what the performance overhead is like? Can it be throttled? What licensing is needed?
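For comparison, the DR variant is driven from the destination head. A hedged sketch (exact flags vary by release, and I believe it rides on volume SnapMirror, so a SnapMirror license plus MultiStore would be needed on both heads; check the vfiler(1) man page before trusting any of this):

```
# Run on the destination filer: create a DR copy of vfiler "x"
dst> vfiler dr configure x@src-filer

# Check mirror state / lag for the DR relationship
dst> vfiler dr status x@src-filer

# In a disaster (or planned cutover), bring the DR copy online
dst> vfiler dr activate x@src-filer
```

The "-s" switch mentioned above presumably selects synchronous updates, but I have not verified that against the man page, so treat it as an open question.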
-----Original Message----- From: Gelb, Scott [mailto:sgelb@insightinvestments.com] Sent: Tuesday, March 13, 2012 8:08 AM To: Randy Rue Cc: toasters@teaparty.net Subject: RE: migrate a vfiler between different HA clusters?
Yes... -nocopy was formerly called SnapMover but is now included without a license. Without -nocopy, SnapMirror is used. The target is running a higher ONTAP release and meets the SnapMirror rules for the mirror. There is no guarantee on cutover time, but it will work. I would test how long a SnapMirror update of every volume takes and keep the mirrors as current as possible... then, when the vfiler is idle and lag is low, run the vfiler migrate complete.
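For reference, the keep-the-lag-low-then-cut-over sequence on the 7-mode console might look roughly like this (volume and filer names are placeholders; syntax is from memory, so verify against snapmirror(1) and vfiler(1)):

```
# On the destination: refresh each mirror until lag is small
dst> snapmirror update -S src-filer:vol_data1 vol_data1
dst> snapmirror update -S src-filer:vol_data2 vol_data2
dst> snapmirror status          # watch the Lag column

# When lag is low and the source vfiler is quiet, finish the move
dst> vfiler migrate complete x@src-filer
```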
-----Original Message----- From: toasters-bounces@teaparty.net [mailto:toasters-bounces@teaparty.net] On Behalf Of Randy Rue Sent: Monday, March 12, 2012 4:09 PM Cc: toasters@teaparty.net Subject: Re: migrate a vfiler between different HA clusters?
OK, did a little reading. I've used vfiler migrate in the past but with the -nocopy switch and between nodes of the same HA cluster. I also didn't use the start and complete arguments.
Looks like if I set up target volumes with the same names as the source, and have snapmirror licensed on both clusters, this might work. Will it work between a 7.3.5 source and a 8.0.2 target?
The destination is on a new subnet. When I run the complete argument will the vfiler come up OK but with bad network settings?
----- Original Message ----- From: "Randy Rue" rrue@fhcrc.org Cc: toasters@teaparty.net Sent: Monday, March 12, 2012 3:45:24 PM Subject: Re: migrate a vfiler between different HA clusters?
vfiler migrate works between separate physical NetApp clusters? In different datacenters? How long would the outage be to move a 10TB vfiler?
----- Original Message ----- From: "Scott Gelb" sgelb@insightinvestments.com To: "Randy Rue" rrue@fhcrc.org, toasters@teaparty.net Sent: Monday, March 12, 2012 3:18:05 PM Subject: RE: migrate a vfiler between different HA clusters?
You can use "vfiler migrate" from the command line. It does not guarantee the 120-second cutover that Data Motion for vFilers does, but it will create the target vFiler. You have to pre-create the volumes and ipspaces on the target system first, then issue the commands on the target. ONTAP 8.1 (not GA yet) supports different NVRAM sizes (bigger to smaller) as well as faster-to-slower disk technologies (for example, SAS to SATA), which 7.3.x did not, as you mention below.
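A hedged sketch of that manual prep on the destination head (ipspace, aggregate, volume names, and sizes are all hypothetical; consult vfiler(1) for the exact syntax on your release):

```
# Pre-create the ipspace and matching-name volumes on the target
dst> ipspace create ips_x
dst> vol create vol_data1 aggr0 500g
dst> vol restrict vol_data1     # SnapMirror destinations must be restricted

# Then kick off the baseline transfers and build the target vfiler
dst> vfiler migrate start x@src-filer
```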
Your choices are vfiler migrate manually, or wait to upgrade to 8.1 (assuming you meet the other data motion criteria besides nvram).
-----Original Message----- From: toasters-bounces@teaparty.net [mailto:toasters-bounces@teaparty.net] On Behalf Of Randy Rue Sent: Monday, March 12, 2012 3:01 PM To: toasters@teaparty.net Subject: migrate a vfiler between different HA clusters?
Hello All,
We have a vfiler "x" currently sitting on one node of a v3170 HA pair. We want to move it to a new 3240 HA pair.
Looks like the automagic tools that NetApp offers for this can't work for us because a) the destination filer has less NVRAM than the source, and b) the source is at 7.3.5 and the destination 8.0.2.
We could create a new vfiler on the destination and snapmirror every volume, but we don't use qtrees. The dozen or two volumes on the source vfiler contain only default qtrees, and (if I understand this correctly) those can't be snapmirrored directly, because a qtree snapmirror first deletes the destination qtree and then recreates it, and deleting the default qtree (qtree0) of a volume deletes the volume itself. In other words, this path would require creating a qtree below the default, leaving an extra folder in the path on the destination, and we'd need to touch every client that connects to the export.
The new HA pair is in a different subnet than the source. So the IP address of the vfiler will change. But ideally nothing else should.
Anybody got any ideas for how to do this, short of creating a new vfiler from scratch, using rsync to move all the data via NFS, and re-assigning DNS names and AD memberships during a switchover outage?
_______________________________________________ Toasters mailing list Toasters@teaparty.net http://www.teaparty.net/mailman/listinfo/toasters