Thanks Sebastian. I actually already checked that with a dry run, and it complained that the volume I'm moving is read-only and snapmirrored.
One thing I noticed is that the bandwidth available through the prod interface is a lot higher than the loopback: a snapmirror cascade of another similarly sized volume to the other head finished overnight. What about using a connection definition line in snapmirror.conf to send the traffic through an interface other than the loopback, as sketched below? Or maybe setting snapmirror.volume.local_nwk_bypass.enable to off?
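Something like this is what I had in mind; the connection name, interface hostname and volume names here are made up, so treat it as a sketch rather than a tested config:

    # /etc/snapmirror.conf on the head doing the local cascade.
    # Named connection that pins the transfer to the prod interface
    # (filer-e0a here) instead of the loopback:
    prodnet = multi(filer-e0a, filer-e0a)
    # Use the connection name in place of the source filer name;
    # "- - - - -" means default arguments and no schedule (manual updates):
    prodnet:oldmirror filer:newmirror - - - - -

And/or, if I understand the option correctly, forcing local transfers onto the network stack:

    options snapmirror.volume.local_nwk_bypass.enable off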
From: Sebastian Goetze [mailto:spgoetze@gmail.com]
Sent: January-18-17 3:49 PM
To: BERNTSEN Basil (EXT) ResgGtsInt; toasters@teaparty.net
Subject: Re: Issue moving large snapmirror destination
Hi Basil,
why not just 'move' the volume, if it's connected to the same head?
vol move start ndmsrcvol dstaggr [ -k ] [ -m | -r num_cutover_attempts ] [ -w cutover_window] [ -o] [ -d ]
Starts the vol move of the volume named ndmsrcvol to the destination aggregate named dstaggr. The execution sequence starts with a series of checks on the controller, the source volume, and the source and destination aggregates. If all the checks are successful, the move starts with the Setup phase, in which a placeholder volume is created in the destination aggregate and a baseline transfer from the source to the destination volume is initiated. This is followed by the Data Copy phase, wherein the destination volume requests successive snapmirror updates from the source volume to synchronize itself completely with it. Finally, the move completes with the cutover phase.

By default, vol move initiates cutover automatically, unless invoked with the optional -m flag, which disables automatic cutover. With the -m option, vol move continues to trigger snapmirror updates from the source volume, and the user can initiate cutover at any time with the vol move cutover command. The duration of the cutover window can be specified with the -w option; the minimum, default and maximum values for the cutover window are 30, 60 and 300 respectively. The number of cutover attempts is set with the optional -r; the minimum, default and maximum values for cutover attempts are 1, 3 and 25. If the user has not specified the -m option and cutover cannot be completed in the specified number of attempts, vol move will pause; the user may then either abort or resume the vol move, with or without the -m option. After a successful move, the source volume is destroyed by default, unless the move was started with the -k option.
Before executing cutover, vol move performs a series of checks, similar to those during the initialization phase, to verify that conditions are favorable for cutover. If any check fails, vol move pauses with an EMS message indicating the exact reason; the user may wait for the unfavorable event to pass and resume the vol move afterwards.

The -o option ignores the redundancy characteristics of aggregates in a MetroCluster environment when a vol move is initiated from a mirrored source aggregate to an unmirrored destination aggregate. In other words, without -o, vol move will not start when the redundancy characteristics of the two aggregates differ; started with -o, it will pause before entering cutover if they differ.

The -d option performs a dry run: when issued with this option, vol move only runs the series of checks without starting the move, and displays appropriate error messages if any check fails.
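Putting that together, a run could look like this (volume and aggregate names are placeholders, so adjust to your environment):

    # Dry run: perform the checks only, nothing is moved
    vol move start mirrvol aggr_new -d

    # Start the move with -m (no automatic cutover) and -k (keep the
    # source volume around until you're happy with the result):
    vol move start mirrvol aggr_new -k -m

    # At a convenient time, trigger the cutover manually; -w should
    # also be accepted here for the cutover window, as with start:
    vol move cutover mirrvol -w 120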
It should be way faster than a SnapMirror and you can prep and do the cutover at a specified time...
Hope that helps
Sebastian

On 1/18/2017 8:47 PM, BERNTSEN Basil wrote:

Hi folks, I'm managing a 7-mode system (8.1) with a snapmirror destination of 35TB. I need to move it to a new aggregate on the same head, and I really want to avoid rebaselining from prod. I've tried a snapmirror cascaded off the normal destination, but it will likely take 4 days to complete over the internal loopback, and while it's running, scheduled snapmirror updates to the normal destination volume don't run. I also tried a vol copy, but that seems to require keeping a CLI session open for the whole transfer.
Once it is copied with snapshots to the new aggregate, I'm going to change the source in the snapmirror.conf file. Does anyone have any ideas about how I could move this?
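For reference, the snapmirror.conf change I have in mind is roughly this, with all hostnames and volume names hypothetical:

    # During the local copy, the new volume mirrors the old destination:
    desthead:oldmirror desthead:newmirror - - - - -
    # Afterwards, re-point the new volume at prod so the scheduled
    # updates resume against the original source:
    prodhead:srcvol desthead:newmirror - 0 * * *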
Thanks!
Basil