How long did it take to reach the 60% point you mention?  Some quick math says 30TB at 400MB/s should complete in about 22 hours.  If you're 60 percent done and you have a day and a half, then it should complete before tomorrow night, right?
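For what it's worth, the back-of-envelope arithmetic behind that estimate looks like this (a sketch, assuming a sustained 400 MB/s and decimal units; binary units, 2^40 bytes per TB, push the full-move figure up to roughly 22 hours):

```python
# Sanity-check the transfer-time estimate from the thread.
# Assumptions (not measured on the cluster): decimal units,
# i.e. 30 TB = 30e12 bytes and 400 MB/s = 400e6 bytes/s, sustained.

TOTAL_BYTES = 30e12      # 30 TB volume
RATE = 400e6             # 400 MB/s replication throughput
DONE_FRACTION = 0.60     # move is about 60% complete

full_move_hours = TOTAL_BYTES / RATE / 3600
remaining_hours = TOTAL_BYTES * (1 - DONE_FRACTION) / RATE / 3600

print(f"full move: {full_move_hours:.1f} h")   # ~20.8 h (decimal TB)
print(f"remaining: {remaining_hours:.1f} h")   # ~8.3 h at 60% done
```

So even the remaining 40% should land comfortably inside a day and a half, if the source aggregate can sustain that rate.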

On Wed, Aug 8, 2018 at 12:49 PM, Douglas Siggins <siggins@gmail.com> wrote:
I was thinking throttle, but forgot the exact command.

Yes 400 MB/s is typically what I see with 10G.

I'm thinking it'll make you cancel the vol move (without an override), but either way, just make sure it's not finalizing. I've been in situations where vol moves have not cut over properly for one reason or another, and I/O on the source stops.


This does mention vetoes ....




On Wed, Aug 8, 2018 at 1:35 PM Ian Ehrenwald <Ian.Ehrenwald@hbgusa.com> wrote:
Hi Douglas
Thanks for writing.  If I am understanding that governor correctly, that is for the number of concurrent moves?  In this specific instance, I'm moving one volume of size 30+ TB, so I don't think it is entirely applicable for the situation.  Definitely correct me if I'm wrong, though.

That being said, when the source aggregate is not under high load, I am able to get 400MB/s or higher in replication throughput which is pretty cool.

There IS a SnapMirror speed throttle that I would bump up against occasionally, and that was addressed with a "setflag repl_throttle_enable 0" locally on each node while in diag mode.  That really did make a difference in SnapMirror speed, enough of a difference that Jeffrey Steiner @ NetApp did some poking around internally to see why it's enabled in the first place.  I don't recall the outcome of that poking.

Either way, I guess we'll find out what happens when there's an SFO while a volume move is happening?


________________________________________
From: Douglas Siggins <siggins@gmail.com>
Sent: Wednesday, August 8, 2018 1:11:31 PM
To: Ian Ehrenwald
Cc: toasters@teaparty.net
Subject: Re: volume move and SFO

Ian,

Just a suggestion (it's been a while, but I think this is how I removed the throttle in 9.1):
volume move governor*> ?
  modify                      *Modify the governor configuration
  show                        *Display the governor configuration
https://community.netapp.com/t5/Data-ONTAP-Discussions/How-many-vol-move-operations-can-be-active-at-same-time/td-p/129331

You should be able to move that data pretty quickly. I've noticed that after upgrading to 9.1 the throttle is definitely more visible -- even with the throttle removed, there isn't a noticeable impact.

I would suggest against keeping the vol move running during the takeover, if it's even possible.

On Wed, Aug 8, 2018 at 10:33 AM Ian Ehrenwald <Ian.Ehrenwald@hbgusa.com> wrote:
Good morning
I have a four node cluster; nodes 1/2 are SAS/SATA and nodes 3/4 are AFF.  I have a long running volume move going from node 1 to node 4.  Long running, like 30TB+, and it's about 60% done.  I need to do some hardware maintenance on nodes 1 and 2 tomorrow evening (install additional FlashCache cards).  Will a takeover of node 2 by node 1, then a takeover of node 1 by node 2, interrupt this volume move?  I can't seem to find much in the way of documentation about what happens during an SFO while a volume move is in flight, but it's possible I'm just not looking hard enough.  Thanks for any insights.


Ian Ehrenwald
Senior Infrastructure Engineer
Hachette Book Group, Inc.
1.617.263.1948 / ian.ehrenwald@hbgusa.com


_______________________________________________
Toasters mailing list
Toasters@teaparty.net
http://www.teaparty.net/mailman/listinfo/toasters
