Here’s a repost, with a link to the TR rather than the actual pdf.
https://www.netapp.com/us/media/tr-4075.pdf
Francis Kim Cell: 415-606-2525 Direct: 510-644-1599 x334 fkim@berkcom.com www.berkcom.com
On Aug 11, 2018, at 12:02 PM, Francis Kim <fkim@BERKCOM.com> wrote:
Ian,

<tr-4075 DataMotion for Volumes NetApp clustered Data ONTAP 8.2 and 8.3.pdf>

With recurring controller and shelf refreshes, I've become a frequent vol mover (and a fan of it) over the last two years. The NDO aspect of this task is obviously a big draw.
The attached TR-4075, dated March 2015, has the most detailed explanation of vol moves I've been able to find, but its discussion is limited to 8.2 vs. 8.3. There's nothing about ONTAP 9 in this TR, so YMMV.
In this TR much is made of the two phases of a vol move, the iterative phase (baseline and updates) and the cutover phase, with respect to which other operations are (in)compatible while a vol move is in progress.
During the iterative phase, a vol move in 8.2 would have to be restarted after a FO/GB, while in 8.3 it would resume from its most recent checkpoint.
However, once the cutover phase has been entered, a vol move in 8.2 would survive a FO/GB if it had crossed its “point of no return” checkpoint, while an 8.3 vol move is mutually exclusive with a FO/GB, suggesting a cutover would have to be reattempted afterward.
Under 9.1 I've not been able to find (even in diag mode) any information specific to these checkpoints. I'm not sure whether "Bytes sent" is a checkpoint.
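For my own notes, the TR's matrix boils down to something like this toy sketch. This is purely my reading of TR-4075, not ONTAP behavior or output; the phase names and return strings are my own labels:

```python
def vol_move_after_fo_gb(release: str, phase: str,
                         past_point_of_no_return: bool = False) -> str:
    """Toy model of TR-4075's vol move vs. FO/GB matrix (my reading, not ONTAP)."""
    if phase == "iterative":
        # 8.2 restarts the transfer from scratch; 8.3 resumes from its checkpoint.
        return "restarts" if release == "8.2" else "resumes from checkpoint"
    if phase == "cutover":
        if release == "8.2":
            # 8.2 survives a FO/GB only past the "point of no return" checkpoint.
            return "survives" if past_point_of_no_return else "cutover reattempted"
        # 8.3 cutover is mutually exclusive with a FO/GB.
        return "cutover reattempted"
    raise ValueError(f"unknown phase: {phase}")
```

Whether any of this still holds under ONTAP 9 is exactly the open question.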
Since documentation on vol moves is generally skinny and this TR is now over three years and five releases old, a lab run is probably a good move if you have access to gear.
Francis Kim Cell: 415-606-2525 Direct: 510-644-1599 x334 fkim@berkcom.com www.berkcom.com
On Aug 10, 2018, at 2:55 AM, Ian Ehrenwald <Ian.Ehrenwald@hbgusa.com> wrote:
Based on a Justin Parisi article that Douglas pointed out earlier in the thread, I'm going with the assumption that the cluster would veto the giveback if a move was in progress.
I actually just got a new FAS2720 HA pair for my lab this week, so when it's set up and I have a bunch of junk data and volumes on it for testing, I will see what happens if we were to do a move, takeover, and giveback in Real Life (tm). I can't make any promises on an ETA for an answer, but as soon as I do it I'll send an update.
________________________________________ From: RUIS Henk <henk.ruis@axians.com> Sent: Friday, August 10, 2018 4:28:30 AM To: Ian Ehrenwald; Mike Gossett Cc: Toasters Subject: RE: volume move and SFO
Hi,
Great, but now we still don't know if it's possible to move a volume during maintenance ;-(
Met vriendelijke groet / Kind regards,
Henk Ruis Technical Consultant
-----Original Message----- From: toasters-bounces@teaparty.net <toasters-bounces@teaparty.net> On Behalf Of Ian Ehrenwald Sent: Thursday, August 9, 2018 18:13 To: Mike Gossett <cmgossett@gmail.com> CC: Toasters <toasters@teaparty.net> Subject: Re: volume move and SFO
Good afternoon. Update, as promised. The volume move completed early this morning, so there will be no conflict with our maintenance tonight. Inline compaction is AWESOME - it has saved us over 16TB on this new SSD aggregate. Crazy stuff.
________________________________________ From: Mike Gossett <cmgossett@gmail.com> Sent: Wednesday, August 8, 2018 2:37:35 PM To: Ian Ehrenwald Cc: Douglas Siggins; Toasters Subject: Re: volume move and SFO
Hi Ian,
The good news is that, forgetting about what it is estimating, we've seen that 21TB has been copied in 24 hours. Hopefully another 30 hours or whatever is sufficient for the remaining 9.5TB - thanks for sharing. I'm interested to know the result of the SFO - but if it was me I'd try to push the maintenance back to be safe. Or open a ticket with support and see if they can tell you what to expect.
On Aug 8, 2018, at 1:29 PM, Ian Ehrenwald <Ian.Ehrenwald@hbgusa.com> wrote:
Hi Mike, Somewhat redacted output from a few minutes ago:
MyCluster1::> volume move show -instance
Vserver Name: mySvm
Volume Name: aVeryLargeVolume
Actual Completion Time: -
Bytes Remaining: 9.44TB
Destination Aggregate: aggr_ssd_3800g_c1n4
Detailed Status: Transferring data: 20.96TB sent.
Estimated Time of Completion: Thu Aug 09 01:58:40 2018
Managing Node: Clus1-Node1
Percentage Complete: 68%
Move Phase: replicating
Estimated Remaining Duration: 11:45:25
Replication Throughput: 233.9MB/s
Duration of Move: 23:47:43
Source Aggregate: aggr_sas_600g_c1n1
Start Time of Move: Tue Aug 07 14:25:38 2018
Move State: healthy
Is Source Volume Encrypted: false
Encryption Key ID of Source Volume: -
Is Destination Volume Encrypted: false
Encryption Key ID of Destination Volume: -
MyCluster1::>
Depending on source filer load, I've seen Replication Throughput anywhere from 25MB/s to 400MB/s and higher. The source aggregate is 192x600GB SAS on a filer with 2TB of FlashCache, and it sometimes sees periods of 30K IOPS. I guess the point is that the ETA has been anywhere from just about now all the way out to this coming Sunday. According to these numbers, though, there's a good chance the move will complete before tomorrow evening. If it doesn't complete on time, I guess we'll find out what effect, if any, SFO has (unless I can easily reschedule this HW maintenance).
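As a sanity check (my own arithmetic, not ONTAP's), the reported Estimated Remaining Duration is just Bytes Remaining divided by the current Replication Throughput, assuming binary units (1TB = 1024*1024 MB):

```python
# Plugging in the figures from the `volume move show -instance` output above.
bytes_remaining_mb = 9.44 * 1024 * 1024   # 9.44TB remaining, in binary MB
throughput_mb_s = 233.9                   # reported Replication Throughput

seconds = int(bytes_remaining_mb / throughput_mb_s)
hours, rem = divmod(seconds, 3600)
minutes, secs = divmod(rem, 60)
print(f"{hours:02d}:{minutes:02d}:{secs:02d}")  # ~11:45, matching the 11:45:25 above
```

Of course that estimate moves around as much as the throughput does.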
________________________________________ From: Mike Gossett <cmgossett@gmail.com> Sent: Wednesday, August 8, 2018 13:59 To: Douglas Siggins Cc: Ian Ehrenwald; Toasters Subject: Re: volume move and SFO
How long did it take to get to your 60% point that you reference? Some quick math says 30TB at 400MB/s should complete in about 22 hours. If you're 60 percent done, and you have a day and a half, then it should complete before tomorrow night, right?
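Spelling out that quick math (binary units assumed on my part):

```python
# 30TB moved at a sustained 400MB/s.
total_mb = 30 * 1024 * 1024   # 30TB in binary MB
rate_mb_s = 400
hours = total_mb / rate_mb_s / 3600
print(round(hours, 1))  # about 22 hours
```

The catch, per Ian's numbers, is that the throughput is anything but sustained.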
_______________________________________________ Toasters mailing list Toasters@teaparty.net http://www.teaparty.net/mailman/listinfo/toasters