Based on a Justin Parisi article that Douglas pointed out earlier in the thread, I'm going with the assumption that the cluster would veto the giveback if a move were in progress.
I actually just got a new FAS2720 HA pair for my lab this week, so once it's set up and loaded with a bunch of junk data and volumes for testing, I will see what happens if we do a move, takeover, and giveback in Real Life (tm). I can't make any promises on an ETA for an answer, but as soon as I do it I'll send an update.
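For reference, the sequence I plan to test looks roughly like this (the cluster, SVM, volume, and aggregate names below are made up for the example; the commands are standard ONTAP 9, but the veto behavior is exactly the part I can't vouch for yet):

LabCluster::> volume move start -vserver testSvm -volume bigvol -destination-aggregate aggr2
LabCluster::> storage failover takeover -ofnode LabCluster-01
LabCluster::> storage failover giveback -ofnode LabCluster-01

If the giveback really is vetoed by the in-flight move, giveback does accept -override-vetoes true, though forcing past a vol move veto sounds like a great way to have a bad evening.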
________________________________________
From: RUIS Henk <henk.ruis@axians.com>
Sent: Friday, August 10, 2018 4:28:30 AM
To: Ian Ehrenwald; Mike Gossett
Cc: Toasters
Subject: RE: volume move and SFO
Hi,
Great, but now we still don't know if it's possible to move a volume during maintenance ;-(
Kind regards,
Henk Ruis
Technical Consultant
-----Original Message-----
From: toasters-bounces@teaparty.net <toasters-bounces@teaparty.net> On Behalf Of Ian Ehrenwald
Sent: Thursday, August 9, 2018 18:13
To: Mike Gossett <cmgossett@gmail.com>
Cc: Toasters <toasters@teaparty.net>
Subject: Re: volume move and SFO
Good afternoon
Update, as promised. The volume move completed early this morning so there will be no conflict with our maintenance tonight. Inline compaction is AWESOME - it has saved us over 16TB on this new SSD aggregate. Crazy stuff.
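For anyone who wants to see the same breakdown on their own aggregates, newer ONTAP releases have an aggregate-level efficiency view (check that your version has it before relying on my memory of the name):

MyCluster1::> storage aggregate show-efficiency -aggregate aggr_ssd_3800g_c1n4

That should show the compaction savings alongside dedupe and compression.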
________________________________________
From: Mike Gossett <
cmgossett@gmail.com>
Sent: Wednesday, August 8, 2018 2:37:35 PM
To: Ian Ehrenwald
Cc: Douglas Siggins; Toasters
Subject: Re: volume move and SFO
Hi Ian,
The good news is that, setting aside what it's estimating, we've seen 21TB copied in 24 hours. Hopefully the 30 hours or so you have left is sufficient for the remaining 9.5TB - thanks for sharing. I'm interested to know the result of the SFO, but if it were me I'd try to push the maintenance back to be safe, or open a ticket with support and see if they can tell you what to expect.
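Quick back-of-the-envelope on that, using decimal units: 21TB in 24 hours is 21,000,000MB / 86,400s, or roughly 240MB/s average. At that rate the remaining 9.5TB is about 9,500,000MB / 240MB/s ≈ 39,000 seconds, call it 11 hours - which lines up nicely with the 11:45:25 Estimated Remaining Duration in your output.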
On Aug 8, 2018, at 1:29 PM, Ian Ehrenwald <Ian.Ehrenwald@hbgusa.com> wrote:
Hi Mike
Somewhat redacted output from a few minutes ago:
MyCluster1::> volume move show -instance
Vserver Name: mySvm
Volume Name: aVeryLargeVolume
Actual Completion Time: -
Bytes Remaining: 9.44TB
Destination Aggregate: aggr_ssd_3800g_c1n4
Detailed Status: Transferring data: 20.96TB sent.
Estimated Time of Completion: Thu Aug 09 01:58:40 2018
Managing Node: Clus1-Node1
Percentage Complete: 68%
Move Phase: replicating
Estimated Remaining Duration: 11:45:25
Replication Throughput: 233.9MB/s
Duration of Move: 23:47:43
Source Aggregate: aggr_sas_600g_c1n1
Start Time of Move: Tue Aug 07 14:25:38 2018
Move State: healthy
Is Source Volume Encrypted: false
Encryption Key ID of Source Volume: -
Is Destination Volume Encrypted: false
Encryption Key ID of Destination Volume: -
MyCluster1::>
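In case anyone wants to watch one of these without the full -instance dump, a compact view along these lines should work (field names from memory - tab completion will confirm the exact ones):

MyCluster1::> volume move show -vserver mySvm -volume aVeryLargeVolume -fields percent-complete,bytes-remaining,estimated-completion-time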
Depending on source filer load, I've seen Replication Throughput anywhere from 25MB/s to 400MB/s and higher. The source aggregate is 192x600GB SAS on a filer with 2TB of FlashCache that sometimes sees periods of 30K IOPS. The point is that the ETA has bounced from just about now all the way out to this coming Sunday. According to these numbers, though, there's a good chance the move will complete before tomorrow evening. If it doesn't complete in time, I guess we'll find out what effect, if any, SFO has (unless I can easily reschedule this HW maintenance).
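If it comes down to the wire, I may also look at 'volume move trigger-cutover', which as I understand it asks the move job to attempt its cutover now rather than waiting. I haven't used it on a move this size though, so I'd read up before betting a maintenance window on it.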
________________________________________
From: Mike Gossett <cmgossett@gmail.com>
Sent: Wednesday, August 8, 2018 13:59
To: Douglas Siggins
Cc: Ian Ehrenwald; Toasters
Subject: Re: volume move and SFO
How long did it take to get to your 60% point that you reference? Some quick math says 30TB at 400MB/s should complete in about 22 hours. If you're 60 percent done, and you have a day and a half, then it should complete before tomorrow night, right?
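(Checking my own math: 30TB at 400MB/s is 30,000,000MB / 400MB/s = 75,000 seconds, just under 21 hours in decimal units - close enough to "about 22" if you figure in binary TB.)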
_______________________________________________
Toasters mailing list
Toasters@teaparty.net
http://www.teaparty.net/mailman/listinfo/toasters