I had something like this a while back.
I ended up just creating a new 1GB volume and then converting it to the root vol (volume make-vsroot).
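For reference, a rough sketch of that promotion - the SVM, volume, and aggregate names here (svm1, newroot, aggr1) are placeholders, and "volume make-vsroot" needs advanced privilege:

    ::> set -privilege advanced
    ::*> volume create -vserver svm1 -volume newroot -aggregate aggr1 -size 1GB -state online
    ::*> volume make-vsroot -vserver svm1 -volume newroot

After that the SVM treats newroot as its root volume; the old root (if it ever comes back) is just an ordinary volume.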
--tmac
Tim McCarthy, Principal Consultant
Proud Member of the #NetAppATeam https://twitter.com/NetAppATeam
I Blog at TMACsRack https://tmacsrack.wordpress.com/
On Sun, May 16, 2021 at 8:05 PM John Stoffel john@stoffel.org wrote:
The two nodes came up ok; the problem is the SVMs with a root vol on the missing pair of nodes. Escalating with NetApp.
Not sure we have the time to wait for elevator repair to bring the missing pair to the new location, ten hours by truck. Been a hellish weekend.
I wonder if load-sharing mirrors of the rootvols would have saved us some pain? Maybe not, since updates wouldn't be possible.
Sent from my iPhone
On May 16, 2021, at 2:18 AM, andrei.borzenkov@fujitsu.com wrote:
You can only bring up two nodes if one of them has epsilon. Otherwise no configuration changes are possible, and that includes breaking snapmirrors.
If those two nodes do have epsilon, it is just the normal procedure to fail over LIFs. After a snapmirror break, destination volumes are not renamed
- they remain exactly as they are. Nothing special needs to be done when
the two remaining nodes arrive - they just join the cluster normally.
If these two nodes do not have epsilon, your best bet is to wait for the remaining cluster nodes to arrive. I am not aware of a way to force epsilon in this case. Maybe it exists, but you would certainly need a support case to obtain it, and for any follow-up steps.
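To see which node holds epsilon, something along these lines should work from advanced privilege (field availability can vary a bit by ONTAP release, so treat this as a sketch):

    ::> set -privilege advanced
    ::*> cluster show -fields eligibility, health, epsilon

If neither surviving node shows epsilon true, the two-node rump cluster is out of quorum for config changes, which is the situation described above.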
*From:* Toasters toasters-bounces@teaparty.net on behalf of John Stoffel john@stoffel.org *Sent:* Sunday, May 16, 2021 04:54 *To:* Toasters@teaparty.net *Subject:* 4 node cluster - only two nodes coming up
Hi all,
We're in the middle of hell, where our 4-node FAS8060 cluster was shut down cleanly for a move, but only one pair of nodes made it onto the truck to the new DC. Luckily I have all the volumes snapmirrored between the two pairs of nodes and their aggregates.
But now I need to bring up the pair that made the trip, figure out which mirrors are source and which are destination on this pair, and then break the destination ones so I can promote them to read-write.
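Sorting out which relationships are source and which are destination on the surviving pair should just be a snapmirror show - a sketch, since the exact fields you care about may differ:

    ::> snapmirror show -fields source-path, destination-path, state, status

Relationships whose destination-path lands on the surviving pair are the ones that can be broken and promoted; the ones whose source lived on the missing pair will just sit idle until those nodes return.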
This is not something I've practiced, and I wonder: if I have volume foo, mounted on /foo, and its snapmirror is volume foo_sm, when I do the break, will it automatically mount to /foo? I guess I'll find out later tonight, and I can always unmount and remount.
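As far as I know the break itself doesn't touch junction paths, so the remount is a manual step. A hedged sketch, with svm1, foo, and foo_sm standing in for the real names:

    ::> snapmirror quiesce -destination-path svm1:foo_sm
    ::> snapmirror break -destination-path svm1:foo_sm
    ::> volume unmount -vserver svm1 -volume foo
    ::> volume mount -vserver svm1 -volume foo_sm -junction-path /foo

The unmount of the original foo only applies if it is actually present and junctioned on the surviving pair; otherwise just mount foo_sm at /foo.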
I think this is all good with just a simple 'snapmirror break ...', but then, when we get the chance to rejoin the other two nodes into the cluster down the line, I would assume I just have to (maybe) wipe the old nodes and rejoin them one at a time. Mostly because by that point I can't have the original source volumes come up and cause us to lose all the writes that have happened on the now-writable destination volumes.
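If it does come to wiping and rejoining, the usual shape is roughly the following - very much a sketch, since the exact procedure depends on ONTAP version and is worth confirming with NetApp before destroying anything:

    LOADER> boot_ontap menu        (interrupt boot to reach the boot menu)
    ... choose option (4), "Clean configuration and initialize all disks"
    ::> cluster setup              (on first boot, choose "join" to add the node back)

Wiping first guarantees the stale source aggregates and volumes never come online with their old identities.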
And of course there's the matter of getting epsilon back up and working on the two node cluster when I reboot it. Along with all the LIFs, etc. Not going to be a fun time. Not at all...
And of course we're out of support with NetApp. Sigh...
And who knows whether the pair that came down will lose some disks and end up losing one or more aggregates as well. Stressful times for sure.
So I'm just venting here, but any suggestions or tricks would be helpful.
And of course I'm not sure if the cluster switches made it down here yet.
Never put your DC on the second floor if there isn't a second freight elevator. Or elevator in general. Sigh...
John
_______________________________________________
Toasters mailing list
Toasters@teaparty.net
https://www.teaparty.net/mailman/listinfo/toasters