We have a FAS3250 that's primarily backup storage. It hosts a lot of snapmirror targets, and a few mounted filesystems with backup data.
As the hardware is getting old, we've purchased a shiny new FAS8200 with a couple of 8TB drive shelves.
We assume that the FAS8200 can be initialized on 9.2 and above using the ADP on external drives feature, to partition those 8T disks, so we don't have to throw away about 50T of storage just on the root aggregates.
However... the FAS3250 hardware can only run ontap 9.1. Newer versions of ontap are not available on that hardware. And to join the existing cluster, the 8200 nodes have to run the same software version (9.1), so... they cannot be initialized with ADP.
And you can't initialize first and partition later, because repartitioning wipes all existing data.
That's our dilemma.
I've actually tried to "manually" partition the disks: going into maintenance mode and using "disk partition" to force the 8200 nodes to see only partitioned disks, then re-initializing them via boot menu option "4 - clean config and reinitialize disks". That fails: 9.1 doesn't want to write a root FS to partitioned disks (it does wipe them, though). Besides, it would likely be unsupported anyway. I did learn some interesting things about the "disk partition" command along the way: the "-b" blocksize is in 4k blocks, and the '-i' option that numbers partitions starts at 1, which is the data partition; number 2 is the root partition, at least in the 2-partition root/data setup.
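To make the "-b" unit concrete: it counts 4k blocks, so a 150 GiB root partition would be -b 39321600 (150 * 1024^3 / 4096). What I was typing in maintenance mode looked roughly like this (disk name and sizes made up for illustration, any flags beyond -b and -i omitted/from memory, and the whole exercise is unsupported anyway):

  *> disk partition -i 2 -b 39321600 0a.10.0      (partition 2 = root, ~150 GiB)
  *> disk partition -i 1 -b 1913409536 0a.10.0    (partition 1 = data, the rest of an 8T disk)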
So we either throw away a large chunk of storage on root aggregates: initialize the new nodes on 9.1, join them to the cluster, and move the existing data using 'vol move' and all the cDOT goodness that comes with it.
... or build the new 8200 as a separate cluster, initialize it with 9.3, partition the disks via the boot menu, and then move the data over using snapmirror and remount the clients. That's doable because most of the data is snapmirror targets anyway, and there's only a limited number of mounted filesystems that would need a remount. It's a shame that cDOT doesn't have "snapmirror migrate" like 7mode did.
Does anyone have any other options? All I could think of is getting swing gear and basically doing the migration twice: first to hardware that supports ontap >= 9.2, then to our new 8200. But I'm not willing to spend a lot of money renting swing gear and doing a lot of extra setup work basically because of a flaw in netapp software.
Thanks,
here is an idea...not necessarily supported, but an idea:
1. Install 9.2P1 on your FAS8200 controllers.
2. Initialize them with Root-Data Partitioning.
Here is the not-necessarily-supported part:
3. If you do not have a CN1610 or supported stand-alone cluster switches, temporarily utilize a 10-gig switch and convert the 3250 from switchless to switched.
4. I think you might be able to add the FAS8200s into the cluster; they will operate like 9.1 controllers.
5. If they get moved in, you can then vol move the snapmirror volumes in.
6. After they are all moved, remove the FAS3250 nodes from the cluster.
7. Convert the switched cluster back to a switchless cluster.
again: This process is probably NOT SUPPORTED, but if you are in a pinch, it may work. It might be worth opening a case to see if you can temporarily add 9.2 to a 9.1 cluster.
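If you try step 3: once the cluster ports are recabled to a switch, the switchless-to-switched flip itself is an advanced-privilege option. Something like this (a sketch only, verify against the official conversion procedure first):

  cluster1::> set -privilege advanced
  cluster1::*> network options switchless-cluster show
  cluster1::*> network options switchless-cluster modify -enabled false

Setting -enabled true again is the step 7 counterpart, after recabling back-to-back.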
--tmac
Tim McCarthy, Principal Consultant
Proud Member of the #NetAppATeam: https://twitter.com/NetAppATeam
I Blog at TMACsRack: https://tmacsrack.wordpress.com/
Another idea (if you have the space)
Setup the 8200 to match the 3250 (burning the 6 disks for the root aggregates).
Don’t use the rest of that shelf.
Join the cluster, move everything to the 8200.
Unjoin the 3250.
Upgrade the 8200 to 9.2 or 9.3.
Take one of the shelves from the 3250 and add it to the 8200.
Move the root aggregates/volumes to the shelf from the 3250.
Manually do the ADP on the first shelf and move the root volumes back.**
Remove the 3250 shelf.
**never done this for root aggregates so this may need some testing/research
m
I just wanted to mention that since ONTAP 9.0, there's the possibility to migrate the root aggregate:
system node migrate-root - Start the root aggregate migration on a node

Availability: This command is available to cluster administrators at the advanced privilege level.

Description: The system node migrate-root command migrates the root aggregate of a node to a different set of disks. You need to specify the node name and the list of disks on which the new root aggregate will be created. The command starts a job that backs up the node configuration, creates a new aggregate, sets it as the new root aggregate, restores the node configuration, and restores the names of the original aggregate and volume. The job might take as long as a few hours depending on the time it takes to zero the disks, reboot the node, and restore the node configuration.
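An example invocation would look something like this (node name and disk list are placeholders, advanced privilege):

  cluster1::> set -privilege advanced
  cluster1::*> system node migrate-root -node node1 -disklist 1.11.0,1.11.1,1.11.2,1.11.3,1.11.4 -raid-type raid_dp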
Keep us updated...
Good luck
On 27-11-17 19:20, Weber, Mark A wrote:
manually do the ADP on the first shelf and move the root volumes back**
I'm not sure that's easy to do. I could only find ADP via the boot menu, and that wipes all data.
(Fortunately, joining newer version nodes in the cluster worked, see other post).
Hi,
It is possible to temporarily have a cluster with mixed versions such as 9.1 with 9.2. You should check with Support about your specific combination, and the capabilities you need to complete the transition (like moving the SnapMirror targets).
This is from the 9.1 Upgrade Express Guide:
Mixed version requirements

ONTAP clusters can operate for a limited time in a mixed version state, in which HA pairs in a cluster are running ONTAP versions from different releases. However, the upgrade is not complete until all HA pairs are running the new target release. When the cluster is in a mixed version state, you should not enter any commands that alter the cluster operation or configuration except as necessary to satisfy the upgrade requirements. You should complete the upgrade as quickly as possible; do not allow the cluster to remain in a mixed version state longer than necessary. An HA pair must not run an ONTAP version from a release that is different from other HA pairs in the cluster for more than seven days.
Beginning with 9.3 this is not supported by default, but there is an advanced privilege command to allow mixed versions.
Regards,
---Karl
Off-topic here, but if it's not supported to run clusters in a mixed version state on 9.3, how would you update a 4-node cluster? Will the shiny `cluster image update` wizard automagically take care of the secret flags, or will it update one node per HA pair simultaneously?
Best,
Alexander Griesser Head of Systems Operations
ANEXIA Internetdienstleistungs GmbH
E-Mail: AGriesser@anexia-it.com | Web: http://www.anexia-it.com
Address (Klagenfurt headquarters): Feldkirchnerstraße 140, 9020 Klagenfurt | Managing Director: Alexander Windbichler | Company register: FN 289918a | Court of jurisdiction: Klagenfurt | VAT ID: AT U63216601
Still had the 9.3 document open (it’s just not supported by default):
You might also need to enter a mixed version state for a technical refresh or an interrupted upgrade. In such cases you can override the ONTAP 9.3 default behavior and join nodes of a different version using the following advanced privilege commands:
* cluster join -allow-mixed-version-join
* cluster add-node -allow-mixed-version-join
And I think the automated non-disruptive upgrade handles this with no problem.
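i.e. something like the standard sequence (package URL is a placeholder):

  cluster1::> cluster image package get -url http://webserver/93_image.tgz
  cluster1::> cluster image validate -version 9.3
  cluster1::> cluster image update -version 9.3
  cluster1::> cluster image show-update-progress

As far as I know it takes the cluster through one HA pair at a time, so the mixed version state only exists for the duration of the update.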
---Karl
It _is_ supported to _run_ a cluster in a mixed version state for a limited time (NDO upgrade). No secret flags necessary.
It's a different thing to allow "new" nodes to _join_ the cluster and bring it into a mixed state...
my 2c
Following up on my own question to recap. TL;DR: it worked! We added 9.2 nodes to the 9.1 cluster, moved everything, then kicked off the 9.1 nodes.
On 27-11-17 16:46, Jan-Pieter Cornet wrote:
We have a FAS3250 that's primarily backup storage.[...] However... the FAS3250 hardware can only run ontap 9.1. Newer versions of ontap are not available on that hardware. And to join the existing cluster, the 8200 nodes have to run the same software version (9.1), so... they cannot be initialized with ADP.
And you can't initialize first and partition later, because repartitioning wipes all existing data.
Back to our starting position... we started out with a 2-node 3250 cluster running 9.1, and just added two 8200 nodes also running 9.1(P7).
That configuration wasted 6 disks for root aggrs, which on 8T disks is a lot.
The 4 nodes were already connected using the proper switches (which we hired from netapp specifically for the upgrade).
After a few failed attempts at getting the 9.1 nodes to partition disks manually (which is possible in maintenance mode) and then installing an OS on them (which seems impossible on 9.1), we started by kicking the 8200s out of the cluster and (without disconnecting the switches! :) creating a new 2-node cluster from them.
The machines are physically located at a remote site, and we didn't want to drive up there too often just to fiddle with cluster interconnects, so we didn't. Apparently having 2 clusters share cluster switches works (but is likely unsupported).
We upgraded the 8200 nodes to 9.2P1, and then initialised them again using option "9" on the boot menu, creating root partitions. That part was relatively easy.
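For anyone repeating this, the relevant 9.2 boot menu entries looked roughly like this (wording from memory):

  (9)  Configure Advanced Drive Partitioning.
       9a. Unpartition all disks and remove their ownership information.
       9b. Clean configuration and initialize node with partitioned disks.

9b is the one that initializes with partitioned disks and creates the partitioned root aggregate.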
Next, we tore down that cluster again and joined the 8200 nodes, now running 9.2P1, to the 3250 cluster. From then on, whenever you log in, you get a notice saying:
Warning: The cluster is in a mixed version state. Update all of the nodes to the same version as soon as possible.
Or in other words: here be dragons. And we did find some.
For starters, the first thing we had to do on the 8200s was create the data aggrs.
That only worked on one of the nodes. The other node failed with a timeout, leaving the cluster shell without an aggregate, while the aggregate was eventually created but only visible in the node shell, via 7-mode commands like "node run -node NODENAME aggr status". The timeout was likely caused by the fact that several additional drives needed to be partitioned to create the aggr (which is very neat - you really only lose the minimum possible space with that setup).
Support was pouting a bit at that configuration and didn't come up with a solution, so we fixed it ourselves: we wiped the faulty aggr by first taking it offline in 7-mode (node run -node NODENAME aggr offline FAULTYAGGR) and then running "aggr remove-stale-record" in diag mode. There's unfortunately no way to import an aggr in cDOT, not even an empty one (that I know of).
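Spelled out, the cleanup was roughly this (aggr/node names are placeholders, and the exact parameters of the diag-level command are from memory, so double-check before running):

  cluster1::> node run -node node-8200-1 aggr offline aggr_faulty
  cluster1::> set -privilege diag
  cluster1::*> storage aggregate remove-stale-record -aggregate aggr_faulty -nodename node-8200-1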
Then we simply tried again to create the aggr, but this time connected to the console of the node where the aggr had to live (one of the 8200s). That node is in fact running the new version (9.2P1), even though the cluster-level "version" still shows 9.1. This time, it worked.
Then we started "vol move". That went without much of an incident, except that it took quite a while (about a week). We made sure to only run one "vol move" per aggr in parallel from the 3250 nodes, as not to overload it. The moves went faster as more volumes migrated to the 8200s.
Halfway through, we noticed one of the new ethernet cables to the 8200 was faulty, resulting in a lot of CRC errors on the link and unreliable/slow network connections. That caused some extra lag in snapmirror, but was fortunately easy to fix by swapping the cable. However, one of the snapmirror relations now complains about "CSM: Operation referred to a non-existent session.", and in experimenting we again noticed that it matters which node you issue commands on. Things seemed to work better (or at least differently) when connected to a node running the new version instead of one running the old version (what "run local version" outputs is what matters).
We migrated all LIFs to the new nodes and proceeded to remove the 3250s from the cluster. That again resulted in an error when attempted while connected to a 3250 (probably again due to the underlying version of the node).
Connected to an 8200 node, we were able to remove the first 3250, but the second failed with "Cannot unjoin node X because it is the last node in the cluster with an earlier version of Data ONTAP than the rest of the cluster. Upgrade the node and then try to unjoin it again." Fortunately, there is a diag-mode option, "cluster unjoin -skip-last-low-version-node-check", and that worked. Immediately, "version" on the cluster shell reported the new version.
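For the record, that final unjoin was roughly (node name is a placeholder; diag privilege, so be careful):

  cluster1::> set -privilege diag
  cluster1::*> cluster unjoin -node fas3250-02 -skip-last-low-version-node-check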
The cluster now consists of just the two 8200s, with partitioned disks for the root aggrs, and all of the data was moved without any downtime using "vol move". The old nodes are being wiped.
Thanks a lot for the helpful replies! A special tip of the hat to tmac, who very quickly pointed us in the right direction. That really helped a lot!
Thanks for sharing this info! This will hopefully help someone else down the line.