Hi. Maybe someone can help with a question. I would have submitted it to NetApp tech support, but I was unable to describe the question in 250 characters or less. I apologize for the length.
We have three systems:
o our production 270 (ONTAP 7.2.7), called netapp1
o a 2040, a replacement for the 270 (ONTAP 7.3.4), called netapp1-new
o a 2040, used as a backup system, called netapp2
Currently, all of the production volumes (flexible volumes, not qtrees) on the 270 (netapp1) are being snapmirrored to both netapp1-new and netapp2, though it is not a cascade: netapp1 is, separately, the source for each of the two 2040s. Running "snapmirror destinations" on netapp1 shows me the separate destinations.
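For reference, a fan-out like this is normally driven by /etc/snapmirror.conf on each destination, one line per relationship. The volume name and schedules below are illustrative, not taken from the poster's systems:

```
# /etc/snapmirror.conf on netapp1-new (fields: source:vol  dest:vol  args  min hour dom dow)
netapp1:vol1  netapp1-new:vol1  -  0 * * *

# /etc/snapmirror.conf on netapp2
netapp1:vol1  netapp2:vol1  -  30 * * *
```

Because each destination holds its own conf entry pointing back at netapp1, the two mirrors are independent relationships rather than a cascade.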
In the near future I will replace the 270 with the 2040 called netapp1-new. I will:
o break the snapmirror relationships between the volumes on netapp1 and the corresponding snapmirrored volumes on netapp1-new, making those netapp1-new volumes read/write flexible volumes,
o shut down netapp1, the 270,
o run setup on netapp1-new, giving it the EXACT same configuration (IP, hostname, etc.) as the now-shutdown 270, and
o reboot the netapp1-new box, so that it is now called netapp1 and serves the appropriate volumes.
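At the 7-mode CLI, the break-and-rename steps above would look roughly like this; vol1 is a placeholder volume name, and the commands are a sketch rather than a tested runbook:

```
netapp1-new> snapmirror quiesce vol1   # let any in-flight transfer finish
netapp1-new> snapmirror break vol1     # vol1 becomes read/write
netapp1-new> setup                     # re-enter netapp1's hostname, IPs, etc.
netapp1-new> reboot                    # comes back up answering as netapp1
```

Quiescing before breaking ensures the destination volume is in a consistent, fully transferred state at the moment it goes read/write.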
At that point I'll have swapped out the 270 and swapped in the 2040, but the 2040 will look, for all intents and purposes, like it was the 270: same hostname, same IPs. Exports, hosts.equiv, and netgroup will all be taken care of.
Here comes the issue. As I said, the netapp1 volumes on the 270 were also being snapmirrored to netapp2, the second of the 2040s. Now that netapp1, the snapmirror source, is a 2040, can I continue to run snapmirror update from netapp2 on the netapp1-based volumes, or is there no relationship anymore, forcing me to run snapmirror initialize from netapp2 for all of the netapp1 volumes?
Worst-case scenario (not a deal breaker): it will take one or two days to re-initialize, versus the two hours the updates take.
Anyway, if someone could shed some light or point me to some info, that would be great. I didn't find anything in the SnapMirror Overview and Best Practices Guide.
Michael Homa
Enterprise Systems and Development Group
Academic Computing and Communication Center
University of Illinois at Chicago
email: mhoma@uic.edu
Hello Michael
Volume SnapMirror relationships are based on common snapshots. This means that if you dispense with netapp1-old, netapp1-new still has all of its snapshots, including the common base snapshot for the replication with netapp2, so you will be able to resync the relationship.
I would wait until the resync is done before releasing the old, now-redundant snapshot left over from the replication with netapp1-old.
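In practice, once the new box is answering as netapp1, the resync is run from the destination side, netapp2. A sketch with a placeholder volume name:

```
netapp2> snapmirror resync -S netapp1:vol1 netapp2:vol1
netapp2> snapmirror status vol1      # wait for "snapmirrored, idle"
netapp2> snapmirror update vol1      # incremental updates work again
```

Resync finds the newest common snapshot between the two volumes and rolls the destination back to it, so only the changes since that snapshot need to transfer, not a full baseline.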
If you are using qtree SnapMirror it is a little more complicated: you will want to create a common snapshot and share it among the three systems so that you can resync against it.
As a NetApp reseller we do this procedure quite often to carry out migrations, or to pick up baselines and bring them to the data centre.
Kind regards
Kenneth
_______________________________________________
Check out "snapmirror resync"; you should be able to get it to work. You just need a common snapshot on the broken-off source and destination volumes. After the resync, snapmirror update will work again. Also check out "snapmirror destinations" and "snapmirror release" to get rid of extraneous snapmirror snapshots, but do this AFTER you resync: don't delete any snapshot that resync may need.
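The cleanup described above, done only after the resync to netapp2 has succeeded, might look like this on the new source (vol1 is a placeholder; the stale destination shown is an assumption about what "snapmirror destinations" would still list):

```
netapp1> snapmirror destinations vol1              # list destinations still registered
netapp1> snapmirror release vol1 netapp1-new:vol1  # drop the now-defunct relationship
netapp1> snap list vol1                            # confirm the stale base snapshot is gone
```

Release deletes the base snapshot the source was holding for that destination, which is why it must wait until no remaining relationship depends on it.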
Toasters mailing list
Toasters@teaparty.net
http://www.teaparty.net/mailman/listinfo/toasters
Steve Losen scl@virginia.edu phone: 434-924-0640
University of Virginia ITC Unix Support