This isn't 100% NetApp related, more like 60%, but I think it's still a relevant topic of discussion that others may eventually benefit from too.
I've been tasked with setting up DR for a set of VMware VMs at our primary data center by replicating them via SnapMirror to our secondary data center. Normally I'd just use VMware SRM with the NetApp SRA, set up Array Based Replication within SRM, and be done with it. Let SRM and the SRA take care of test, failover, reprotect, etc.
However, the sticking point is that many of the protected VMs use in-guest NFS mounts for things like database volumes or application binaries, and SRM doesn't know what to do with those SnapMirror volume pairs because they aren't VMware datastores.
My initial thought was to use PowerShell with the NetApp PowerShell Toolkit as a pre-power-on step in each individual VM's SRM Recovery Plan: connect to the SnapMirror destination filer, break the SnapMirror(s) that the VM uses, mount the volume(s), set export policies, etc. Then figure out how to correctly do resync and reprotect without blowing things up.
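For reference, the per-VM pre-power-on step I had in mind is roughly this (a sketch only - the cluster, vserver, and volume names are placeholders, and I've left out the export policy handling and all error checking):

# Sketch only. Cmdlets are from the NetApp PowerShell Toolkit (DataONTAP
# module); names like dr-cluster-mgmt and dr_svm:vm1_db are placeholders.
Import-Module DataONTAP
$dst = Connect-NcController dr-cluster-mgmt.example.com -Credential (Get-Credential)
# Quiesce and break the mirror for this VM's in-guest NFS volume so the
# DR copy becomes read-write
Invoke-NcSnapmirrorQuiesce -Destination "dr_svm:vm1_db"
Invoke-NcSnapmirrorBreak   -Destination "dr_svm:vm1_db" -Confirm:$false
# Mount the now-writable volume into the DR vserver's namespace
# (export policy changes would go here as well)
Mount-NcVol -Name vm1_db -JunctionPath /vm1_db -VserverContext dr_svm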
I got as far as the first part in PowerShell (connect to the destination filer, break the mirror) and thought to myself that this was stupid. I'm re-inventing the wheel. Surely someone else out there must have done this before and there is a pre-existing solution? I spoke with a NetApp tech support engineer I happened to have a case open with already regarding the NetApp SRA, and he passed along a suggestion from another engineer: call WFA from SRM in each VM's recovery plan to break the SnapMirror for the volumes the VM needs, then also use WFA in the cleanup and reprotect plans to do that work.
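If we go the WFA route, I gather the SRM callout would boil down to a REST call along these lines (untested sketch - the WFA host, workflow UUID, and user-input key names are placeholders, and the endpoint shape is my reading of the WFA REST documentation, so treat all of it as an assumption):

# Untested sketch of kicking off a WFA workflow from an SRM callout step.
# Host, workflow UUID, and user-input key names below are placeholders.
$uuid = "00000000-0000-0000-0000-000000000000"   # your workflow's UUID
$uri  = "https://wfa.example.com/rest/workflows/$uuid/jobs"
$body = @"
<workflowInput>
  <userInputValues>
    <userInputEntry key="DestinationVserver" value="dr_svm"/>
    <userInputEntry key="DestinationVolume"  value="vm1_db"/>
  </userInputValues>
</workflowInput>
"@
Invoke-RestMethod -Uri $uri -Method Post -Body $body `
    -ContentType "application/xml" -Credential (Get-Credential)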
Has anyone else done something like this? I have a feeling that I'm not breaking new ground with this task. Implementing it wrong has pretty severe implications, though, so I'm not eager to mess it up.
Ian Ehrenwald Senior Infrastructure Engineer Hachette Book Group, Inc. 1.617.263.1948 / ian.ehrenwald@hbgusa.com
I've been working on this off and on between my regular duties, then delayed by holiday breaks, etc.
Something I haven't been able to figure out is how to programmatically get a SnapMirror relationship's source cluster management hostname (or even just an IP address). I can easily get the source vserver name, the source volume name, and even a pre-constructed combination of the two from what Get-NcSnapmirror returns when run against the destination controller.
In essence, I want to be able to pass this script three arguments: the current SnapMirror destination cluster management address, the current destination vserver, and the current destination volume. Then do some work to extract the rest of the information required to reverse the relationship and resync: the current SnapMirror source cluster address, source vserver, and source volume. I'm 66% of the way there; it's the current SnapMirror source cluster address that I can't seem to pull out of anywhere.
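For concreteness, here's the skeleton of what I have so far (sketch; the SourceVserver/SourceVolume property names are what Get-NcSnapmirror returns for me, everything else is placeholder):

param(
    [string]$DstClusterMgmt,   # e.g. dr-cluster-mgmt.example.com
    [string]$DstVserver,
    [string]$DstVolume
)
Import-Module DataONTAP
$dst = Connect-NcController $DstClusterMgmt -Credential (Get-Credential)
$sm  = Get-NcSnapmirror -Destination "${DstVserver}:${DstVolume}"
$srcVserver = $sm.SourceVserver    # this comes back fine
$srcVolume  = $sm.SourceVolume     # so does this
# ...but nothing on this object gives me the source CLUSTER's management
# address, which is what I'd need for a Connect-NcController on that side.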
I was also playing around with Get-NcVserverPeer against the SnapMirror destination, but I can't seem to get what I want from it.
Any hints from people who have done this before? Thanks so much.
Ian Ehrenwald Senior Infrastructure Engineer Hachette Book Group, Inc. 1.617.263.1948 / ian.ehrenwald@hbgusa.com
I don't think you will be getting that the way you think.
SnapMirror in clustered ONTAP is vastly different from 7-Mode. For starters, you are supposed to have at least one intercluster LIF per node, because when SnapMirror kicks off it will use the intercluster LIF on the node where the volume actually resides.
If you do a "vol move" from node 1 to node 2, then the next time SnapMirror runs it will use the intercluster LIF on node 2.
The same idea goes for the destination IP: it will go to wherever the volume/aggregate resides.
To get the source address, you will need to walk a chain: source volume name -> aggregate name -> aggregate owner (node) -> intercluster LIF on that node. That would be your source address. Again, if the volume changes nodes, then the source address will change!
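In Toolkit terms the chain looks something like this (rough sketch - it assumes you can already connect to the source cluster, and the volume/vserver names are placeholders):

# Rough sketch, run while connected to the SOURCE cluster.
$vol  = Get-NcVol src_vol -VserverContext src_svm
$aggr = Get-NcAggr $vol.Aggregate
$node = $aggr.Nodes[0]                  # aggregate owner
# Intercluster LIF(s) currently homed on that node
Get-NcNetInterface |
    Where-Object { $_.Role -eq "intercluster" -and $_.CurrentNode -eq $node } |
    Select-Object InterfaceName, Address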
--tmac
Tim McCarthy, Principal Consultant
Proud Member of the #NetAppATeam https://twitter.com/NetAppATeam
I Blog at TMACsRack https://tmacsrack.wordpress.com/
Hi Tim, thanks for the speedy reply.
Forgive me, but I don't see source volume name, source aggregate name, source aggregate owner, or source intercluster LIF in the object properties that Get-NcSnapmirror returns. I'm sure I'm misunderstanding where you're going with it. I don't see how I would get that information at all, just given the input parameters of current destination cluster mgmt, current destination vserver, and current destination volume.
The reason I'd like to get the source cluster management hostname out of this is so I can Connect-NcController to it, then run Get-NcSnapmirrorDestination against it to verify that both sides of the SnapMirror relationship agree on the state of things before proceeding. After that, I'd run Remove-NcSnapmirror against the current destination, Invoke-NcSnapmirrorRelease against the current source, New-NcSnapmirror against the current source to turn it into the destination side, and finally Invoke-NcSnapmirrorResync against the current source to actually do the reverse resync.
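In script form, the sequence that works when I hand-feed it is essentially this (sketch; $src/$dst are Connect-NcController sessions, $srcLoc/$dstLoc are "vserver:volume" location strings, and the exact parameter sets may vary by Toolkit version):

# Sanity check: both ends agree the relationship exists
$fromDst = Get-NcSnapmirror            -Destination $dstLoc -Controller $dst
$fromSrc = Get-NcSnapmirrorDestination -Source      $srcLoc -Controller $src
if (-not ($fromDst -and $fromSrc)) { throw "Sides disagree - aborting" }

# Tear down the old relationship: delete on the destination, release on the source
Remove-NcSnapmirror        -Destination $dstLoc -Controller $dst -Confirm:$false
Invoke-NcSnapmirrorRelease -Source $srcLoc -Destination $dstLoc -Controller $src -Confirm:$false

# Re-create it pointing the other way (old source becomes new destination), then resync
New-NcSnapmirror           -Source $dstLoc -Destination $srcLoc -Controller $src -Type dp
Invoke-NcSnapmirrorResync  -Source $dstLoc -Destination $srcLoc -Controller $src -Confirm:$false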
I know all of this works, because when I manually feed my script the information I am trying to programmatically get, it succeeds in both directions, and I can flip flop all day. However, I want to reduce the number of human inputs as much as possible for this critical operation, and the more that can be determined by a machine instead of a stupid human, the better :)
Ian Ehrenwald Senior Infrastructure Engineer Hachette Book Group, Inc. 1.617.263.1948 / ian.ehrenwald@hbgusa.com
I've looked for the same information (going the other way: given the primary, find the cluster management LIF of the secondary). The only thing I've found is:

::*> debug smdb table xc_virtual_interface show
No idea if it is reliable/consistent/etc., and insert the standard warning about using debug commands: I have caused a panic just by doing a "debug smdb table <table> show" while looping through all the tables looking for the remote cluster management LIFs.
ONTAP 9.3 seems to have added some functionality ("snapmirror protect") that may be relevant or may expose this info in some consumable way (I haven't looked into it yet).
m
Wouldn't you already have this information before you start? Or if you start at the primary, then once you find the destination, you would then probe the destination to find out the rest of the information you need?
Sure, reduce human interactions but assume that you will need to probe both systems to find and match up the configuration. And if it doesn't match exactly, then you have to bail and let the admins decide what to do.
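In Toolkit terms I'd imagine the match-up check looks something like this (a sketch only - I don't run SnapMirror on cDOT myself, so the property names and state values are an assumption):

# Sketch of "probe and bail on mismatch"; property names are my assumption.
$rel = Get-NcSnapmirror -Destination "dr_svm:vm1_db" -Controller $dst
if ($rel.MirrorState -ne "snapmirrored" -or $rel.RelationshipStatus -ne "idle") {
    throw "Relationship not in expected state - stop and let a human decide"
}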
"then once you find the destination" This is the part that is in question - Given a cluster and volume how do you find the cluster/SVM management address of the mirror/vault source/destination. You can easily get the cluster and SVM name, but those don't translate to the management addresses. As Tim said, you can get the ICL addresses from the peering relationship but I haven't found anything that will return the management address for a given peer cluster/SVM.
In our case, all our clusters have a uniform relationship between cluster name and management address, so I can work around it - but I couldn't assume that for any other environment.
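Concretely, the workaround is just string munging on the peer cluster name (site-specific sketch - the naming scheme is ours, and $dst/$cred are an existing destination-cluster connection and credential; don't assume any of this elsewhere):

# Works only because our management LIFs follow <clustername>-mgmt.example.com.
# The peer cluster's name comes from the peering config on the destination.
$peer    = Get-NcClusterPeer -Controller $dst
$srcMgmt = "{0}-mgmt.example.com" -f $peer.ClusterName
$src     = Connect-NcController $srcMgmt -Credential $cred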
m
-----Original Message----- From: John Stoffel john@stoffel.org Sent: Wednesday, January 3, 2018 3:53 PM To: Weber, Mark A mark-a-weber@uiowa.edu Cc: Ehrenwald, Ian Ian.Ehrenwald@hbgusa.com; toasters@teaparty.net Subject: RE: VMware SRM + NetApp SRA + NetApp WFA?
This is a limitation of the NetApp cDOT design, in my mind. I also was thinking that you were attacking this from the wrong end, starting at the destination and working backwards, but I can see how that's a valid decision on your part to go that route.
Mark> "then once you find the destination" Mark> This is the part that is in question - Mark> Given a cluster and volume how do you find the cluster/SVM management address of the mirror/vault source/destination.
This is a problem, since it's not explicit, nor is it really implicit in the naming or setup of the relationship. I don't currently use SM relationships on cDOT, so I can't really contribute much to the discussion.
Mark> You can easily get the cluster and SVM name, but those don't Mark> translate to the management addresses. As Tim said, you can get Mark> the ICL addresses from the peering relationship but I haven't Mark> found anything that will return the management address for a Mark> given peer cluster/SVM.
Mark> In our case, all our clusters have a uniform relationship Mark> between cluster name and management address so I can work around Mark> it - but I couldn't assume that for any other environment.
You're right, it's not something you can assume at all. It's a tricky problem to solve. So if you're running multi-tenant, and you're one of the tenants SnapMirroring to another SVM whose admin interface you don't know... you're out of luck. This could be because of lack of documentation, someone walking in front of a bus, etc.
I think filing a bug report or an enhancement request with NetApp on this would be the way to go. The security implications are interesting: if you have vserver admin on both source and destination but don't know the admin interface for one of them, how do you manage it? Should you be able to find that info, or should it be locked down?