Guys,
Due to stupidity on my part, I managed to accidentally delete a snapshot on the source side of a snapmirror pair, so that snapmirror can't continue. It's not critical; it's just our tools area, which isn't updated frequently.
So now I want to re-establish the snapmirror, but I want to limit the downtime for clients on the destination side as much as possible when I switch them over to the newly mirrored volume.
Currently, my clients mount east:/vol/tools, so what I'd like to do is:
west> snapmirror store tools rst0a,rst1a
east> vol create tools_new ....
east> vol restrict tools_new
east> snapmirror retrieve tools_new rst0a,rst1a
Now for the tricky part:
east> nfs off
east> vol rename tools tools_old
east> vol rename tools_new tools
east> nfs on
east> exportfs -a -v
so that all my clients see (on a read-only volume) is a brief outage or pause in NFS traffic, and then they just carry on using tools and such as usual.
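Afterwards I figure I can sanity-check the swap with something like the following; the client mount point below is just a stand-in for wherever the clients actually have tools mounted:

east> exportfs
    (make sure /vol/tools shows up in the export list again)
client$ showmount -e east
    (from a client, confirm the export is still advertised)
client$ ls /mount/point/for/tools
    (see whether existing mounts keep answering or go stale)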
Or will I be forced to reboot all of my client machines one by one? That would be truly, truly painful...
The general snapmirror update and cleanup of the old volume is trivial. It's limiting the outage to my clients that I worry about.
Thanks,
John

John Stoffel - Senior Staff Systems Administrator - System LSI Group
Toshiba America Electronic Components, Inc. - http://www.toshiba.com/taec
john.stoffel@taec.toshiba.com - 508-486-1087
I think you can resync if there are any common snapshots. Did you call NetApp?
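If a common snapshot does still exist, my understanding is the resync gets driven from the destination side, roughly like this (filer and volume names taken from your mail, and I'm going from memory on the syntax):

west> snap list tools
east> snap list tools
    (look for a snapshot name that appears in both lists)
east> snapmirror resync -S west:tools tools

As I recall, resync rolls the destination back to the newest common snapshot, which should be harmless here since east's copy is the read-only mirror anyway.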
The quickie rename won't work unless you hack the fsid of the new volume to match the fsid of the old volume. Even then, it is pretty high risk.
If it isn't updated often, and you have space, just do a new snapmirror, and change the mount point on the clients over time.
John
On Mon, Dec 07, 2009 at 05:50:20PM -0500, John Stoffel wrote:
Hi,
> so that all my clients see (on a read-only volume) is a quick outage
> or pause in NFS traffic, then they just continue on using tools and
> such as usual.
> Or will I be forced to reboot all of my client machines one by one?
> Which will be truly truly truly painful...
The trick is to keep the fsid the same, and I doubt the steps you outline will do that. Is it possible to tell a filer to use a particular fsid for a volume? (I guess snapmirror does not change inodes, which is also needed here.)
Grtnx,
VSM (and therefore SM2T) will preserve inodes, but it will be up to you to preserve the FSID of the original volume on east. If you go into priv set advanced, you will find 'vol read_fsid' and 'vol rewrite_fsid', which can accomplish this. Be aware that rewriting the fsid requires the volume to be restricted, and you will need to rewrite the original one first to something totally different. FSIDs must be unique not only within a controller but within an HA pair if you want failover to work properly.
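From memory, the dance on east would look roughly like the following, using the volume names after your proposed rename. I'm not certain of the exact rewrite_fsid arguments (the '...' are placeholders), so check the advanced-mode usage before trusting this:

east> priv set advanced
east*> vol read_fsid tools_old
    (tools_old is the original volume, so this is the fsid the clients know)
east*> vol restrict tools_old
east*> vol rewrite_fsid tools_old ...
    (move the old copy to some throwaway fsid first)
east*> vol restrict tools
east*> vol rewrite_fsid tools ...
    (then stamp the new copy with the original fsid recorded above)
east*> vol online tools
east*> priv set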
While these commands are perfectly safe when used properly, they are guns, so point them away from you (i.e. know what you're doing before you use them, or you could have a disruption you don't want).
--
Adam Fox
Systems Engineer
adamfox@netapp.com
It might be a good idea to test your procedure within a pair of snapmirrored simulator instances before you do this with live data. That saved my behind more than once :)
Richard Barlow
Senior Systems Engineer, Virtualization Specialist
NetApp
804.929.2500
www.netapp.com
"Fox," == Fox, Adam Adam.Fox@netapp.com writes:
Fox> VSM (and therefore SM2T) will preserve inodes, but it will be up
Fox> to you to preserve the FSID of the original volume on east. If
Fox> you go into priv set advanced, you will find 'vol read_fsid' and
Fox> 'vol rewrite_fsid' which can accomplish this. Be aware that
Fox> rewriting the fsid requires the volume to be restricted and you
Fox> will need to rewrite the original one first to something totally
Fox> different. FSIDs must be unique not only within a controller but
Fox> within an HA-pair if you want failover to work properly.
Ouch! Sounds funky and possibly troublesome.
While I don't have any common snapshots between west> and east> filers, I do have some common snapshots on each filer for other snapmirror relationships, since west> is also snapmirroring tools to two other sites.
Would it be possible to do the following:
1) quiesce all snapmirrors of tools from west> to south> and north> - will this leave me with two snapshots for the west->north pair, or will snapmirror automatically clean up when done?
2) comment out tools entries on all filers: west, south, north & east.
3) on west> (my source), do 'snap rename north(####)_tools.1234 \ east(######)_tools.1234'
4) on east> snapmirror resync tools (rough syntax for steps 3-5 is sketched after this list)
- wait until completed
5) on west> snap rename east(######)_tools.1234 \ north(######)_tools.1234
6) do a snapmirror update of tools on east, north, south to make sure they can all update again.
7) re-enable snapmirror.conf entries for tools.
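To spell out my shorthand for steps 3-5, I think the commands would look roughly like this - the snapshot names with (####) are just placeholders for the real softlock snapshot names, and I'm going from memory on the snap rename syntax (snap rename <vol> <oldname> <newname>):

west> snap list tools
west> snap rename tools north(####)_tools.1234 east(######)_tools.1234
east> snapmirror resync -S west:tools tools
    (wait for the resync to complete)
west> snap rename tools east(######)_tools.1234 north(######)_tools.1234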
Or do I need to interrupt the west->north snapmirror in progress and make sure I only rename the more recent snapshot, so that I still have a valid snapshot for west->north plus a second one to repurpose as the base for my west->east pair?
Any hope?
Thanks, John
You might want to look at
Solution ID: kb40272 Last updated: 30 SEP 2009
Sample procedure using 'snapmirror migrate' that avoids remounting in an NFS environment
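If memory serves, the basic form is

filer> snapmirror migrate [srcfiler:]srcvol [dstfiler:]dstvol

run from the source side of an existing mirror; as I recall it does a final transfer, makes the destination writable, and moves the NFS filehandles over so clients don't have to remount. The volume names above are just placeholders - the KB walks through the exact steps and caveats.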
Joel