Greetings,
I've got a situation where a backup relationship does a nightly backup and a weekly backup. Both go to the same destination. The snapshots are named with the _nightly and _weekly tags. This is file system data and not LUNs. The problem is not fractional reserve :-)
I'm assuming the snapshots are still based on one another and that each doesn't have its own baseline. Is this correct?
The problem is that someone at a remote site appears to have done a drag and drop which generated ~2 TB of "new" data. The remote site has a very limited WAN. The current backup session will run for ~2 months before it completes. If I have to do a new baseline, it will also take about that long.
Since the snapshots got out of hand, the file system grew beyond its designated size, the volume guarantee has been disabled, and it actually filled up the containing aggregate.
I'm hoping to be able to delete one of the SV (SnapVault) snapshots to free space and get things in line space-wise. Unfortunately I'm expecting I'll still have to let a 2-month backup run one way or another. If I'll end up with a new baseline anyway, I may as well nuke everything now and start it. At least I'll be able to delete some snapshots and free space.
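For what it's worth, this is roughly how I've been checking the space picture; the volume and aggregate names below are placeholders, not the real ones:

  filer> df -h /vol/src_vol      # volume usage, including what is sitting in .snapshot
  filer> df -A -h src_aggr       # the containing aggregate, which is what filled up
  filer> snap list src_vol       # per-snapshot usage; snapshots SnapVault still needs show up as busy,snapvault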
Thanks,
Jeff
I realized I wasn't being very clear in the early hours. My apologies. Here is what I'm really wondering.
If there is a weekly and nightly schedule, and I delete the weekly snapshots on the source, what will happen the next time the weekly schedule runs? Will it create a new snapshot based on the last nightly, or will it do a new full baseline since the previous weekly snapshot has been deleted?
Thanks,
Jeff
Hi Jeff,
It should create a new snapshot based on the last nightly. Snapshots are linear, unless you start doing things with FlexClones, which doesn't seem to be the case here.
If you run snap list on your source, you will see some snaps tagged as snapvault. These are the ones your filer is using as the baseline, and they will need a resync if deleted.
There is probably only one - the most recent. The others can go without affecting SV.
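If it helps, something along these lines should show it on both ends (filer and volume names below are made up):

  source> snap list src_vol
          # the snapshot flagged busy,snapvault is the one SV is currently holding as its base
  secondary> snapvault status -l
          # long listing; it should report the base snapshot for each qtree relationship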
Here is a list of the SV snapshots in the source file system. I've deleted the local nightly and hourly snapshots. I'm not certain when the new data showed up. I'm hoping that if I can delete the oldest snapvault snapshot I might be able to jump past the massive bump. I've got 2 nightly snapshots showing busy,snapvault, with the currently running transfer using the nightly.3 one.
  %/used     %/total    date          name
----------  ----------  ------------  --------
 0% ( 0%)    0% ( 0%)   Apr 22 23:00  foo_nightly.0 (acs)
16% ( 0%)    6% ( 0%)   Apr 21 23:00  foo_nightly.1
58% (37%)   45% (20%)   Apr 18 04:00  foo_weekly.0 (acs)
60% (12%)   49% ( 4%)   Apr 15 23:01  foo_nightly.2 (busy,snapvault)
63% (16%)   55% ( 6%)   Apr 11 04:00  foo_weekly.1
63% ( 2%)   56% ( 1%)   Mar 23 23:00  foo_nightly.3 (busy,snapvault)
I'm trying to figure out how to delete some of these snapshots, such as the nightlies, and get a weekly to update based on a new weekly.0. I'm guessing none of this will work, but thought I'd see if there was a way :-)
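Before I delete anything I'm planning to run snap delta to get a rough idea of how much an update from each candidate base would have to move (src_vol below is a placeholder for the real volume name, and I'm assuming I'm reading its output correctly):

  filer> snap delta src_vol foo_nightly.3 foo_nightly.0
         # change between the current SV base and the newest snapshot
  filer> snap delta src_vol foo_weekly.1 foo_nightly.0
         # same comparison, but measured from the older weekly instead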
Jeff
On Thu, Apr 23, 2015 at 1:35 AM, basilberntsen@gmail.com wrote:
My experience is with snapmirror, but snapvault is similar. If you delete a snapshot on the source, that deletion will be propagated to the target.
You mentioned something I believe is being misunderstood. The replication software doesn't use existing periodic snapshots, but creates its own. When you do (on the source) a "snap list" command, you will see that one of them says "busy snapmirrored" or something similar. If a user did a massive write, you have no option other than optimizing the WAN and waiting. Alternately, you could delete the data that was written and then do a full baseline.
Base snapshots are per SV relationship, but I believe you should be able to resync using “snapvault start -r” if you remove the base snapshot. It should then pick up the differences since the latest common snapshot.
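On the secondary that would be something like the following, with the real primary path substituted in (paths are placeholders and the syntax is from memory, so check the man page before running it):

  secondary> snapvault start -r -S primary:/vol/src_vol/qtree /vol/dst_vol/qtree
             # -r resynchronizes the existing relationship from the newest common
             # snapshot instead of re-baselining the whole qtree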
My experience with SV/SM is that if you delete the base snapshot you will need to re-baseline the set. Subsequent snaps reference the baseline, so if there is no common baseline snap they become invalid, since they need a reference point to track changes.
It's definitely not the case for SM - you can resync using any common snapshot; it does not need to have been created by SM.
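For SM that would be roughly the following, run on the destination side (filer and volume names are placeholders):

  dst_filer> snapmirror resync -S src_filer:src_vol dst_vol
             # resumes the mirror from a snapshot common to both volumes
             # rather than doing a new baseline transfer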
I just wanted to give a quick update on this. The snapvault that was running finally finished. Another has since taken its place. Once this one manages to complete, I should be good to go for a while. It is the only one showing much in %used.
I ran snap reclaimable on all the snapshots in the volume. It wasn't helpful: the total across all snapshots reported just over 500 GB, while df was showing over 2 TB of used data in .snapshot :-)
The volume guarantee is still disabled because the space used is larger than the size allocated to the volume, but it is closer to fitting in the original size. The aggregate is no longer maxed out and is looking better also.
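For anyone curious, this is roughly what I ran, plus what I expect to do once the volume fits its size again (src_vol is a placeholder and the snapshot names are just examples from the earlier list):

  filer> snap reclaimable src_vol foo_weekly.1 foo_nightly.1
         # space that would come back if exactly these snapshots were deleted together
  filer> vol options src_vol guarantee volume
         # re-enable the guarantee once the volume fits and the aggregate has room for it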
Thanks for the suggestions and help on this.
Jeff
Thanks for circling back.
Do you use the same destination volume for both schedules, or are they stored in different volumes on the secondary?
They both go to the same destination volume. That is why I was hoping to be able to skip ahead a little bit by leveraging one set of snapshots (nightly or weekly), then start it back up if I can get past the trouble point.
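For reference, the secondary-side schedules look roughly like this; the retention counts and day lists below are examples rather than the real config:

  secondary> snapvault snap sched
             # both the nightly and weekly schedules point at the same destination volume
  secondary> snapvault snap sched -x dst_vol foo_nightly 5@mon-fri@23
  secondary> snapvault snap sched -x dst_vol foo_weekly 4@sat@4
             # -x makes the secondary pull an update from the primary before
             # creating the destination snapshot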
Jeff
If you have 2 TB of new files (as implied by drag’n’drop), deleting snapshots won’t really help you recover space. New files are new files; they have to be transferred irrespective of what has been deleted.