Check to see if your volumes are thin provisioned on the destination side. By default, they would not be thin provisioned.
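(For illustration — filer and volume names below are just placeholders — on 7-Mode the guarantee on a destination volume can be checked with something like:

    dstfiler> vol status -v dst_vol01
    dstfiler> vol options dst_vol01

and then look for guarantee=none vs. guarantee=volume in the options output.)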
Regards, André M. Clark
On August 5, 2022 at 11:03:13, Timothy Naple via Toasters (toasters@teaparty.net) wrote:
My first guess would be that you might somehow be retaining more snapshots on the destination, perhaps at the volume level? Is the snap list output for all the volumes identical on source and destination? Does df -h on both sides show which volumes, if any, have a size discrepancy?
A second guess would be that the space guarantee settings are different on the destination volumes.
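(For illustration, something along these lines on each controller — volume names are placeholders:

    srcfiler> snap list vol_lun01
    srcfiler> df -h vol_lun01
    dstfiler> snap list vol_lun01
    dstfiler> df -h vol_lun01

then compare the number of snapshots and the used/avail columns for each volume pair.)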
------------------------------
From: Toasters <toasters-bounces@teaparty.net> on behalf of Heino Walther <hw@beardmann.dk>
Sent: Friday, August 5, 2022 7:31 AM
To: toasters@teaparty.net
Subject: Space issues on older NetApp...
Hi there
We have two systems that mirror each other's volumes via SnapMirror.
We are talking 7-Mode ONTAP 8.1.4.
The two systems have the same controller: FAS3240
They have the same disks and aggregate configuration (70TB aggregates)
On the source side we use volumes that are thin-provisioned, with LUNs that have space reservation enabled; the LUNs are mostly close to 16 TB (which is the maximum).
All volumes are snapmirrored to volumes of the same size on the destination system, placed on aggregates that mirror the source aggregates in size…
The aggregates on the source are all below 95% used.
Yet… we are now in the situation where a few destination aggregates are 100% full, while the source aggregates are still under 95% used…
I have checked almost everything, like aggregate snapshot reserves etc., but they should be the same…
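(For reference, the numbers I am comparing come from roughly this — aggregate name is a placeholder:

    filer> df -A -h aggr01
    filer> snap reserve -A aggr01
    filer> aggr show_space -h aggr01

run the same way on both controllers.)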
Can anyone explain why this can happen?
We are of course at a “deadlock” now… I don’t think we can add any more disks to the aggregates, as they are already at maximum size…
The only thing I can think of is to either delete a volume from the affected aggregates and re-sync it, hoping it doesn’t fill up again…
Another way would be to add disks and build a new aggregate, and move some of the volumes…
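(Roughly, I imagine the delete-and-resync path would look something like this — filer, aggregate and volume names and the size are placeholders, and -s none would make the new destination volume thin:

    dstfiler> snapmirror quiesce dst_vol01
    dstfiler> snapmirror break dst_vol01
    dstfiler> vol offline dst_vol01
    dstfiler> vol destroy dst_vol01 -f
    dstfiler> vol create dst_vol01 -s none aggr01 16t
    dstfiler> vol restrict dst_vol01
    dstfiler> snapmirror initialize -S srcfiler:src_vol01 dstfiler:dst_vol01

and then let the new baseline transfer run.)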
Is there something I have missed? 😊
/Heino
_______________________________________________ Toasters mailing list Toasters@teaparty.net https://www.teaparty.net/mailman/listinfo/toasters
Well, since they are snapmirrored, they should be the same as the source, shouldn’t they? I cannot change the destination volumes, as they are read-only (because they are SnapMirror destinations)… 😊
/Heino
Not necessarily true. Unless it is an AFF, depending on how the destination volume was created, the guarantee will be set to volume. You can modify this, even on a SnapMirror destination. Take a look. 😉
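(On 7-Mode that should be something along the lines of — volume name is a placeholder:

    dstfiler> vol options dst_vol01 guarantee none

and, per the above, this can be set even while the volume is a read-only SnapMirror destination.)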
_______________________________________________ Toasters mailing list Toasters@teaparty.net https://www.teaparty.net/mailman/listinfo/toasters