Hi there
We have two systems that mirror each other's volumes via SnapMirror. We are talking 7-Mode ONTAP 8.1.4. The two systems have the same controller (FAS3240), and they have the same disks and aggregate configuration (70TB aggregates).
On the source side we use volumes that are thin-provisioned, with LUNs that have space reservation enabled; the LUNs are mostly close to 16TB (which is the max).
All volumes are snapmirrored to volumes of the same size on the destination system, placed on aggregates that mirror the source aggregates in size…
The aggregates on the source are all below 95% used.
Yet… we are now in a situation where a few destination aggregates are 100% full, while the source aggregates are still under 95% used… I have checked almost everything, like aggregate snapshot reserves etc., but they should be the same…
Can anyone explain why this can happen?
We are of course at a “deadlock” now… I don’t think we can add any more disks to the aggregates as they are at max size… The only thing I can think of is to either delete a volume from the affected aggregates, re-sync it, and hope it doesn’t fill up again…
Another way would be to add disks and build a new aggregate, and move some of the volumes…
Is there something I have missed? 😊
/Heino
It has been quite some time since I last had my fingers on a 7-Mode NetApp, but can you compare the output of:
aggr show_space -h
Maybe this gives you an indication of where the missing space is allocated.
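Something along these lines (aggregate name is just an example; run it for each mirrored pair and compare):

source> aggr show_space -h aggr0     # per-volume Allocated/Used plus the WAFL and snap reserve lines
dest>   aggr show_space -h aggr0
source> df -A -h                     # overall aggregate usage on both controllers
dest>   df -A -h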
Best,
Alexander Griesser, Head of Systems Operations
Well yes and no 😊
If I run aggr show_space on the source, I can see that SIS (dedupe) saves us 2-3 TB per aggregate. If I run the same command on the destination, it doesn’t show any dedupe savings… and maybe this is the issue, because we do have a little overprovisioning, and if the saved blocks don’t carry over from the source, we have a problem 😊 It has also been a while since I worked with 7-Mode, but I think I recall that dedupe savings should be carried over by SnapMirror. SnapVault is another matter of course…
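To compare it per volume (vol01 is just an example name), the dedupe state and the reported savings can be checked on both controllers:

source> sis status -l /vol/vol01     # is dedupe enabled, and when did it last run
dest>   sis status -l /vol/vol01
source> df -s vol01                  # used space vs. space saved by dedupe
dest>   df -s vol01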
Very strange indeed…
I think we will have to either delete some volumes to free up space… or maybe add disks to create a new aggregate we can move volumes to… or maybe upgrade the controllers from FAS3240 to FAS3270, which would allow us bigger aggregates 😊
/Heino
Check fractional reserve. That can use up space.
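You can see it per volume, e.g. (vol01 is a placeholder name):

filer> vol options vol01             # look for fractional_reserve=... in the option list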
Wayne
Hi
All volumes are set to 0 in fractional reserve… so that’s not it 😊
/Heino
"Heino" == Heino Walther hw@beardmann.dk writes:
What about the aggregate snapshot reserve on the destination aggregates? I think it defaults to 5% (I could check on my existing 7-mode 8.1.3 system... but I'm on vacation. :-)
That might be enough space to get things rolling again.
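Roughly something like this on the destination (aggregate name is a placeholder):

dest> snap reserve -A aggr0          # show the aggregate snapshot reserve percentage
dest> snap list -A aggr0             # any aggregate-level snapshots still holding space?
dest> snap reserve -A aggr0 0        # drop the reserve to 0 if it isn't already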
Hi John,
Enjoy your vacation 😊 You are right, in that version of ONTAP the default aggregate snap reserve was 5%, but it was already set to 0 at creation 😊
I did find a pair of FAS3270s which I think can take over from the FAS3240s and allow us to add more disks to the aggregates… It’s not a nice solution, but it is a solution where we do not have to move a lot of data around… 😊
/Heino
"Heino" == Heino Walther hw@beardmann.dk writes:
Heino> Enjoy your vacation 😊
I am! :-) It's stupid hot and muggy here, so I'm hiding in the basement hacking on stuff. Fun!
Heino> You are right, in that version of ONTAP 5% was default aggregate snap reserve, but it was already set to 0 at creation 😊
I figured you must have. I wonder if your volumes are deduped at the aggregate level on the source, but since they transfer individually, you lose that dedupe savings.
Heino> I did find a pair of FAS3270 which I think will take over from the FAS3240, and allow us to add more disks to the aggregates…
Must be nice having that kind of spare hardware around. I'm trying to replace some old FAS3050s, but they're too cheap to spend the money on the needed solutions.
Heino> It’s not a nice solution, but it is a solution where we do not have to move a lot of data around… 😊
That's a plus. A head swap should be trivial in this case.
The two controllers are configured the same, with 5 aggregates of 70TB each, and the volumes also match from one controller to the other, so say vol01-05 are placed on aggr0 on both systems, and snapmirrors are set up between the volumes… Which just makes it that much stranger 😉
This has been running fine for several years and only just started to complain… The only change to the setup was the creation of a snap schedule on the source side, where we wanted to keep two daily snapshots (which we didn’t before)… Again, the source system with the snapshots is OK and not full… it is the destination that has run full… The destination only carries the snapmirrors and nothing else, and the two systems are deliberately set up to be similar to each other…
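Just to be thorough, the schedules and retained snapshots can be compared on both sides (vol01 is an example name); volume SnapMirror replicates the source snapshots, and the local schedule on a read-only destination volume shouldn’t add any of its own:

source> snap sched vol01             # the new schedule keeping two daily snapshots
source> snap list vol01
dest>   snap sched vol01
dest>   snap list vol01              # should match the source plus the SnapMirror base snapshot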
I think you should be able to find a 32xx series controller on eBay for cheap by now… and as long as you can live with the older ONTAP releases, you can get away with sharing licenses (the shorter version of the license)… I think they started to lock the licenses to the controller serial number from ONTAP 8.2 onwards… Of course no support etc., but since the products are long since out of service, I honestly don’t think NetApp will mind you sharing old licenses 😉
/Heino
My first guess would be that you might somehow be retaining more snapshots on the destination, perhaps at the volume level? Is snap list for all the volumes identical on source and destination? Does df -h on both sides show which volumes, if any, have a size discrepancy?
Maybe a 2nd guess would be that the space guarantee settings are different on the destination volumes.
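For example (vol01 is a placeholder name):

source> snap list vol01
dest>   snap list vol01
source> df -h vol01                  # per-volume used/available
dest>   df -h vol01
dest>   vol status -v vol01          # option list, including guarantee= and fractional_reserve=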
Since this is SnapMirror, you can’t really have a different number of snapshots between the two volumes… you would have to use SnapVault for that… Of course there can be more snapshots on the source, until the destination catches up…
Again, because these are all snapmirrored volumes, pretty much all the settings from the source are mirrored… also space guarantee and fractional reserve etc… but of course I checked…
/Heino
I faintly remember the dedupe metadata bug that persisted through quite a few 8.1 releases. It would cause "inexplicable" space loss, because the metadata would just grow and grow...
Are all controllers (source & destination) on the same patch release (8.1.4P10?)? Is there a reason not to update them to 8.2.5P5?
IIRC the problem was solved in 8.2.5 at the latest...
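The exact release on each head shows up in the version output, e.g.:

source> version                      # prints something like "NetApp Release 8.1.4Px 7-Mode"
dest>   version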
/Sebastian