Hi John,

 

Enjoy your vacation 😊

You are right: in that version of ONTAP the default aggregate snap reserve was 5%, but it was already set to 0 at creation 😊
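For anyone who wants to double-check on their own system, the aggregate snapshot reserve can be listed and zeroed on 7-mode with something like the following (aggr0 is just a placeholder name):

    snap reserve -A aggr0      # show the current aggregate snap reserve
    snap reserve -A aggr0 0    # set it to 0%
    snap list -A aggr0         # old aggregate snapshots can still pin space even at 0% reserve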

 

I did find a pair of FAS3270s, which I think can take over from the FAS3240s and allow us to add more disks to the aggregates…

It’s not a nice solution, but it is a solution where we do not have to move a lot of data around… 😊

 

/Heino
From: John Stoffel <john@stoffel.org>
Date: Friday, 5 August 2022 at 22.22
To: Heino Walther <hw@beardmann.dk>
Cc: Wayne McCormick <Wayne.McCormick@sjrb.ca>, Alexander Griesser <AGriesser@anexia-it.com>, toasters@teaparty.net <toasters@teaparty.net>
Subject: Re: SV: [EXT] Space issues on older NetApp...

>>>>> "Heino" == Heino Walther <hw@beardmann.dk> writes:

What about the aggregate reserve on the destination aggregates?  I
think it defaults to 5% (I could check on my existing 7-mode 8.1.3
system... but I'm on vacation).  :-)

That might be enough space to get things rolling again.

Heino> All volumes are set to 0 in fractional reserve… so that's not it 😊
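For reference, fractional reserve shows up in the per-volume options on 7-mode; checking each volume is a one-liner (vol_name is a placeholder), and the output should include fractional_reserve=0:

    vol options vol_name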



Heino> From: Wayne McCormick <Wayne.McCormick@sjrb.ca>
Heino> Date: Friday, 5 August 2022 at 16.59
Heino> To: Alexander Griesser <AGriesser@anexia-it.com>, Heino Walther <hw@beardmann.dk>,
Heino> toasters@teaparty.net <toasters@teaparty.net>
Heino> Subject: RE: [EXT] Space issues on older NetApp...

Heino> Check fractional reserve.  That can use up space.

Heino> Wayne

Heino> From: Toasters <toasters-bounces@teaparty.net> On Behalf Of Alexander Griesser
Heino> Sent: Friday, August 5, 2022 8:36 AM
Heino> To: Heino Walther <hw@beardmann.dk>; toasters@teaparty.net
Heino> Subject: AW: [EXT] Space issues on older NetApp...


Heino> It has been quite some time since I last had my fingers on a 7-mode NetApp, but can you compare the output of:

Heino> aggr show_space -h

Heino> Maybe this gives you an indication of where the missing space is allocated at.
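A side-by-side of the space commands on source and destination usually shows where the missing space went; a sketch, with aggr0 as a placeholder aggregate name:

    aggr show_space -h aggr0   # per-volume breakdown of allocated/used/reserved space
    df -A -h                   # aggregate-level used/available, including snapshot space
    snap list -A aggr0         # aggregate snapshots that may be pinning blocks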

Heino> Best,

Heino> Alexander Griesser

Heino> Head of Systems Operations

Heino> ANEXIA Internetdienstleistungs GmbH

Heino> E-Mail: AGriesser@anexia-it.com

Heino> Web: http://www.anexia-it.com

Heino> Address (headquarters Klagenfurt): Feldkirchnerstraße 140, 9020 Klagenfurt

Heino> Managing Director: Alexander Windbichler

Heino> Commercial register: FN 289918a | Place of jurisdiction: Klagenfurt | VAT ID: AT U63216601

Heino> From: Toasters <toasters-bounces@teaparty.net> On Behalf Of Heino Walther
Heino> Sent: Friday, 5 August 2022 16:32
Heino> To: toasters@teaparty.net
Heino> Subject: [EXT] Space issues on older NetApp...


Heino> Hi there

Heino> We have two systems that mirror each other's volumes via SnapMirror.

Heino> We are talking 7-mode ONTAP 8.1.4.

Heino> The two systems have the same controller: FAS3240

Heino> They have the same disks and aggregate configuration (70TB aggregates)

Heino> On the source side we use thin-provisioned volumes with LUNs that have space reservation
Heino> enabled; the LUNs are mostly close to 16TB (which is the max)

Heino> All volumes are snapmirrored to volumes on the destination system with the same size and placed on
Heino> the same aggregates that mirror the source aggregates in size…

Heino> The aggregates on the source are all below 95% used.

Heino> Yet… we are now in the situation where a few destination aggregates are 100% full, while the
Heino> source aggregates are still under 95% used…

Heino> I have checked almost everything, like aggregate snapshot reserves etc., but they should be the
Heino> same…

Heino> Can anyone explain why this can happen?

Heino> We are of course in a "deadlock" now… I don't think we can add any more disks to the aggregates, as
Heino> they are already at max size…

Heino> The only thing I can think of is to either delete a volume from the affected aggregates and re-sync
Heino> the volume, hoping it doesn't fill up again…
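For the record, the delete-and-resync route would look roughly like this on the destination filer — a sketch only, with all volume, aggregate, and filer names as placeholders:

    snapmirror quiesce dst_vol                               # stop updates to the mirror
    snapmirror break dst_vol                                 # make the destination writable
    vol offline dst_vol
    vol destroy dst_vol                                      # frees the space in the aggregate
    vol create dst_vol aggr0 16t                             # recreate at the required size
    vol restrict dst_vol                                     # required state for a new mirror
    snapmirror initialize -S src_filer:src_vol dst_vol       # full baseline transfer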

Heino> Another way would be to add disks and build a new aggregate, and move some of the volumes…

Heino> Is there something I have missed? 😊

Heino> /Heino

Heino> _______________________________________________
Heino> Toasters mailing list
Heino> Toasters@teaparty.net
Heino> https://www.teaparty.net/mailman/listinfo/toasters