The two controllers are configured the same with 5 aggregates of 70TB each, and volumes also match from one controller to the other, so say vol01-05 are placed on aggr0 on both systems, and snapmirrors are set up between the volumes…
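
For reference, the relationships are plain scheduled SnapMirror entries in /etc/snapmirror.conf on the destination, roughly like this (hostnames here are made up, ours differ):

    # source:vol     destination:vol   args  schedule (min hr dom dow)
    filerA:vol01     filerB:vol01      -     0 1 * *
    filerA:vol02     filerB:vol02      -     0 1 * *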

Which just makes it that much stranger 😉


This has been running fine for several years and just started to complain… The only change to the setup was the creation of a snap schedule on the source side, where we wanted to keep two daily snapshots (which we didn’t before)…
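
(That schedule is just the stock snap sched on the volumes, something like

    snap sched vol01 0 2 0

i.e. zero weeklies, two nightlies, zero hourlies… volume name is only an example.)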

Again… the source system with the snapshots is OK and not full… it is the destination that has run full…

The destination only carries the snapmirrors and nothing else, and the two systems are deliberately set up to be similar to each other…
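
For the comparison we have simply been looking at both heads with the usual

    df -A -h
    snap list -A aggr0

type commands on each side…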


I think you should be able to find a 32xx series controller on eBay for cheap by now… and as long as you can live with the older ONTAPs you can get away with sharing licenses (the older, shorter license key format)… I think they started to lock the licenses to the controller serial from ONTAP 8.2 onwards…

Of course no support etc… but since the products are long since out of service, I honestly don’t think NetApp will mind you sharing old licenses 😉


/Heino


From: John Stoffel <john@stoffel.org>
Date: Friday, 5 August 2022 at 22.28
To: Heino Walther <hw@beardmann.dk>
Cc: John Stoffel <john@stoffel.org>, Wayne McCormick <Wayne.McCormick@sjrb.ca>, Alexander Griesser <AGriesser@anexia-it.com>, toasters@teaparty.net <toasters@teaparty.net>
Subject: Re: SV: SV: [EXT] Space issues on older NetApp...

>>>>> "Heino" == Heino Walther <hw@beardmann.dk> writes:

Heino> Enjoy your vacation 😊

I am!  :-)  It's stupid hot and muggy here, so I'm hiding in the
basement hacking on stuff.  Fun!

Heino> You are right, in that version of ONTAP 5% was the default
Heino> aggregate snap reserve, but it was already set to 0 at creation
Heino> 😊

I figured you must have.  I wonder if your volumes are deduped on the
source, but since the volumes transfer individually, you lose the
dedupe savings.
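
If you want to rule dedupe in or out, something like this on both
heads should show whether it's enabled and what you're saving per
volume (volume name is just an example):

    sis status /vol/vol01
    df -s vol01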

Heino> I did find a pair of FAS3270 which I think will take over from
Heino> the FAS3240, and allow us to add more disks to the aggregates…

Must be nice having that kind of spare hardware around.  I'm trying to
replace some old 3050s, but they're too cheap to spend the money on
the needed solution.

Heino> It’s not a nice solution, but it is a solution where we do not
Heino> have to move a lot of data around… 😊

That's a plus.  A head swap should be trivial in this case.

Heino> From: John Stoffel <john@stoffel.org>
Heino> Date: Friday, 5 August 2022 at 22.22
Heino> To: Heino Walther <hw@beardmann.dk>
Heino> Cc: Wayne McCormick <Wayne.McCormick@sjrb.ca>, Alexander Griesser <AGriesser@anexia-it.com>,
Heino> toasters@teaparty.net <toasters@teaparty.net>
Heino> Subject: Re: SV: [EXT] Space issues on older NetApp...

>>>>> "Heino" == Heino Walther <hw@beardmann.dk> writes:

Heino> What about the aggregate reserve on the destination aggregates?  I
Heino> think it defaults to 5% (I could check on my existing 7-mode 8.1.3
Heino> system... but I'm on vacation.  :-)

Heino> That might be enough space to get things rolling again.
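
Heino> From memory, the 7-Mode commands to check (and if need be clear)
Heino> it are something like this, with aggr0 just an example name:
Heino>
Heino>     snap reserve -A aggr0
Heino>     snap reserve -A aggr0 0
Heino>     snap sched -A aggr0 0 0 0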

Heino> All volumes are set to 0 in fractional reserve… so that’s not it 😊

Heino> From: Wayne McCormick <Wayne.McCormick@sjrb.ca>
Heino> Date: Friday, 5 August 2022 at 16.59
Heino> To: Alexander Griesser <AGriesser@anexia-it.com>, Heino Walther <hw@beardmann.dk>,
Heino> toasters@teaparty.net <toasters@teaparty.net>
Heino> Subject: RE: [EXT] Space issues on older NetApp...

Heino> Check fractional reserve.  That can use up space.
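
Heino> If memory serves, something like this will show and clear it per
Heino> volume (volume name is just an example):
Heino>
Heino>     vol status -v vol01
Heino>     vol options vol01 fractional_reserve 0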

Heino> Wayne

Heino> From: Toasters <toasters-bounces@teaparty.net> On Behalf Of Alexander Griesser
Heino> Sent: Friday, August 5, 2022 8:36 AM
Heino> To: Heino Walther <hw@beardmann.dk>; toasters@teaparty.net
Heino> Subject: AW: [EXT] Space issues on older NetApp...

Heino> It's been quite some time since I last had my fingers on a 7-Mode NetApp, but can you compare the output of:

Heino> aggr show_space -h

Heino> Maybe this gives you an indication of where the missing space is allocated.

Heino> Best,

Heino> Alexander Griesser

Heino> Head of Systems Operations

Heino> ANEXIA Internetdienstleistungs GmbH

Heino> E-Mail: AGriesser@anexia-it.com

Heino> Web: http://www.anexia-it.com

Heino> Address (headquarters Klagenfurt): Feldkirchnerstraße 140, 9020 Klagenfurt

Heino> Managing Director: Alexander Windbichler

Heino> Company register: FN 289918a | Jurisdiction: Klagenfurt | VAT number: AT U63216601

Heino> From: Toasters <toasters-bounces@teaparty.net> On Behalf Of Heino Walther
Heino> Sent: Friday, 5 August 2022 16:32
Heino> To: toasters@teaparty.net
Heino> Subject: [EXT] Space issues on older NetApp...

Heino> Hi there

Heino> We have two systems that mirror each other's volumes via SnapMirror.

Heino> We are talking 7-Mode ONTAP 8.1.4

Heino> The two systems have the same controller: FAS3240

Heino> They have the same disks and aggregate configuration (70TB aggregates)

Heino> On the source side we use volumes that are thin-provisioned, with LUNs that have space
Heino> reservation enabled; the LUNs are mostly close to 16TB (which is the max)
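
Heino> (Roughly this shape, with made-up names:
Heino>
Heino>     vol create vol01 aggr0 17t
Heino>     vol options vol01 guarantee none
Heino>     lun create -s 15t -t vmware /vol/vol01/lun0
Heino>
Heino> i.e. no volume guarantee, but space-reserved LUNs inside.)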

Heino> All volumes are snapmirrored to same-sized volumes on the destination system, placed on
Heino> aggregates that mirror the source aggregates in size…

Heino> The aggregates on the source are all below 95% used.

Heino> Yet… we are now in the situation where a few destination aggregates are 100% full, while
Heino> the source aggregates are still under 95% used…

Heino> I have checked almost everything, like aggregate snapshot reserves etc… but they should be
Heino> the same…

Heino> Can anyone explain why this can happen?

Heino> We are of course at a “deadlock” now… I don’t think we can add any more disks to the
Heino> aggregates, as they are at max size…

Heino> The only thing I can think of is to either delete a volume from the affected aggregates, then
Heino> re-sync the volume and hope it doesn’t fill up again…
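
Heino> That would be the usual break/destroy/re-baseline dance, something
Heino> like this (names made up):
Heino>
Heino>     snapmirror quiesce filerB:vol03
Heino>     snapmirror break filerB:vol03
Heino>     vol offline vol03
Heino>     vol destroy vol03
Heino>     vol create vol03 aggr0 17t
Heino>     vol restrict vol03
Heino>     snapmirror initialize -S filerA:vol03 filerB:vol03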

Heino> Another way would be to add disks and build a new aggregate, and move some of the volumes…

Heino> Is there something I have missed? 😊

Heino> /Heino
