Hi
All volumes have fractional reserve set to 0… so that’s not it
😊
/Heino
From: Wayne McCormick <Wayne.McCormick@sjrb.ca>
Date: Friday, 5 August 2022 at 16:59
To: Alexander Griesser <AGriesser@anexia-it.com>, Heino Walther <hw@beardmann.dk>, toasters@teaparty.net <toasters@teaparty.net>
Subject: RE: [EXT] Space issues on older NetApp...
Check fractional reserve. That can use up space.
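If I remember correctly, on 7-Mode you can see the setting per volume in the volume options (the volume name below is just an example):

    fas> vol options vol_lun01
    ... fractional_reserve=100 ...

With space-reserved LUNs, a fractional reserve of 100 reserves extra space for overwrites once snapshots exist, which can quietly eat an aggregate.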
Wayne
From: Toasters <toasters-bounces@teaparty.net> On Behalf Of Alexander Griesser
Sent: Friday, August 5, 2022 8:36 AM
To: Heino Walther <hw@beardmann.dk>; toasters@teaparty.net
Subject: RE: [EXT] Space issues on older NetApp...
It has been quite some time since I last had my fingers on a 7-Mode NetApp, but can you compare the output of:
aggr show_space -h
Maybe this gives you an indication of where the missing space is allocated at.
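From memory (so take the exact column names with a grain of salt), the interesting part is the per-volume breakdown; the volume name and numbers below are made up for illustration:

    fas> aggr show_space -h aggr1
    ...
    Volume                 Allocated        Used    Guarantee
    vol_lun01                   18TB        16TB       volume
    ...

Comparing the Allocated column for the same volume on the source and destination aggregates should show where the missing space went.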
Best,
Alexander Griesser
Head of Systems Operations
ANEXIA Internetdienstleistungs GmbH
E-Mail: AGriesser@anexia-it.com
Address, Klagenfurt headquarters: Feldkirchnerstraße 140, 9020 Klagenfurt
Managing Director: Alexander Windbichler
Commercial register: FN 289918a | Jurisdiction: Klagenfurt | VAT ID: ATU63216601
From: Toasters <toasters-bounces@teaparty.net> On Behalf Of Heino Walther
Sent: Friday, 5 August 2022 16:32
To: toasters@teaparty.net
Subject: [EXT] Space issues on older NetApp...
Hi there
We have two systems that mirror each other’s volumes via SnapMirror.
We are talking 7-Mode ONTAP 8.1.4.
The two systems have the same controller: FAS3240.
They have the same disk and aggregate configuration (70TB aggregates).
On the source side we use thin-provisioned volumes with space-reservation-enabled LUNs; the LUNs are mostly close to 16TB (which is the maximum).
All volumes are snapmirrored to destination volumes of the same size, placed on aggregates that mirror the source aggregates in size…
The aggregates on the source are all below 95% used.
Yet we are now in a situation where a few destination aggregates are 100% full, while the source aggregates are still under 95% used…
I have checked almost everything, like the aggregate snapshot reserves, etc., but they should be the same…
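For reference, this is roughly how I compared the two sides (the aggregate name is just an example):

    fas> df -Ah aggr1          (aggregate usage, including snapshot reserve)
    fas> snap list -A aggr1    (aggregate-level snapshots)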
Can anyone explain why this can happen?
We are of course at a “deadlock” now… I don’t think we can add any more disks to the aggregates, as they are already at maximum size…
The only thing I can think of is to either delete a volume from the affected aggregates and re-sync it, hoping it doesn’t fill up again…
Another way would be to add disks, build a new aggregate, and move some of the volumes…
Is there something I have missed?
😊
/Heino