Well yes and no
😊
If I run aggr show_space on the source, I can see that SIS (dedupe) saves us 2-3 TB per aggregate.
If I run the same command on the destination, it doesn’t show any dedupe savings… and maybe this is the issue: we do have a little overprovisioning, and if
the saved blocks don’t carry over from the source… we have a problem
😊
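For reference, the per-volume dedupe numbers can be compared on both heads like this (standard 7-mode commands; the volume name is a placeholder):

    df -s /vol/myvol        # blocks saved and %saved by dedupe on this volume
    sis status /vol/myvol   # whether dedupe is enabled and the state of its last run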
It has also been a while since I worked with 7-mode, but I seem to recall that dedupe savings should carry over with SnapMirror. SnapVault is another matter, of
course…
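As far as I recall, volume SnapMirror is block-based, so deduped blocks stay shared on the destination, while qtree SnapMirror and SnapVault transfer data logically and re-inflate it unless dedupe is also run on the secondary. One rough way to tell the relationship types apart:

    snapmirror status
    # qtree relationships are listed with /vol/<vol>/<qtree> paths;
    # volume relationships are listed with plain volume names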
Very strange indeed…
I think we will have to either delete some volumes to free up space… or maybe add disks to create a new aggregate we can move volumes to… or maybe upgrade the controllers
from FAS3240 to FAS3270, which would allow us bigger aggregates 😊
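If it comes to moving volumes, 7-mode 8.1 has DataMotion for Volumes, which I believe can move a SAN (LUN) volume between 64-bit aggregates non-disruptively, with restrictions I don’t remember in full, so treat this as a sketch and check the docs first (names are placeholders):

    aggr create aggr_new ...          # build the new aggregate from the added disks
    vol move start myvol aggr_new     # start the move (SAN volumes only, if memory serves)
    vol move status                   # watch progress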
/Heino
From: Alexander Griesser <AGriesser@anexia-it.com>
Date: Friday, 5 August 2022 at 16:36
To: Heino Walther <hw@beardmann.dk>, toasters@teaparty.net <toasters@teaparty.net>
Subject: RE: [EXT] Space issues on older NetApp...
It has been quite some time since I last had my fingers on a 7-mode NetApp, but can you compare the output of:
aggr show_space -h
Maybe this gives you an indication of where the missing space is allocated.
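If I remember the 7-mode syntax right, the aggregate-level numbers and aggregate snapshots are also worth comparing on both heads (the aggregate name is a placeholder):

    df -A -h aggr0          # aggregate used/available
    snap list -A aggr0      # aggregate snapshots that may be pinning blocks
    snap reserve -A aggr0   # aggregate snapshot reserve setting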
Best,
Alexander Griesser
Head of Systems Operations
ANEXIA Internetdienstleistungs GmbH
E-Mail:
AGriesser@anexia-it.com
Address, headquarters Klagenfurt: Feldkirchnerstraße 140, 9020 Klagenfurt
Managing Director: Alexander Windbichler
Commercial register: FN 289918a | Place of jurisdiction: Klagenfurt | VAT ID: ATU63216601
From: Toasters <toasters-bounces@teaparty.net>
On behalf of Heino Walther
Sent: Friday, 5 August 2022 16:32
To: toasters@teaparty.net
Subject: [EXT] Space issues on older NetApp...
Hi there
We have two systems that mirror each other’s volumes via SnapMirror.
We are talking 7-mode ONTAP 8.1.4
The two systems have the same controller: FAS3240
They have the same disks and aggregate configuration (70TB aggregates)
On the source side we use volumes that are thin-provisioned, with LUNs that have space reservation enabled; the LUNs are mostly close to 16TB (which is the max)
All volumes are snapmirrored to same-sized volumes on the destination system, placed on aggregates that mirror the source aggregates in size…
The aggregates on the source are all below 95% used.
Yet… we are now in the situation where a few destination aggregates are 100% full, while the source aggregates are still under 95% used…
I have checked almost everything, like aggregate snapshot reserves etc… but they should be the same…
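For completeness, these are the kinds of per-volume settings worth comparing on both heads (from memory, so the exact syntax may be slightly off; names are placeholders):

    vol status -v myvol              # guarantee, fractional_reserve and friends
    lun show -v /vol/myvol/mylun     # LUN size and space reservation state
    df -h myvol                      # volume-level used/available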
Can anyone explain why this can happen?
We are of course at a “deadlock” now… I don’t think we can add any more disks to the aggregates as they are at max size…
The only thing I can think of is to either delete a volume from the affected aggregates and re-sync it, hoping it doesn’t fill up again…
Another way would be to add disks and build a new aggregate, and move some of the volumes…
Is there something I have missed?
😊
/Heino