Well, yes and no 🙂
If I run aggr show_space on the source, I can see that SIS (dedupe) saves us 2-3 TB per aggregate. If I run the same command on the destination, it doesn't show any dedupe savings… and maybe this is the issue, because we do have a little overprovisioning, and if the saved blocks don't carry over from the source… we have a problem 🙂 It has also been a while since I worked with 7-Mode, but I seem to recall that dedupe savings should be migrated along with SnapMirror. SnapVault is another matter, of course…
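For what it's worth, a sketch of how I'd compare the per-volume savings on both heads (7-Mode syntax; the volume name is a placeholder). Volume SnapMirror is block-based, so deduplicated blocks should normally arrive deduplicated on the destination, whereas SnapVault / qtree SnapMirror is logical and re-inflates the data:

```shell
# Run on BOTH source and destination filers.
# df -s reports space saved by dedupe per volume (7-Mode).
df -s /vol/vol_lun1

# sis status shows whether dedupe is enabled/idle/active on the volume.
# On a VSM destination the volume is read-only, but savings should still
# be inherited from the source at the block level.
sis status /vol/vol_lun1
```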
Very strange indeed…
I think we will have to either delete some volumes to free up space… or maybe add disks to create a new aggregate we can move volumes to… or maybe upgrade the controllers from FAS3240 to FAS3270, which allows us bigger aggregates 🙂
/Heino
From: Alexander Griesser AGriesser@anexia-it.com Date: Friday, 5 August 2022 at 16:36 To: Heino Walther hw@beardmann.dk, toasters@teaparty.net Subject: RE: [EXT] Space issues on older NetApp... It has been quite some time since I last had my fingers on a 7-Mode NetApp, but can you compare the output of:
aggr show_space -h
Maybe this gives you an indication of where the missing space is allocated at.
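Something like the following might make the comparison easier (filer and aggregate names are placeholders, and I'm assuming you can reach both heads over ssh/rsh):

```shell
# Capture the space breakdown from both heads and diff them side by side.
ssh filer-src "aggr show_space -h aggr0" > src.txt
ssh filer-dst "aggr show_space -h aggr0" > dst.txt
diff -y src.txt dst.txt
```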
Best,
Alexander Griesser Head of Systems Operations
ANEXIA Internetdienstleistungs GmbH
E-Mail: AGriesser@anexia-it.com Web: http://www.anexia-it.com/
Registered office Klagenfurt: Feldkirchnerstraße 140, 9020 Klagenfurt Managing Director: Alexander Windbichler Commercial register: FN 289918a | Court of jurisdiction: Klagenfurt | VAT ID: ATU63216601
From: Toasters toasters-bounces@teaparty.net On behalf of Heino Walther Sent: Friday, 5 August 2022 16:32 To: toasters@teaparty.net Subject: [EXT] Space issues on older NetApp...
Hi there
We have two systems that mirror each other's volumes via SnapMirror. We are talking 7-Mode ONTAP 8.1.4. The two systems have the same controller model (FAS3240) and the same disk and aggregate configuration (70 TB aggregates).
On the source side we use thin-provisioned volumes containing LUNs with space reservation enabled; the LUNs are mostly close to 16 TB (which is the maximum).
All volumes are snapmirrored to volumes of the same size on the destination system, placed on aggregates that mirror the source aggregates in size…
The aggregates on the source are all below 95% used.
Yet we are now in the situation where a few destination aggregates are 100% full, while the source aggregates are still under 95% used… I have checked almost everything, like aggregate snapshot reserves etc., and they should be the same…
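For completeness, these are the 7-Mode checks I mean (aggregate/volume names are placeholders); differences in any of these between source and destination could account for the missing space:

```shell
# Aggregate snapshot reserve (often 5% by default on older 7-Mode):
snap reserve -A aggr0

# Aggregate-level snapshots that may be holding blocks:
snap list -A aggr0

# Volume guarantees and fractional reserve can also differ per volume:
vol status -v vol_lun1
```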
Can anyone explain why this can happen?
We are of course at a "deadlock" now… I don't think we can add any more disks to the aggregates, as they are at max size… The only thing I can think of is to delete a volume from the affected aggregates, re-sync the volume, and hope it doesn't fill up again…
Another way would be to add disks and build a new aggregate, and move some of the volumesā¦
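If it comes to that, a rough sketch of the 7-Mode steps (disk count, type, and names are made up; as I recall, nondisruptive vol move in 7-Mode only supports SAN volumes, which these LUN volumes should be):

```shell
# Build a new aggregate from freshly added disks (14 disks as an example):
aggr create aggr_new -T SAS 14

# Move a volume off the full aggregate (DataMotion for Volumes):
vol move start vol_lun1 aggr_new
vol move status
```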
Is there something I have missed? 🙂
/Heino