We usually partition the disks with ADPv2 and add one data partition from each disk to one aggregate in a new raid group. You can manually partition the drives and force the mandatory root partition to be as small as possible (depending on the drive size).
Depending on the size of your SSDs, it might be possible to go even lower than the 9344 used here:
disk assign -disk 4.12.* -owner YOURNODE1
node run -node YOURNODE1
disk show                                # to find the disk number, 0d.12.* here in this example
priv set diag
disk partition -n 3 -i 3 -b 9344 0d.12.0
disk partition -n 3 -i 3 -b 9344 0d.12.1
...
This will cause the new drives to be partitioned with the smallest possible root partition and two equally sized data partitions. For 3.84TB SSDs, the layout then looks like this:
CLUSTER::*> disk partition show -partition 4.12.0.*
                          Usable  Container     Container
Partition                 Size    Type          Name              Owner
------------------------- ------- ------------- ----------------- -----------------
4.12.0.P1                 1.75TB  spare         Pool0             YOURNODE1
4.12.0.P2                 1.75TB  spare         Pool0             YOURNODE1
4.12.0.P3                 28.75MB spare         Pool0             YOURNODE1
3 entries were displayed.
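As a quick sanity check before assigning (exact output will of course vary with your disk count and node names), the new data partitions should also be listed as spare capacity:

storage aggregate show-spare-disks       # the new P1/P2 data partitions should appear here as spares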
Then assign the partitions to your nodes:
disk partition assign 4.12.*.P1 -owner YOURNODE1 -force true
disk partition assign 4.12.*.P2 -owner YOURNODE2 -force true
Verify the correct assignment:
CLUSTER::*> disk show -fields data1-owner,data2-owner,root-owner
disk    data1-owner data2-owner root-owner
------- ----------- ----------- ----------
4.12.0  YOURNODE1   YOURNODE2   -
4.12.1  YOURNODE1   YOURNODE2   -
4.12.2  YOURNODE1   YOURNODE2   -
4.12.3  YOURNODE1   YOURNODE2   -
4.12.4  YOURNODE1   YOURNODE2   -
4.12.5  YOURNODE1   YOURNODE2   -
4.12.6  YOURNODE1   YOURNODE2   -
4.12.7  YOURNODE1   YOURNODE2   -
4.12.8  YOURNODE1   YOURNODE2   -
4.12.9  YOURNODE1   YOURNODE2   -
4.12.10 YOURNODE1   YOURNODE2   -
4.12.11 YOURNODE1   YOURNODE2   -
You can then add those partitions to the existing aggregates as a new raid group with the same raid group size and gain maximum striping across the disks.
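A minimal sketch of that step, assuming the aggregates are named YOURNODE1_aggr1 and YOURNODE2_aggr1 and you add 12 partitions per node (adjust names and counts to your environment, and review the layout ONTAP proposes before confirming):

storage aggregate add-disks -aggregate YOURNODE1_aggr1 -diskcount 12 -raidgroup new
storage aggregate add-disks -aggregate YOURNODE2_aggr1 -diskcount 12 -raidgroup new
storage aggregate show-status -aggregate YOURNODE1_aggr1   # verify the new raid group and its size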
If you made an error, just unpartition the disk in the node shell:
node run -node YOURNODE1
priv set diag
disk unpartition 0d.12.0
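To confirm the disk really is back to being a whole, unpartitioned spare, you can check its container type from the clustershell again (clustershell disk name, 4.12.0 here; a partitioned disk shows as shared, an unpartitioned spare as spare):

storage disk show -disk 4.12.0 -fields container-type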
Best,
Alexander Griesser
Head of Systems Operations
ANEXIA Internetdienstleistungs GmbH
E-Mail: AGriesser@anexia-it.com
Web: http://www.anexia-it.com
Address (headquarters Klagenfurt): Feldkirchnerstraße 140, 9020 Klagenfurt
Managing Director: Alexander Windbichler
Commercial register: FN 289918a | Place of jurisdiction: Klagenfurt | VAT ID: AT U63216601
-----Original Message-----
From: Toasters toasters-bounces@teaparty.net On Behalf Of Rue, Randy
Sent: Friday, 27 August 2021 19:35
To: Toasters toasters@teaparty.net
Subject: [EXTERNAL] options for adding a shelf/disks to an AFF-A220
Hi All,
I've cabled up the second external shelf for our AFF-A220 and it sees the disks. Now I'm at a fork in the road.
We currently have two local data tiers made up of 24 disks each, one on each node.
Best practices at https://docs.netapp.com/ontap-9/index.jsp?topic=%2Fcom.netapp.doc.dot-cm-psm... say a disk group of SSDs should contain 20-28 SSDs.
If I go to create a new local tier, the default is to create two new tiers, one on each node. That would give me two disk groups of 12, below the best practice. How badly would performance suffer? Any other caveats? Reliability?
I could put them all on one disk group, and on one node? Bleah.
I could add half of them to each existing tier/disk group and exceed the best practice by eight SSDs. I believe I've seen discussions (get it?) claiming that the top limit is flexible with SSDs; was it here? But when I add another shelf I'll have the same problem.
Seems like the clearest path forward for consistency and future expansions is to add two tiers with small disk groups. So, can anyone guess if I'll have a significant performance impact? Any other caveats?
Grateful for any guidance,
Randy in Seattle