Hi All,
I've cabled up the second external shelf for our AFF-A220 and it sees the disks. Now I'm at a fork in the road.
We currently have two local data tiers made up of 24 disks each, one on each node.
Best practices at https://docs.netapp.com/ontap-9/index.jsp?topic=%2Fcom.netapp.doc.dot-cm-psm... say a disk group of SSDs should contain 20 to 28 SSDs.
If I go to create a new local tier the default is to create two new tiers, one on each node. That would give me two disk groups of 12, below the best practice. How badly would performance suffer? Any other caveats? Reliability?
I could put them all on one disk group, and on one node? Bleah.
I could add half of them to each existing tier/disk group and exceed the best practice by eight SSDs. I believe I've seen discussions (get it?) claiming that the top limit is flexible with SSDs; was it here? But when I add another shelf I'll have the same problem.
Seems like the clearest path forward for consistency and future expansions is to add two tiers with small disk groups. So, can anyone guess if I'll have a significant performance impact? Any other caveats?
Grateful for any guidance,
Randy in Seattle
Why create a new tier (aggregate)? Why not expand both existing local tiers with a RAID group of 12 (10+2)?
Then, when you add another shelf, you'll just expand the RAID groups from 10+2 to 22+2, growing the local tiers again (they can be up to 800TB in size).
Performance-wise with SSDs, the effect of a small RG size is negligible. (Well, you shouldn't go down to 1+2 or even 3+2, but at 10+2 you probably won't notice a thing, especially considering that we're talking about a relatively small controller here.)
IMHO the management advantages of having fewer aggregates/tiers would be the deciding factor here.
There are reasons for more local tiers (e.g. SnapLock), but I guess those don't come into play here.
Regarding your mention of "I could add half of them to each existing tier/disk group and exceed the best practice by eight SSDs": no, you can't. 26+2 is the maximum you can go to per RAID group (technically you could also do 26+3). So keeping RGs balanced would be the goal here:
aggrx = (22+2) + (10+2)
for both nodes. Then you can expand with another shelf to (22+2) + (22+2) per node and will be perfectly balanced regarding RGs and Local Tiers...
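For illustration, roughly like this from the clustershell (the aggregate names here are made up, substitute your own, and check your raidsize first):

storage aggregate modify -aggregate aggr1_node1 -raidsize 24
storage aggregate add-disks -aggregate aggr1_node1 -diskcount 12 -raidgroup new
storage aggregate modify -aggregate aggr2_node2 -raidsize 24
storage aggregate add-disks -aggregate aggr2_node2 -diskcount 12 -raidgroup new

When the next shelf arrives, another add-disks without '-raidgroup new' should then fill the 10+2 group up toward the raidsize limit.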
My 2c
Sebastian
Gah. This morning, half-awake, I realized the same thing. I've been lumping RAID groups together with aggregates/tiers in my muddled head and should have just created new RAID groups in the existing two main tiers. Wish we'd had this conversation before I created two new tiers and moved a bunch of constituent volumes.
I'll get to work:
* moving the CVs back to the old tiers before they grow too much
* destroying the two new tiers
* using the freed disks to expand the existing tiers (rough sketch below)
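Roughly, assuming I've got the command names right (the SVM, volume, and aggregate names below are placeholders, not our real ones):

volume move start -vserver svm1 -volume some_cv -destination-aggregate aggr1_node1
volume move show                                  # wait for the moves to finish
storage aggregate offline -aggregate new_tier_node1
storage aggregate delete -aggregate new_tier_node1
storage aggregate add-disks -aggregate aggr1_node1 -diskcount 12 -raidgroup new

and then the same again for node 2.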
"Randy" == Randy Rue randyrue@gmail.com writes:
Randy> Hi All,
Randy> I've cabled up the second external shelf for our AFF-A220 and it sees
Randy> the disks. Now I'm at a fork in the road.
Randy> We currently have two local data tiers made up of 24 disks each, one on
Randy> each node.
Randy> Best practices at
Randy> https://docs.netapp.com/ontap-9/index.jsp?topic=%2Fcom.netapp.doc.dot-cm-psm...
Randy> say a disk group of SSDs should be from 20-28 SSDs each.
Don't you mean 'RAID group' here? In any case, I'd just split the disks in half and add them to the aggregates on each head to create some new RGs.
But it would help if you could show your current setup with:
storage aggregate show-status
I've got an A200 running 9.3P10 (yes, I know it's old... sue me) and we haven't had any issues, but I admit we haven't grown it either.
Randy> If I go to create a new local tier the default is to create two new
Randy> tiers, one on each node. That would give me two disk groups of 12,
Randy> below the best practice. How badly would performance suffer? Any other
Randy> caveats? Reliability?
I don't think you'll notice any performance problems; the real issue comes down to the capacity you lose by having to dedicate more disks to parity.
Randy> I could put them all on one disk group, and on one node? Bleah.
Randy> I could add half of them to each existing tier/disk group and
Randy> exceed the best practice by eight SSDs. I believe I've seen
Randy> discussions (get it?) claiming that top limit is flexible with
Randy> SSDs, was it here? But when I add another shelf I'll have the
Randy> same problem.
Once you add another shelf after this one, things will balance out, since you'll ideally fill your RAID groups out to a larger number of data disks.
Randy> Seems like the clearest path forward for consistency and future
Randy> expansions is to add two tiers with small disk groups. So, can
Randy> anyone guess if I'll have a significant performance impact? Any
Randy> other caveats?
I don't think you'll notice any problems, if only because the A220 isn't the highest-performing system anyway. If you've got the time, set up a new aggregate on one head with the same layout as the current ones, partly fill it, and run some performance tests.
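Something like this, maybe (all the names here are placeholders, adjust the counts to your layout):

storage aggregate create -aggregate aggr_test -node node01 -diskcount 12
volume create -vserver svm1 -volume testvol -aggregate aggr_test -size 1t

Then mount testvol from a client and beat on it with your favorite load generator, e.g.:

fio --name=randrw --rw=randrw --bs=8k --direct=1 --ioengine=libaio --iodepth=32 --numjobs=4 --time_based --runtime=120 --size=10g --filename=/mnt/testvol/fio.dat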
Good luck, John
Two of you just reminded me what I should have figured out earlier; see my other response.
Thanks!
We usually partition the disks with ADPv2 and add one data partition from each disk to each node's aggregate in a new RAID group. You can manually partition the drives and force the mandatory root partition to be as small as possible (depending on the drive size).
Depending on the size of your SSDs, it might be possible to go down even further than the number 9344:
disk assign -disk 4.12.* -owner YOURNODE1
node run -node YOURNODE1
disk show                                  # to find the disk number, 0d.12.* here in this example
priv set diag
disk partition -n 3 -i 3 -b 9344 0d.12.0
disk partition -n 3 -i 3 -b 9344 0d.12.1
...
This will cause the new drives to be partitioned with the smallest possible root partition and two equally sized data partitions. For 3.84TB SSDs, the layout then looks like this:
CLUSTER::*> disk partition show -partition 4.12.0.*
                          Usable  Container     Container
Partition                 Size    Type          Name              Owner
------------------------- ------- ------------- ----------------- -----------------
4.12.0.P1                 1.75TB  spare         Pool0             YOURNODE1
4.12.0.P2                 1.75TB  spare         Pool0             YOURNODE1
4.12.0.P3                 28.75MB spare         Pool0             YOURNODE1
3 entries were displayed.
Then assign the partitions to your nodes:
disk partition assign 4.12.*.P1 -owner YOURNODE1 -force true
disk partition assign 4.12.*.P2 -owner YOURNODE2 -force true
Verify the correct assignment:
CLUSTER::*> disk show -fields data1-owner,data2-owner,root-owner
disk    data1-owner data2-owner root-owner
------- ----------- ----------- ----------
4.12.0  YOURNODE1   YOURNODE2   -
4.12.1  YOURNODE1   YOURNODE2   -
4.12.2  YOURNODE1   YOURNODE2   -
4.12.3  YOURNODE1   YOURNODE2   -
4.12.4  YOURNODE1   YOURNODE2   -
4.12.5  YOURNODE1   YOURNODE2   -
4.12.6  YOURNODE1   YOURNODE2   -
4.12.7  YOURNODE1   YOURNODE2   -
4.12.8  YOURNODE1   YOURNODE2   -
4.12.9  YOURNODE1   YOURNODE2   -
4.12.10 YOURNODE1   YOURNODE2   -
4.12.11 YOURNODE1   YOURNODE2   -
You can then add those partitions to the existing aggregates as a new RAID group with the same RG size, and you'll gain maximum striping across the disks.
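For example, assuming your aggregate is named aggr1_node1 and the RG size matches:

storage aggregate add-disks -aggregate aggr1_node1 -diskcount 12 -raidgroup new
storage aggregate show-status -aggregate aggr1_node1    # verify the new raid group layout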
If you made an error, just unpartition the disk in the node shell:
node run -node YOURNODE1
priv set diag
disk unpartition 0d.12.0
Best,
Alexander Griesser Head of Systems Operations
ANEXIA Internetdienstleistungs GmbH
E-Mail: AGriesser@anexia-it.com Web: http://www.anexia-it.com