Using "whole" or non-partitioned drives will use a minimum of 6 full drives (at least to start, P+P+D with RAID-DP and drive size <10TB). If the drive size is 10T or larger, RAID-TEC is used instead.
Not-knowing the size of the drives/SSDs in question make this a little difficult to answer. It is trivial to get around the whole-drive/partitioned drive when adding in the future. Set the "maxraidsize" to whatever is in the raidgroup now. any future drives are always added as whole drives unless they are pre-partitioned.
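A rough sketch of what that looks like (aggregate name, raid group width, and disk count are placeholders, not your actual values). Check the current raid group width, pin maxraidsize to it, then add whole drives:

::> storage aggregate show -aggregate aggr1_data -fields maxraidsize
::> storage aggregate modify -aggregate aggr1_data -maxraidsize 19
::> storage aggregate add-disks -aggregate aggr1_data -diskcount 6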
--tmac
Tim McCarthy, Principal Consultant
Proud Member of the #NetAppATeam https://twitter.com/NetAppATeam
I Blog at TMACsRack https://tmacsrack.wordpress.com/
On Mon, Apr 26, 2021 at 10:31 AM Heino Walther hw@beardmann.dk wrote:
Hi Randy
There are two ways you can go… the correct way, and the “hacky” way…
When you boot up the system with the serial console or Service Processor attached, you can press "Ctrl-C" to get into the boot menu, from where you can zero all disks on the system (option 4, "Clean configuration and initialize all disks").
This will build two root aggregates, one for each controller. They are typically built from 4 disks that are partitioned as "Root-Data-Data" disks, that is, a small root partition and two equal-sized data partitions.
The sizes of these partitions may vary depending on your disks and your controller model. (I think more disks are used… maybe 8…)
All other disks are not touched and therefore remain spares, and the 2 x 4 data partitions are also presented as spare "disks", each half the size of the physical disk (minus the small root partition).
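If it helps to see what the initialization left behind, something like this should list the whole spare disks and the spare partitions per node (node name is a placeholder):

::> storage aggregate show-spare-disks -owner-name ctrl-a
::> storage disk show -container-type spare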
Each controller requires a root aggregate, no matter what.
If you would like to have just one data aggregate on one of the controllers, you can do so, but be aware that if you start your aggregate with the partitioned disks and add non-partitioned disks later on, the new disks will most likely be partitioned by default, and the other partition will be assigned to the partner controller.
One way to get around this is to not use the partitioned disks at all and start your new aggregate with unpartitioned disks only, which you will have to assign to one of the controllers.
If you would like to use the partitioned disks as well, you can create a new raid group in the same aggregate using the partitions owned by one of the controllers…
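A sketch of that (aggregate name and counts are placeholders; if I remember right, "-raidgroup new" forces the partitions into a raid group of their own rather than growing raid group 0, but check the man page):

::> storage aggregate show-spare-disks -owner-name ctrl-a
::> storage aggregate add-disks -aggregate aggr1_data -diskcount 4 -raidgroup new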
You will then still have the partitions that are assigned to the partner node… (this is where it gets hacky)…
You are in fact able to assign these partitions to the same node, and you are able to add them to the same RAID group as the other partitions… so a RAID group consisting of both partitions of the same disk 😊
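A very rough sketch of the hacky part (disk ID and node names are placeholders, and the "-data2" flag is from memory, so verify the exact syntax against the man pages before trying this, ideally with auto-assign turned off first):

::> storage disk option modify -node * -autoassign off
::> set -privilege advanced
::*> storage disk removeowner -disk 1.0.23 -data2 true
::*> storage disk assign -disk 1.0.23 -data2 true -owner ctrl-a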
If one disk fails you will have two partitions fail inside your RAID group… which is a bit scary to me… so I would suggest creating a separate RAID group for them…
So, an example: a system with 24 disks… let's say the disks are 10TB each…
Ctrl-A:
RootAggr: Consists of 4 partitions (10G each)
Data Aggr: Consists of:
RAID Group 0: 19 x Physical disks (RAID-DP) 170TB
RAID Group 1: 4 x Partitions (RAID-DP) 9.98TB (10/2 minus 10G)
RAID Group 2: 4 x Partitions (RAID-DP) 9.98TB (10/2 minus 10G)
Ctrl-B:
RootAggr: Consists of 4 partitions (10G each)
I hope this makes sense… keep in mind that this example is not exactly how it is going to look, as there will be more partitioned disks: as a minimum each root aggregate needs 4 partitions… so it will more likely be 8 disks that are partitioned…
(I just didn’t want to correct the numbers above)
Now… that being said… there are ways to limit the number of partitions or disks used for the root aggregates…. Once you are up and running with the defaults, you can create two new root aggregates that are either smaller in size or use RAID4 instead of RAID-DP… of course with increased risk if a disk dies…. (the information on the root aggregate can and should be backed up, and is pretty easy to restore should it fail). Your data aggregates should be no less than RAID-DP.
The way to create smaller root aggregates and your own partition sizes involves fairly hairy commands that require diag privilege ("set -privilege diag")… so unless you know what you are doing, I would advise against it. (but it is possible)
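The supported flavour of this is to move the root onto a few whole disks with migrate-root (the command Randy mentions further down); a rough sketch, with the node name and disk list as placeholders, run at advanced privilege, and as far as I remember the node reboots as part of the move:

::> set -privilege advanced
::*> system node migrate-root -node ctrl-a -disklist 1.0.20,1.0.21,1.0.22 -raid-type raid4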
Basically I would suggest staying with the default two-node, half-and-half setup… maybe you can use some other features to spread the load across both aggregates? I am pretty sure that if you are just using CIFS or NFS, you should be able to "merge" two volumes (one from each controller) into one logical workspace… But since I have not worked with this much, I will let someone else explain that part… (I'm pretty sure you can even set this up from the GUI…)
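If the "merge" Heino has in mind is a FlexGroup volume (my guess, since that is the usual way to present space from both controllers as one NFS/CIFS namespace), a sketch would look roughly like this, with the SVM, volume, aggregate names, and size as placeholders:

::> volume create -vserver svm1 -volume fg_data -aggr-list aggr_ctrl_a,aggr_ctrl_b -aggr-list-multiplier 4 -size 30TB -junction-path /fg_data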
/Heino
From: Toasters toasters-bounces@teaparty.net on behalf of Rue, Randy <randyrue@gmail.com> Date: Monday, 26 April 2021 at 15:59 To: Toasters toasters@teaparty.net Subject: AFF-A220 9.8P1 default aggregates and root volumes
Hello,
We're setting up a new and relatively small SSD NAS and it's arrived configured for two equally sized tiers with each node and its root aggr/vol in each tier. Each tier is about 15TB in size before compression.
We're hoping to avoid needing to manage two equally sized data aggrs and moving data volumes around to balance them. For one thing, our largest data volumes are larger than 15TB and snapmirror doesn't seem to want to let us set up a relationship to load the new volumes from our old cluster, even if the target has ample room after the guaranteed 3X compression.
We're willing to lose the wasted space involved in creating one tier/partition/aggr/root volume with the minimum number of disks for raid-dp (3?) for one node if that will allow us to put the rest on the other set and have a single large container for our unstructured file volumes.
We tried moving all volumes to one tier and deleting the other. But one node is still sitting on those disks.
Our old cluster is at 9.1P6 and I'm clear that some basic concepts have changed with the introduction of partitions and whatnot. So bear with me if I'm asking n00b questions even after a few years running NetApp gear.
- Is what I've proposed above reasonable (one minimum aggr and one large one)? Is it commonly done? Is it a good idea?
- Can you point me to any "change notes" type doc that explains these new terms/objects to an otherwise experienced NetApp admin?
- If the above is viable, what do I need to do to get there?
For what it's worth, I've been noodling a bit with the "system node migrate-root" command (this new AFF is not in production yet) and got a warning that my target disks don't have a spare root partition (I specified some of the disks on the "old" aggr). That warning says I can find available disks with the command "storage disk partition show -owner-node-name redacted-a -container-type spare -is-root true" but the CLI then complains that partition is not a command (I'm at "advanced" privilege level). Is the given command correct?
Hope to hear from you,
Randy in Seattle