If you like rsync... check out XCP on the NetApp support site.

--tmac

Tim McCarthy, Principal Consultant

Proud Member of the #NetAppATeam

I Blog at TMACsRack




On Mon, Apr 26, 2021 at 1:06 PM Randy Rue <randyrue@gmail.com> wrote:

And our source FAS is at 9.1, and it's no longer on NetApp support (we have HW support only from a 3rd party), so no upgrades.


I'm liking rsync more and more




On 4/26/2021 9:20 AM, Jeff Bryer wrote:

Your source and destination both have to be a FlexGroup for Snapmirror.

I asked a long time ago for that requirement to be dropped.


Plus you have to match geometry (so same number of constituent volumes)




From: Toasters <toasters-bounces@teaparty.net> on behalf of Randy Rue <randyrue@gmail.com>
Sent: Monday, April 26, 2021 9:14 AM
To: Heino Walther; tmac
Cc: Toasters
Subject: Re: SV: AFF-A220 9.8P1 default aggregates and root volumes
 

We're not serving any LUNs and have no plans to.

For all my weird-ass questions, I'm actually a big fan of sticking with defaults unless there's a true need to "go snowflake," and our application is not exotic: pretty much just a plain old NFS NAS that serves up files and Xen datastores.

Testing now whether I can create a flexgroup as a DP type and make it a snapmirror target.
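In CLI terms that test would look roughly like the sketch below. The vserver, volume, and aggregate names are hypothetical, and option details may differ slightly on 9.8:

```
::> volume create -vserver svm1 -volume fg_dst -aggr-list aggr_a,aggr_b -aggr-list-multiplier 4 -size 20TB -type DP
::> snapmirror create -source-path svm_src:vol_src -destination-path svm1:fg_dst -type XDP
::> snapmirror initialize -destination-path svm1:fg_dst
```

As noted elsewhere in the thread, SnapMirror for FlexGroups expects a FlexGroup with matching geometry on both ends, so the initialize step is where a FlexVol-to-FlexGroup attempt would be expected to fail.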

On 4/26/2021 7:58 AM, Heino Walther wrote:

I think we agree that FlexGroups are the way forward, but one will have to take care and verify that things like SnapMirror/SnapVault still work (if they are used)… also, I think FlexGroups do not support LUNs?

 

Anyway, you are correct about the partitions… there are several ways to change the configuration… I have had many more-or-less hacky setups, so it is normally possible to go outside the "box" NetApp has set as default.

One example is the possibility of creating a root aggregate with RAID4… and a far more hacky, "internal only" way is to create your own partition sizes, which involves undocumented commands… but it's possible.

But I have had issues with these hacky setups… for example, it is possible to force two partitions from the same disk into the same RAID group… it's not clever, but possible nonetheless 😉. This will cause issues if you play around with "disk replace" later on, though… I hit this myself, and NetApp support was required to fix it… basically the disk replace command hung, waiting for something, which caused all rebuild tasks etc. to stall… I think this was a bug in ONTAP, but I bet they are not making much effort to fix it, because if you choose to "hack" this much, you are basically on your own… but as always, NetApp support is ready to help, no questions asked 😊

 

One other remark… I really hope that NetApp moves the root aggregate onto local flash cards in the future, so that we can get rid of these partitions 😊. Or that we can get back to the good old days of root volumes ("vol0"), which is not likely to happen.

 

/Heino

 

From: tmac <tmacmd@gmail.com>
Date: Monday, 26 April 2021 at 16:46
To: Heino Walther <hw@beardmann.dk>
Cc: Rue, Randy <randyrue@gmail.com>, Toasters <toasters@teaparty.net>
Subject: Re: AFF-A220 9.8P1 default aggregates and root volumes

Using "whole" (non-partitioned) drives will use a minimum of 6 full drives (at least to start: P+P+D per node with RAID-DP, for drives smaller than 10TB).

If the drive size is 10T or larger, RAID-TEC is used instead.

 

Not knowing the size of the drives/SSDs in question makes this a little difficult to answer.

It is trivial to get around the whole-drive/partitioned-drive distinction when adding drives in the future.

Set "maxraidsize" to whatever is in the RAID group now; any future drives are always added as whole drives unless they are pre-partitioned.
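A sketch of that adjustment, with a hypothetical aggregate name (verify the exact option names on your release):

```
::> storage aggregate show -aggregate aggr1_data
::> storage aggregate modify -aggregate aggr1_data -maxraidsize 19
::> storage aggregate add-disks -aggregate aggr1_data -diskcount 4
```

The idea being that with maxraidsize pinned at the current RAID group size, added whole drives land in a new RAID group rather than being mixed into the existing one.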


--tmac

 

Tim McCarthy, Principal Consultant

Proud Member of the #NetAppATeam

I Blog at TMACsRack

 

 

 

On Mon, Apr 26, 2021 at 10:31 AM Heino Walther <hw@beardmann.dk> wrote:

Hi Randy

 

There are two ways you can go… the correct way, and the “hacky” way…

When you boot up the system with the serial console or Service Processor attached, you can press Ctrl-C to get into the boot menu, from which you can zero all disks on the system.

This will build two root aggregates, one for each controller.  They are typically built from 4 disks that are partitioned as "root-data-data" disks, that is, a small root partition and two equally sized data partitions.

Disk sizes of these may vary depending on your disks and your controller model. (I think more disks are used… maybe 8…)

All other disks are not touched and are therefore spares, and the 2 x 4 data partitions are also presented as spare "disks" half the size of the physical disk (minus the small root partition).
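After initialization, you can sanity-check the resulting layout; a sketch, with a hypothetical node name:

```
::> storage aggregate show-spare-disks -owner-name cluster1-01
::> storage disk show -fields owner,container-type
```

The first command lists the spare whole disks and spare partitions per node; the second shows which disks are shared (partitioned) versus unpartitioned.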

Each controller requires a root aggregate, no matter what.

If you would like just one data aggregate on one of the controllers, you can do that, but be aware that if you start your aggregate with the partitioned disks and add non-partitioned disks later, the new disks will most likely be partitioned by default, with the other partition assigned to the partner controller.

One way around this is to not use the partitioned disks at all and start your new aggregate with unpartitioned disks, which you will have to assign to one of the controllers.

If you would like to use the partitioned disks, you can create a new RAID group in the same aggregate using the partitions from one of the nodes….

You will then have the partitions that are assigned to the partner node…  (this is where it gets hacky)…

You are in fact able to assign these partitions to the same node, and you are able to add them to the same RAID group as the other partitions… so a RAID group consisting of partitions of the same disk 😊

If one disk fails, you will have two partitions fail inside your RAID group… which is a bit scary to me… so I would suggest creating a separate RAID group for them…
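The cleaner of the two paths described here (whole spares assigned to one node, and partitions kept in their own RAID group) might be sketched like this; disk IDs, node, and aggregate names are hypothetical:

```
::> storage disk assign -disk 1.0.12 -owner cluster1-01
::> storage aggregate create -aggregate aggr_data -node cluster1-01 -diskcount 8 -raidtype raid_dp
::> storage aggregate add-disks -aggregate aggr_data -diskcount 4 -raidgroup new
```

The "-raidgroup new" option keeps the added partitions in a separate RAID group instead of growing RAID group 0.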

 

So, an example: a system with 24 disks… let's say the disks are 10TB each…

Ctrl-A:

RootAggr: Consists of 4 partitions (10G each)

Data Aggr: Consists of:

RAID Group 0: 19 x Physical disks (RAID-DP) 170TB

RAID Group 1: 4 x Partitions (RAID-DP) 9.98TB (10/2 minus 10G)

RAID Group 2: 4 x Partitions (RAID-DP) 9.98TB (10/2 minus 10G)

Ctrl-B:

RootAggr: Consists of 4 partitions (10G each)

 

I hope this makes sense… keep in mind that this example is not exactly how it will look, as there will be more partitioned disks; as a minimum the system needs 4 partitions for the root aggregate… so it will more likely be 8 disks that are partitioned…

(I just didn’t want to correct the numbers above)

 

Now… that being said… there are ways to limit the number of partitions or disks used for the root aggregates…. Once you are up and running with the defaults, you can create two new root aggregates that are either smaller in size or use RAID4 instead of RAID-DP… of course with increased risk if a disk dies…. (The information on the root aggregate can and should be backed up, and is pretty easy to restore should it fail.) Your data aggregates should be no less than RAID-DP.

Creating smaller root aggregates and your own partition sizes involves fairly hairy commands which require "priv set diag"… so unless you know what you are doing, I would advise against it. (But it is possible.)

 

Basically I would suggest the default two-node, half-and-half setup… maybe you can use some other features to spread your load across both aggregates?   I am pretty sure that if you are just using CIFS or NFS, you should be able to "merge" two volumes (one from each controller) into one logical namespace…  But since I have not worked with this much, I will let someone else explain that part… (I'm pretty sure you can even set this up from the GUI…)
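For NFS/CIFS, the "merge" in question is the SVM namespace: volumes living on either node can be junctioned under one tree. A sketch with hypothetical names:

```
::> volume mount -vserver svm1 -volume vol_node_a -junction-path /data/a
::> volume mount -vserver svm1 -volume vol_node_b -junction-path /data/b
```

A client mounting svm1:/data then sees both volumes as one tree, although each individual volume (and file) still lives wholly on one node; a FlexGroup is the route to a single large container.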

 

/Heino

 

 

 

 

From: Toasters <toasters-bounces@teaparty.net> on behalf of Rue, Randy <randyrue@gmail.com>
Date: Monday, 26 April 2021 at 15:59
To: Toasters <toasters@teaparty.net>
Subject: AFF-A220 9.8P1 default aggregates and root volumes

Hello,

We're setting up a new and relatively small SSD NAS and it's arrived configured for two equally sized tiers with each node and its root aggr/vol in each tier. Each tier is about 15TB in size before compression.

We're hoping to avoid needing to manage two equally sized data aggrs and moving data volumes around to balance them. For one thing, our largest data volumes are larger than 15TB and snapmirror doesn't seem to want to let us set up a relationship to load the new volumes from our old cluster, even if the target has ample room after the guaranteed 3X compression.

We're willing to lose the wasted space involved in creating one tier/partition/aggr/root volume with the minimum number of disks for raid-dp (3?) for one node if that will allow us to put the rest on the other set and have a single large container for our unstructured file volumes.

We tried moving all volumes to one tier and deleting the other. But one node is still sitting on those disks.

Our old cluster is at 9.1P6 and I'm clear that some basic concepts have changed with the introduction of partitions and whatnot. So bear with me if I'm asking n00b questions even after a few years running NetApp gear.

  • Is what I've proposed above reasonable? (one minimum aggr and one large one) Is it commonly done? Is it a good idea?
  • Can you point me to any "change notes" type doc that explains these new terms/objects to an otherwise experienced NetApp admin?
  • If the above is viable, what do I need to do to get there?

For what it's worth, I've been noodling a bit with the "system node migrate-root" command (this new AFF is not in production yet) and got a warning that my target disks don't have a spare root partition (I specified some of the disks on the "old" aggr). That warning says I can find available disks with the command "storage disk partition show -owner-node-name redacted-a -container-type spare -is-root true", but the CLI then complains that "partition" is not a command (I'm at "advanced" privilege level). Is the given command correct?
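One guess, offered as an assumption rather than a verified answer: in some releases the disk-partition subcommands only appear at diag privilege, so the quoted command might need:

```
::> set -privilege diag
::*> storage disk partition show -owner-node-name redacted-a -container-type spare -is-root true
```

If it still isn't recognized there, the warning text may simply not match the CLI shipped in this release.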

Hope to hear from you,

 

Randy in Seattle

_______________________________________________
Toasters mailing list
Toasters@teaparty.net
https://www.teaparty.net/mailman/listinfo/toasters