I'm really glad I asked, and that you answered!
This is indeed an AFF and matches what you describe below.
Sounds like FlexGroups are a perfect example of the kind of new architecture I needed to know about: they let us stick with the default disk setup and still have the volumes we need.
Note we're only using SnapMirror to populate the new AFF. If the old cluster were running a newer ONTAP version, I could set the targets up as caching volumes instead of DP volumes, but we're trying to retire the old array ASAP.
I'll re-create that tier and start reading up on FlexGroups.
Will holler here if I hit any more bumps.
Thanks!
If it is an all-SSD unit (affectionately known as an AFF, or All Flash FAS), then what you likely have is something called root-data-data partitioning.
There are 3 partitions on each disk:
P1 -> data for node 1
P2 -> data for node 2
P3 -> root
What happens (in most cases) is that half of the P3 partitions are given to each node. A minimal root aggregate is created on each node, leaving at least one, but in most cases two, spare root partitions. All the P1 partitions end up belonging to node 1 and all the P2 partitions belong to node 2. This balances performance and capacity.
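If you want to confirm how the partitions landed on your system, one way (assuming a recent ONTAP 9 release; verify against your version's command reference) is:

```
::> storage disk show -partition-ownership
```

This lists each disk along with the owner of its root and data partitions, so you can check the P1/P2/P3 split described above.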
Should you want to break that, you can use the "storage disk option modify" command to turn auto-assign off for both nodes. Destroy your two data aggregates, then remove ownership of all "data2" partitions:
set advanced
disk removeowner -data2 true -disk x.y.z
(repeat for all disks in the system)
Then change the ownership:
disk assign -data2 true -node node1 -disk x.y.z
(repeat for all disks in the system)
You could then try auto-provisioning:
aggregate auto-provision -node node1 -verbose true
(if you don't like the RAID layout, you can manually create your aggregate)
I suspect that by default, if the partition size is >6TB (or maybe 8TB), the system will automatically use RAID-TEC (triple-parity RAID). Otherwise, the system will use RAID-DP and limit the RAID group size.
With that said, why? With newer versions of ONTAP, you could take advantage of FlexGroups (a volume that spans one or more controllers and one or more aggregates).
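As a sketch (the vserver and volume names and the size are placeholders, and -auto-provision-as requires ONTAP 9.2 or later), creating a FlexGroup that spans the aggregates on both nodes can be as simple as:

```
::> volume create -vserver svm1 -volume fg_data -auto-provision-as flexgroup -size 100TB -junction-path /fg_data
```

ONTAP then lays out the member constituents across aggregates on both controllers, so both nodes actively serve the volume rather than one sitting idle.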
Are you able to update the source to ONTAP 9.3 (to take advantage of XDP SnapMirror), which might let you squeeze that data in?
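For reference, an XDP relationship to the new cluster would look something like this (the SVM and volume paths are placeholders, and the destination volume must already exist as type DP):

```
::> snapmirror create -source-path src_svm:big_vol -destination-path dst_svm:big_vol -type XDP -policy MirrorAllSnapshots
::> snapmirror initialize -destination-path dst_svm:big_vol
```

XDP transfers preserve storage efficiency, which is what might let an oversized source volume fit on the new tier.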
Personally, it seems like a FAS unit with SATA drives would be a better fit for a large SnapMirror destination.
Summary:
- Is what I've proposed above reasonable? (one minimum aggr and one large one) Is it commonly done? Is it a good idea?
- It can be done. I personally avoid it whenever possible; only implement it for edge cases
- Think of it this way: you have two controllers in active/active and you are forcing it to basically be active/passive
- You are unable to take full advantage of the second controller!
- Can you point me to any "change notes" type doc that explains these new terms/objects to an otherwise experienced NetApp admin?
- https://library.netapp.com/ecmdocs/ECMLP2492508/html/frameset.html (release notes for ONTAP, 9.0 through 9.8)
- If the above is viable, what do I need to do to get there?
- Yes (if you really want to). Clues above!
On Mon, Apr 26, 2021 at 10:02 AM Rue, Randy <randyrue@gmail.com> wrote:
Hello,
We're setting up a new and relatively small SSD NAS and it's arrived configured for two equally sized tiers with each node and its root aggr/vol in each tier. Each tier is about 15TB in size before compression.
We're hoping to avoid having to manage two equally sized data aggrs and move data volumes around to balance them. For one thing, our largest data volumes are larger than 15TB, and SnapMirror doesn't seem to want to let us set up a relationship to load the new volumes from our old cluster, even if the target has ample room after the guaranteed 3x compression.
We're willing to lose the wasted space involved in creating one tier/partition/aggr/root volume with the minimum number of disks for RAID-DP (3?) on one node, if that allows us to put the rest on the other set and have a single large container for our unstructured file volumes.
We tried moving all volumes to one tier and deleting the other. But one node is still sitting on those disks.
Our old cluster is at 9.1P6, and I understand that some basic concepts have changed with the introduction of partitions and whatnot, so bear with me if I'm asking n00b questions even after a few years running NetApp gear.
- Is what I've proposed above reasonable? (one minimum aggr and one large one) Is it commonly done? Is it a good idea?
- Can you point me to any "change notes" type doc that explains these new terms/objects to an otherwise experienced NetApp admin?
- If the above is viable, what do I need to do to get there?
For what it's worth, I've been noodling a bit with the "system node migrate-root" command (this new AFF is not in production yet) and got a warning that my target disks don't have a spare root partition (I specified some of the disks on the "old" aggr). That warning says I can find available disks with the command "storage disk partition show -owner-node-name redacted-a -container-type spare -is-root true" but the CLI then complains that partition is not a command (I'm at "advanced" privilege level). Is the given command correct?
Hope to hear from you,
Randy in Seattle
Toasters mailing list
Toasters@teaparty.net
https://www.teaparty.net/mailman/listinfo/toasters