Hello,

We're setting up a new, relatively small all-SSD NAS, and it arrived configured with two equally sized tiers, one per node, each holding that node's root aggr/vol. Each tier is about 15TB before compression.

We're hoping to avoid managing two equally sized data aggrs and moving data volumes around to keep them balanced. For one thing, our largest data volumes are larger than 15TB, and SnapMirror doesn't seem to want to let us set up a relationship to load the new volumes from our old cluster, even though the target would have ample room after the guaranteed 3X compression.
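In case the details matter, here's roughly what we tried; the SVM and volume names below are placeholders, not our real ones:

    volume create -vserver new_svm -volume big_vol -aggregate aggr1_a -size 20TB -type DP
    snapmirror create -source-path old_svm:big_vol -destination-path new_svm:big_vol -type XDP

It balks at creating a destination volume sized to match the source when the aggregate is only ~15TB.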

We're willing to lose the wasted space involved in giving one node a tier/partition/aggr/root volume with the minimum number of disks RAID-DP allows (3?), if that will let us put the rest on the other set and have a single large container for our unstructured file volumes.
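Concretely, what I have in mind is something like this, with the node name following my redaction below and the disk count purely for illustration:

    storage aggregate create -aggregate aggr_data -node redacted-b -diskcount 21 -raidtype raid_dp

That is, one node keeps only its minimal root, and everything else feeds a single big data aggregate on the other node.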

We tried moving all data volumes to one tier and deleting the other, but one node's root aggregate is still sitting on those disks.
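The moves themselves were unremarkable, along these lines (names again placeholders):

    volume move start -vserver svm1 -volume vol_data1 -destination-aggregate aggr1_a
    storage aggregate delete -aggregate aggr1_b

The data aggregate delete went through fine; it's the node's root that remains behind.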

Our old cluster is at 9.1P6, and I realize some basic concepts have changed with the introduction of partitions and whatnot, so bear with me if I'm asking n00b questions even after a few years of running NetApp gear.

For what it's worth, I've been noodling a bit with the "system node migrate-root" command (this new AFF is not in production yet) and got a warning that my target disks don't have a spare root partition (I had specified some of the disks from the "old" aggr). The warning says I can find available disks with the command "storage disk partition show -owner-node-name redacted-a -container-type spare -is-root true", but the CLI then complains that "partition" is not a command (I'm at "advanced" privilege level). Is the given command correct?
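For reference, the sequence was roughly this, with the disk list below made up for illustration (I pointed it at some of the disks freed from the "old" aggr):

    set -privilege advanced
    system node migrate-root -node redacted-a -disklist 1.0.12,1.0.13,1.0.14 -raid-type raid_dp

That's what produced the spare-root-partition warning pointing me at the "storage disk partition show" command the CLI then refuses to recognize.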

Hope to hear from you,


Randy in Seattle