OK, I think I figured it out myself after fiddling around with it for some time.

The misleading part was the syntax for specifying partitions directly: in the output below they are referenced as 0b.00.1P2, for example, whereas "storage disk partition show" references the same partition as 1.0.1.P2.
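
Those 1.0.x.P2 style names show up when you list the partitions in advanced privilege mode, roughly like this:

CLUSTER::> set -privilege advanced
CLUSTER::*> storage disk partition show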

Using the syntax from "storage disk partition show", I was then able to create a new root aggregate on the root slices with this command:

 

CLUSTER::*> storage aggregate create -aggregate newroot -node node02 -partitionlist 1.0.14.P2,1.0.15.P2,1.0.16.P2,1.0.17.P2,1.0.18.P2,1.0.19.P2,1.0.20.P2,1.0.21.P2 -raidtype raid_tec
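
A quick way to double-check the new aggregate before moving the root volume onto it is a plain show, e.g.:

CLUSTER::*> storage aggregate show -aggregate newroot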

 

Then I followed the manual steps in this KB:

https://kb.netapp.com/support/s/article/ka31A0000008gyeQAA/How-to-non-disruptively-create-a-new-root-aggregate-and-have-it-host-the-root-volume-in-clustered-Data-ONTAP-8-2-and-8-3-ONTAP-9-0?language=en_US

 

And I’m now up and running on my newly created root aggr on the small root slices.

The question remains why node02 didn't partition the disks automatically during the wipeconfig as node01 did, even though I followed the same procedure on both nodes. I'm now trying to find out how to manually partition those three ex-root-aggr drives :)
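
In case someone wants to take a look, this is roughly how I'm inspecting those three drives at the moment (I'm assuming their cluster-shell names are 1.0.0, 1.0.2 and 1.0.4, i.e. the 0a.00.0/0a.00.2/0a.00.4 disks from the node 2 output below):

CLUSTER::*> storage disk show -disk 1.0.0,1.0.2,1.0.4 -fields owner,container-type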

 

Best,

 

Alexander Griesser

Head of Systems Operations

 

ANEXIA Internetdienstleistungs GmbH

 

E-Mail: AGriesser@anexia-it.com

Web: http://www.anexia-it.com

 

Address (headquarters Klagenfurt): Feldkirchnerstraße 140, 9020 Klagenfurt

Managing Director: Alexander Windbichler

Commercial register: FN 289918a | Place of jurisdiction: Klagenfurt | VAT ID: AT U63216601

 

From: toasters-bounces@teaparty.net [mailto:toasters-bounces@teaparty.net] On behalf of Alexander Griesser
Sent: Wednesday, June 21, 2017 7:20 PM
To: toasters@teaparty.net
Subject: Migrate root volume to partitioned disks after wipeconfig

 

Hey there,

 

I've got a FAS2554 here that I recently pushed through wipeconfig, and interestingly, the two controllers show different layouts after this process.
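
For reference, the layouts below should be reproducible from the nodeshell with something like the following (quoting the command from memory, so the exact invocation may differ):

CLUSTER::*> system node run -node node01 -command "aggr status -r"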

Node 1 shows the following root aggregate:

 

     RAID Disk Device          HA  SHELF BAY CHAN Pool Type  RPM  Used (MB/blks)    Phys (MB/blks)

      --------- ------          ------------- ---- ---- ---- ----- --------------    --------------

      tparity   0b.00.1P2       0b    0   1   SA:B   0  FSAS  7200 55176/113000448   55184/113016832

      dparity   0b.00.3P2       0b    0   3   SA:B   0  FSAS  7200 55176/113000448   55184/113016832

      parity    0b.00.5P2       0b    0   5   SA:B   0  FSAS  7200 55176/113000448   55184/113016832

      data      0a.00.6P2       0a    0   6   SA:A   0  FSAS  7200 55176/113000448   55184/113016832

      data      0b.00.7P2       0b    0   7   SA:B   0  FSAS  7200 55176/113000448   55184/113016832

      data      0a.00.8P2       0a    0   8   SA:A   0  FSAS  7200 55176/113000448   55184/113016832

      data      0b.00.9P2       0b    0   9   SA:B   0  FSAS  7200 55176/113000448   55184/113016832

      data      0a.00.10P2      0a    0   10  SA:A   0  FSAS  7200 55176/113000448   55184/113016832

      data      0b.00.11P2      0b    0   11  SA:B   0  FSAS  7200 55176/113000448   55184/113016832

      data      0a.00.12P2      0a    0   12  SA:A   0  FSAS  7200 55176/113000448   55184/113016832

      data      0b.00.13P2      0b    0   13  SA:B   0  FSAS  7200 55176/113000448   55184/113016832

 

Node 2 does not have partitioned disks for root:

 

      RAID Disk Device          HA  SHELF BAY CHAN Pool Type  RPM  Used (MB/blks)    Phys (MB/blks)

      --------- ------          ------------- ---- ---- ---- ----- --------------    --------------

      dparity   0a.00.0         0a    0   0   SA:B   0  FSAS  7200 5614621/11498743808 5625872/11521787400

      parity    0a.00.2         0a    0   2   SA:B   0  FSAS  7200 5614621/11498743808 5625872/11521787400

      data      0a.00.4         0a    0   4   SA:B   0  FSAS  7200 5614621/11498743808 5625872/11521787400

 

Also, Node 1 has raid_tec on the root aggregate, but I guess that's OK.

What I'd like to do now, of course, is migrate the root aggregate on node 2 to an aggregate built on partitioned disks as well, but I cannot find any information on how to do that.

I've tried to manually create an aggregate on node 2 and assign the root partitions to it, but it keeps telling me that the disks are not owned by node 2; and the new migrate-root command just does not seem to support partitioned disks.
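
To make those two attempts concrete, they looked roughly like this (partition and disk names are only examples picked from node 1's layout, not necessarily the exact ones I used, and I'm quoting the commands from memory):

CLUSTER::*> storage aggregate create -aggregate newroot -node node02 -partitionlist 1.0.6.P2,1.0.8.P2,1.0.10.P2,1.0.12.P2,1.0.13.P2 -raidtype raid_dp
(fails because those partitions sit on disks that node 2 does not own)

CLUSTER::*> system node migrate-root -node node02 -disklist 1.0.14,1.0.15,1.0.16,1.0.17,1.0.18 -raid-type raid_dp
(does not seem to handle partitioned disks)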

 

I'm trying to avoid another wipeconfig here, since zeroing all the disks will again take ages…

There's no data on the filer yet, so we're free to try whatever someone can come up with :)

 

Thanks,

 

Alexander Griesser
