Hi Alexander,
you could check on node02 whether any of the following environment variables are set to "false" at the loader prompt (yes, with the question mark at the end):
- Root-data partitioning for HDDs: root-uses-shared-disks?
- Root-data partitioning for SSDs (also for AFF): root-uses-shared-ssds?
- Root-data-data partitioning with ONTAP 9: allow-root-data1-data2-partitions?
- Partitioning for storage pools: allow-ssd-partitions?
If you set them to true with "setenv", don't forget to run "saveenv" afterwards.
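For example, at the LOADER prompt it would look something like this (variable names as listed above; set only the ones relevant to your disk type, and note the prompt label can differ by platform):

```
LOADER> printenv root-uses-shared-disks?
LOADER> setenv root-uses-shared-disks? true
LOADER> setenv allow-root-data1-data2-partitions? true
LOADER> saveenv
```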
Which version of ONTAP are you on, 9.0 or 9.1? There was a bug fix in 9.1P2 concerning unpartitioning disks:
http://mysupport.netapp.com/NOW/cgi-bin/bol?Type=Detail&Display=1057401
(9.1P5 is out btw.)
Unpartitioning partitioned disks is certainly possible after the initial system initialization (you could actually just wipe the disk labels), but the other way around, I'm not sure…
The easiest solution would actually be to reinitialize the system with the famous "4a" (boot menu → option 4).
If the above env-vars are set to true, the system should get initialized "correctly": usually with partition 2 of disks 0,2,4,6,… for the root aggregate of node1 and partition 2 of disks 1,3,5,7,… for the root aggregate of node2.
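That even/odd split can be sketched as follows (illustrative only; the bay numbering and the "shelf 0" prefix are assumed from the layout described above):

```python
def adp_root_layout(num_bays: int) -> dict:
    """Illustrative sketch of the typical root-data (ADP) layout described
    above: the small root partition (P2) on even-numbered bays backs node1's
    root aggregate, and the one on odd-numbered bays backs node2's."""
    return {
        "node1_root": [f"0.{bay}.P2" for bay in range(0, num_bays, 2)],
        "node2_root": [f"0.{bay}.P2" for bay in range(1, num_bays, 2)],
    }
```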
Best regards
Oliver
Oliver Gill
From: toasters-bounces@teaparty.net [mailto:toasters-bounces@teaparty.net] On behalf of Alexander Griesser
Sent: Wednesday, 21 June 2017 19:45
To: Alexander Griesser; toasters@teaparty.net
Subject: RE: Migrate root volume to partitioned disks after wipeconfig
OK, I think I found it out myself now after fiddling around with that for some time.
The misleading part was the syntax for specifying partitions directly. As you can see in the output below, partitions are referenced as 0b.00.1P2, for example, while "storage disk partition show" references them as 1.0.1.P2.
Using the syntax from "storage disk partition show", I was then able to create a new root aggregate on the root slices with this command:
CLUSTER::*> storage aggregate create -aggregate newroot -node node02 -partitionlist 1.0.14.P2,1.0.15.P2,1.0.16.P2,1.0.17.P2,1.0.18.P2,1.0.19.P2,1.0.20.P2,1.0.21.P2 -raidtype raid_tec
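The naming difference can be sketched as a small helper (hypothetical; the default stack id of 1 and the field layout are assumed purely from the two example names in this thread):

```python
import re

def nodeshell_to_clustershell(dev: str, stack: int = 1) -> str:
    """Convert a node-shell partition device name such as '0b.00.1P2'
    (port.shelf.bayPpartition) into the cluster-shell form such as
    '1.0.1.P2' (stack.shelf.bay.Ppartition). The stack id cannot be
    derived from the node-shell name, so it is passed in (default 1)."""
    m = re.fullmatch(r"\d+[a-d]\.(\d+)\.(\d+)P(\d+)", dev)
    if m is None:
        raise ValueError(f"unrecognized device name: {dev}")
    shelf, bay, part = (int(g) for g in m.groups())
    return f"{stack}.{shelf}.{bay}.P{part}"
```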
Then I followed the manual steps in this KB:
And I’m now up and running on my newly created root aggr on the small root slices.
The question remains why node02 didn't partition the disks automatically during the wipeconfig as node01 did, although I followed the same procedure on both nodes. I'm now trying to find out how to manually partition those three ex-root-aggregate drives :)
Best,
Alexander Griesser
Head of Systems Operations
ANEXIA Internetdienstleistungs GmbH
E-Mail: AGriesser@anexia-it.com
Address (headquarters Klagenfurt): Feldkirchnerstraße 140, 9020 Klagenfurt
Managing Director: Alexander Windbichler
Commercial register: FN 289918a | Place of jurisdiction: Klagenfurt | VAT ID: ATU63216601
From: toasters-bounces@teaparty.net [mailto:toasters-bounces@teaparty.net] On behalf of Alexander Griesser
Sent: Wednesday, 21 June 2017 19:20
To: toasters@teaparty.net
Subject: Migrate root volume to partitioned disks after wipeconfig
Hey there,
I've got a FAS2554 here which I've recently pushed through wipeconfig, and interestingly, both controllers show different layouts after this process.
Node 1 shows the following root aggregate:
RAID Disk Device HA SHELF BAY CHAN Pool Type RPM Used (MB/blks) Phys (MB/blks)
--------- ------ ------------- ---- ---- ---- ----- -------------- --------------
tparity 0b.00.1P2 0b 0 1 SA:B 0 FSAS 7200 55176/113000448 55184/113016832
dparity 0b.00.3P2 0b 0 3 SA:B 0 FSAS 7200 55176/113000448 55184/113016832
parity 0b.00.5P2 0b 0 5 SA:B 0 FSAS 7200 55176/113000448 55184/113016832
data 0a.00.6P2 0a 0 6 SA:A 0 FSAS 7200 55176/113000448 55184/113016832
data 0b.00.7P2 0b 0 7 SA:B 0 FSAS 7200 55176/113000448 55184/113016832
data 0a.00.8P2 0a 0 8 SA:A 0 FSAS 7200 55176/113000448 55184/113016832
data 0b.00.9P2 0b 0 9 SA:B 0 FSAS 7200 55176/113000448 55184/113016832
data 0a.00.10P2 0a 0 10 SA:A 0 FSAS 7200 55176/113000448 55184/113016832
data 0b.00.11P2 0b 0 11 SA:B 0 FSAS 7200 55176/113000448 55184/113016832
data 0a.00.12P2 0a 0 12 SA:A 0 FSAS 7200 55176/113000448 55184/113016832
data 0b.00.13P2 0b 0 13 SA:B 0 FSAS 7200 55176/113000448 55184/113016832
Node 2 does not have partitioned disks for root:
RAID Disk Device HA SHELF BAY CHAN Pool Type RPM Used (MB/blks) Phys (MB/blks)
--------- ------ ------------- ---- ---- ---- ----- -------------- --------------
dparity 0a.00.0 0a 0 0 SA:B 0 FSAS 7200 5614621/11498743808 5625872/11521787400
parity 0a.00.2 0a 0 2 SA:B 0 FSAS 7200 5614621/11498743808 5625872/11521787400
data 0a.00.4 0a 0 4 SA:B 0 FSAS 7200 5614621/11498743808 5625872/11521787400
Also, node1 has raid_tec on the root aggregate, but I guess that's OK.
So what I'd like to do now is migrate the root aggregate on node2 to an aggregate with partitioned disks as well, of course, but I cannot find any information on how to do that.
I've tried to manually create an aggregate on node 2 and to assign the root partitions to it by hand, but it keeps telling me that the disks are not owned by node 2, or the new migrate-root command just does not seem to support partitioned disks.
Trying to avoid another wipeconfig here, since zeroing all the disks will take ages again…
There's no data on the filer as of yet, so we're free to try whatever someone can come up with :)
Thanks,
Alexander Griesser
Advanced UniByte GmbH - Paul-Lechler-Straße 8 - 72555 Metzingen - Tel: 07123/9542-0 - Fax: 07123/9542-3-100 - info@au.de
HRB 352782, Amtsgericht Stuttgart - Managing Director: Sandro Walker - Registered office: Metzingen - www.au.de