Hey there,
I've got a FAS2554 here which I recently pushed through wipeconfig, and interestingly, the two controllers show different layouts after the process. Node 1 shows the following root aggregate:
RAID Disk  Device      HA  SHELF  BAY  CHAN  Pool  Type  RPM   Used (MB/blks)    Phys (MB/blks)
---------  ----------  --  -----  ---  ----  ----  ----  ----  ----------------  ----------------
tparity    0b.00.1P2   0b  0      1    SA:B  0     FSAS  7200  55176/113000448   55184/113016832
dparity    0b.00.3P2   0b  0      3    SA:B  0     FSAS  7200  55176/113000448   55184/113016832
parity     0b.00.5P2   0b  0      5    SA:B  0     FSAS  7200  55176/113000448   55184/113016832
data       0a.00.6P2   0a  0      6    SA:A  0     FSAS  7200  55176/113000448   55184/113016832
data       0b.00.7P2   0b  0      7    SA:B  0     FSAS  7200  55176/113000448   55184/113016832
data       0a.00.8P2   0a  0      8    SA:A  0     FSAS  7200  55176/113000448   55184/113016832
data       0b.00.9P2   0b  0      9    SA:B  0     FSAS  7200  55176/113000448   55184/113016832
data       0a.00.10P2  0a  0      10   SA:A  0     FSAS  7200  55176/113000448   55184/113016832
data       0b.00.11P2  0b  0      11   SA:B  0     FSAS  7200  55176/113000448   55184/113016832
data       0a.00.12P2  0a  0      12   SA:A  0     FSAS  7200  55176/113000448   55184/113016832
data       0b.00.13P2  0b  0      13   SA:B  0     FSAS  7200  55176/113000448   55184/113016832
Node 2 does not have partitioned disks for root:
RAID Disk  Device   HA  SHELF  BAY  CHAN  Pool  Type  RPM   Used (MB/blks)       Phys (MB/blks)
---------  -------  --  -----  ---  ----  ----  ----  ----  -------------------  -------------------
dparity    0a.00.0  0a  0      0    SA:B  0     FSAS  7200  5614621/11498743808  5625872/11521787400
parity     0a.00.2  0a  0      2    SA:B  0     FSAS  7200  5614621/11498743808  5625872/11521787400
data       0a.00.4  0a  0      4    SA:B  0     FSAS  7200  5614621/11498743808  5625872/11521787400
Also, Node 1 has raid_tec on the root aggregate, but I guess that's OK. What I'd like to do now is migrate the root aggregate on node 2 to an aggregate on partitioned disks as well, but I cannot find any information on how to do that. I've tried to manually create an aggregate on node 2 after manually assigning the root partitions to it, but it either tells me the disks are not owned by node 2, or the new migrate-root command simply does not seem to support partitioned disks.
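For reference, this is roughly what I tried (a sketch from memory; whether "storage disk assign" accepts a partition name like 1.0.14.P2 directly, and at which privilege level, is an assumption on my part):

CLUSTER::*> set -privilege advanced
CLUSTER::*> storage disk partition show
(lists the partitions and their current owner nodes)
CLUSTER::*> storage disk assign -disk 1.0.14.P2 -owner node02
(the subsequent aggregate create still complained about ownership)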
I'm trying to avoid another wipeconfig here, since zeroing all the disks will take ages again... There's no data on the filer yet, so we're free to try whatever someone can come up with :)
Thanks,
Alexander Griesser
Head of Systems Operations
ANEXIA Internetdienstleistungs GmbH
E-Mail: AGriesser@anexia-it.com | Web: http://www.anexia-it.com/
OK, I think I've figured it out myself after fiddling with it for some time. The misleading part was the syntax for specifying partitions directly: as you can see in the output above, partitions are referenced as 0b.00.1P2, for example, but "storage disk partition show" references them as 1.0.1.P2. Using the syntax from "storage disk partition show", I was able to create a new root aggregate on the root slices with this command:
CLUSTER::*> storage aggregate create -aggregate newroot -node node02 -partitionlist 1.0.14.P2,1.0.15.P2,1.0.16.P2,1.0.17.P2,1.0.18.P2,1.0.19.P2,1.0.20.P2,1.0.21.P2 -raidtype raid_tec
Then I followed the manual steps in this KB: https://kb.netapp.com/support/s/article/ka31A0000008gyeQAA/How-to-non-disrup...
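In case that link rots, the gist of the manual steps (paraphrased from memory, so verify against the KB; the nodeshell "aggr options ... root" mechanism is my assumption of what the article uses):

CLUSTER::*> system node run -node node02 -command "aggr options newroot root"
CLUSTER::*> system node reboot -node node02

After the reboot the node should create a fresh root volume on the new aggregate, and the old root aggregate can then be taken apart.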
And I'm now up and running with my newly created root aggr on the small root slices. The question remains why node02 didn't partition the disks automatically during wipeconfig as node01 did, although I followed the same procedure on both - and I'm now trying to find out how to manually partition those three ex-root-aggr drives :)
Best,
Alexander Griesser
Hi Alexander,
you could check on node02 whether any of the following environment variables are set to "false" in the loader (yes, with the question mark at the end):
- Root-data partitioning for HDDs: root-uses-shared-disks?
- Root-data partitioning for SSDs (also for AFF): root-uses-shared-ssds?
- Root-data-data partitioning with ONTAP 9: allow-root-data1-data2-partitions?
- Partitioning for storage pools: allow-ssd-partitions?

If you set any of them to true with "setenv", don't forget the "saveenv" afterwards; see the example below.
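A minimal loader session showing the check-and-set sequence (variable name taken from the first bullet; the others work the same way):

LOADER> printenv root-uses-shared-disks?
LOADER> setenv root-uses-shared-disks? true
LOADER> saveenv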
Which version of ONTAP are you on, 9.0 or 9.1? There was a bug fix in 9.1P2 concerning unpartitioning disks: http://mysupport.netapp.com/NOW/cgi-bin/bol?Type=Detail&Display=1057401 (9.1P5 is out, by the way).
Unpartitioning partitioned disks is certainly possible after the initial system initialization (you could actually just wipe the disk labels), but the other way around, I'm not sure... The easiest solution would be to reinitialize the system with the famous "4a" (boot menu option 4). If the above env vars are set to true, the system should get initialized "correctly": usually partition 2 of disks 0,2,4,6,... for the root aggr of node1 and partition 2 of disks 1,3,5,7,... for the root aggr of node2.
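Roughly, from the loader (menu wording and the selection prompt may differ slightly between releases, so treat this as a sketch):

LOADER> boot_ontap menu
...
(4) Clean configuration and initialize all disks.
...
Selection? 4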
Best regards
Oliver
Oliver Gill
Junior System Engineer
oliver.gill@au.de
Hey,
negative, they're all undefined:
LOADER-B> printenv root-uses-shared-disks?
Variable Name                        Value
--------------------                 --------------------------------------------------
root-uses-shared-disks?              *** Undefined ***

LOADER-B> printenv root-uses-shared-ssds?
Variable Name                        Value
--------------------                 --------------------------------------------------
root-uses-shared-ssds?               *** Undefined ***

LOADER-B> printenv allow-root-data1-data2-partitions?
Variable Name                        Value
--------------------                 --------------------------------------------------
allow-root-data1-data2-partitions?   *** Undefined ***

LOADER-B> printenv allow-ssd-partitions?
Variable Name                        Value
--------------------                 --------------------------------------------------
allow-ssd-partitions?                *** Undefined ***
I set them up with 9.1P5. The workflow I followed was: halt the 7-Mode installation, set the environment variable to boot clustered, boot into the boot menu, install the new software first (9.1P5 in this case), reboot into the boot menu again, and then run wipeconfig. Loader A and Loader B were showing the same environment variables (I also ran set-defaults first), which is why I'm confused that A did it right and B failed miserably. Anyway, as someone else already suggested, once you zero the now-free unpartitioned disks (after the root volume move), they automatically get partitioned, so I'm now up and running with the desired target configuration.
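For the archives, the sequence that did the trick on the three ex-root drives, once the root volume had been moved off them (the commands are standard; the automatic partitioning on zeroing is the behavior described above):

CLUSTER::*> storage disk zerospares
CLUSTER::*> storage disk partition show
(the former whole-disk spares now show up with P1/P2 slices)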
Thanks for listening :)
Alexander Griesser
You may want to check the loader shared-disks? variable on both nodes (see https://kb.netapp.com/support/s/article/ka31A00000013hAQAQ/how-to-setup-adva...) for any differences.
As for disk partitioning of new disks: it should happen automatically when you use them (https://kb.netapp.com/support/s/article/ka21A0000000iUmQAI/why-is-a-newly-ad...).
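For example, simply consuming the spares should trigger it (a sketch; the aggregate name is illustrative):

CLUSTER::*> storage aggregate add-disks -aggregate aggr1_node02 -diskcount 3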
---
With best regards
Andrei Borzenkov
Senior System Engineer
FUJITSU
E-mail: Andrei.Borzenkov@ts.fujitsu.com | Web: http://ts.fujitsu.com/