You may want to check the loader shared-disks? variable on both nodes (see https://kb.netapp.com/support/s/article/ka31A00000013hAQAQ/how-to-setup-advanced-disk-partitioning?language=en_US) to see whether there are any differences.
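If you do compare the two nodes, the variable can be inspected (and, if needed, changed) from the boot loader prompt. A hedged sketch – the exact variable name depends on the ONTAP release, so treat `root-uses-shared-disks?` below as an assumption and verify it against the KB article above:

```
LOADER> printenv                              # list all environment variables; look for the shared-disks entry
LOADER> setenv root-uses-shared-disks? true   # variable name is an assumption; check your release
LOADER> saveenv                               # persist the change before booting
```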

 

As for disk partitioning of new disks – it should happen automatically when you use them (https://kb.netapp.com/support/s/article/ka21A0000000iUmQAI/why-is-a-newly-added-disk-not-auto-partitioned-on-hdd-and-aff-adp-systems?language=en_US).

 

---

With best regards

 

Andrei Borzenkov

Senior system engineer

FTS WEMEAI RUC RU SC TMS FOS


FUJITSU

Zemlyanoy Val Street, 9, 105 064 Moscow, Russian Federation

Tel.: +7 495 730 62 20 (reception)

Mob.: +7 916 678 7208

Fax: +7 495 730 62 14

E-mail: Andrei.Borzenkov@ts.fujitsu.com

Web: ru.fujitsu.com

Company details: ts.fujitsu.com/imprint


 

From: toasters-bounces@teaparty.net [mailto:toasters-bounces@teaparty.net] On Behalf Of Alexander Griesser
Sent: Wednesday, June 21, 2017 8:45 PM
To: Alexander Griesser; toasters@teaparty.net
Subject: AW: Migrate root volume to partitioned disks after wipeconfig

 

OK, I think I found it out myself after fiddling around with it for some time.

The misleading part was the syntax for specifying partitions directly – as you can see in the output below, partitions are referenced as 0b.00.1P2, for example, whereas "storage disk partition show" references them as 1.0.1.P2.

Using the syntax from "storage disk partition show", I was then able to create a new root aggregate on the root slices with this command:
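For completeness, the cluster-shell partition names can be listed up front. A sketch, shown at the advanced privilege level to match the prompt in the command output below (filtering parameters for this command vary by release, so the bare form is the safe one):

```
CLUSTER::*> set -privilege advanced
CLUSTER::*> storage disk partition show      # lists partitions as 1.0.1.P1, 1.0.1.P2, ... with their owners
```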

 

CLUSTER::*> storage aggregate create -aggregate newroot -node node02 -partitionlist 1.0.14.P2,1.0.15.P2,1.0.16.P2,1.0.17.P2,1.0.18.P2,1.0.19.P2,1.0.20.P2,1.0.21.P2 -raidtype raid_tec

 

Then I followed the manual steps in this KB:

https://kb.netapp.com/support/s/article/ka31A0000008gyeQAA/How-to-non-disruptively-create-a-new-root-aggregate-and-have-it-host-the-root-volume-in-clustered-Data-ONTAP-8-2-and-8-3-ONTAP-9-0?language=en_US

 

And I’m now up and running on my newly created root aggr on the small root slices.

The question remains why node02 didn't partition the disks automatically during the wipeconfig as node01 did, although I followed the same procedure on both – and I'm now trying to find out how to manually partition those three ex-root-aggr drives :)

 

Best,

 

Alexander Griesser

Head of Systems Operations

 

ANEXIA Internetdienstleistungs GmbH

 

E-Mail: AGriesser@anexia-it.com

Web: http://www.anexia-it.com

 

Address (headquarters Klagenfurt): Feldkirchnerstraße 140, 9020 Klagenfurt

Managing Director: Alexander Windbichler

Company register: FN 289918a | Place of jurisdiction: Klagenfurt | VAT number: ATU63216601

 

From: toasters-bounces@teaparty.net [mailto:toasters-bounces@teaparty.net] On Behalf Of Alexander Griesser
Sent: Wednesday, June 21, 2017 7:20 PM
To: toasters@teaparty.net
Subject: Migrate root volume to partitioned disks after wipeconfig

 

Hey there,

 

I’ve got a FAS2554 here which I’ve pushed through wipeconfig recently and interestingly, both controllers show different layouts after this process.

Node 1 shows the following root aggregate:

 

     RAID Disk Device          HA  SHELF BAY CHAN Pool Type  RPM  Used (MB/blks)    Phys (MB/blks)

      --------- ------          ------------- ---- ---- ---- ----- --------------    --------------

      tparity   0b.00.1P2       0b    0   1   SA:B   0  FSAS  7200 55176/113000448   55184/113016832

      dparity   0b.00.3P2       0b    0   3   SA:B   0  FSAS  7200 55176/113000448   55184/113016832

      parity    0b.00.5P2       0b    0   5   SA:B   0  FSAS  7200 55176/113000448   55184/113016832

      data      0a.00.6P2       0a    0   6   SA:A   0  FSAS  7200 55176/113000448   55184/113016832

      data      0b.00.7P2       0b    0   7   SA:B   0  FSAS  7200 55176/113000448   55184/113016832

      data      0a.00.8P2       0a    0   8   SA:A   0  FSAS  7200 55176/113000448   55184/113016832

      data      0b.00.9P2       0b    0   9   SA:B   0  FSAS  7200 55176/113000448   55184/113016832

      data      0a.00.10P2      0a    0   10  SA:A   0  FSAS  7200 55176/113000448   55184/113016832

      data      0b.00.11P2      0b    0   11  SA:B   0  FSAS  7200 55176/113000448   55184/113016832

      data      0a.00.12P2      0a    0   12  SA:A   0  FSAS  7200 55176/113000448   55184/113016832

      data      0b.00.13P2      0b    0   13  SA:B   0  FSAS  7200 55176/113000448   55184/113016832

 

Node 2 does not have partitioned disks for root:

 

      RAID Disk Device          HA  SHELF BAY CHAN Pool Type  RPM  Used (MB/blks)    Phys (MB/blks)

      --------- ------          ------------- ---- ---- ---- ----- --------------    --------------

      dparity   0a.00.0         0a    0   0   SA:B   0  FSAS  7200 5614621/11498743808 5625872/11521787400

      parity    0a.00.2         0a    0   2   SA:B   0  FSAS  7200 5614621/11498743808 5625872/11521787400

      data      0a.00.4         0a    0   4   SA:B   0  FSAS  7200 5614621/11498743808 5625872/11521787400

 

Also, Node1 has raid_tec on the root aggregate, but that’s OK I guess.

So what I'd like to do now is, of course, to migrate the root aggregate on node2 to partitioned disks as well, but I cannot find any information on how to do that.

I've tried to manually create an aggregate on node 2 with the root partitions assigned to it manually, but it either tells me that the disks are not owned by node 2, or the new migrate-root command simply does not seem to support partitioned disks.
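One thing worth trying for the "not owned by node 2" error is assigning the root partitions to node 2 explicitly before creating the aggregate. A sketch, assuming an ONTAP 8.3+ "storage disk assign" with partition-level parameters – the -root true parameter and the disk name used here are assumptions, so verify them against your release's man page:

```
CLUSTER::*> set -privilege advanced
CLUSTER::*> storage disk assign -disk 1.0.14 -owner node02 -root true   # -root true is an assumption; check your release
CLUSTER::*> storage disk partition show                                 # verify the P2 slices now list node02 as owner
```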

 

Trying to avoid another wipeconfig here, since that will take ages again for zeroing all the disks…

There's no data on the filer as of yet, so we're good to try whatever someone can come up with :)

 

Thanks,

 

Alexander Griesser
