In addition to all the other things mentioned:

You DO NOT (and should not) go into MAINTENANCE MODE while a takeover is in effect!
Really bad idea! (It unlocks the mailbox disks, and all that.)

INSTEAD: While in Takeover Mode, on the node that has taken over, do a disk _re_assign.
This reassigns *all* disks belonging _to the partner_ to the new sysid in one go.

This should work non-disruptively.
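Roughly like this (a sketch from memory, with made-up sysids; double-check
the exact syntax against the disk reassign man page for your ONTAP release):

   node2(takeover)> sysconfig
        (note the System ID of each head)
   node2(takeover)> disk show -v
        (lists every disk with its current owner sysid)
   node2(takeover)> disk reassign -s 0101166033 -d 0101198765
        (moves every disk owned by the old sysid to the new one in one shot)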



As for your previous aggregate reassign: How did you do it?
I sure hope you took the aggregates offline before reassigning their disks.

Anything else and I'm not surprised it panicked...
Reassigning disks one by one out of an online aggregate breaks its RAID
groups, and that is exactly what causes those panics.
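If you do need to move individual aggregates between heads again, the safe
order is: take the aggregate offline first, reassign *all* of its disks,
then online it on the other head. Something like this (aggregate name, disk
names, and sysid are made up; verify the syntax on your release):

   node1> aggr offline aggr_proj
   node1> disk assign 0a.16 0a.17 0a.18 -s 0101198765 -f
        (list every disk in the aggregate; -f forces the ownership change)
   node2> aggr online aggr_proj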


Hope that helped


Sebastian

On 31.01.2013 02:58, Jeff Cleverley wrote:
Greetings,

I'm thinking about doing something that is not supported and was
wondering if anyone had done the same or has more detailed insight.

We have a very busy cluster (6040s running 7.3.5.1P4).  It looks like we
are largely maxing out the heads' CPU.  We are getting a pair of 6080s and
really need to do the head swap live (takeover / giveback) if at all
possible.  The unsupported part is that I want to keep the 6040 NVRAM cards
and put them in the 6080s as I swap them.  The system ID comes from the
NVRAM card, so keeping the cards means I would not have to change the
system ID ownership on all the drives.

I know changing the system ID is generally not a big deal: boot each head
to Maintenance Mode and reassign the disks from the old SID to the new SID.
In our case it worries me.  Last week we were going to move a project to
the other head by reassigning the appropriate drives for a couple of
aggregates.  While we were reassigning them, the SAS buses started
panicking and crashed the controlling filer.  The entire cluster was down,
and the ensuing mess took several hours to clean up.

If it crashed while changing ownership of just a few drives, I'm afraid of
what will happen when every old-SID drive has to be reassigned for a new
NVRAM card.  I was hoping that if we kept the cards, we could swap heads,
leave the SIDs alone, and minimize our chance of repeating the crash.  I
could do the disks one at a time, but I have 796 drives on this cluster
and would rather not.

Is there a requirement for the 6080 hardware to have its own, larger NVRAM
cards?  Since there are more CPUs, I can see where something might need the
extra capacity; I just don't know what.  We will probably have a downtime
in a couple of months where I can put the correct cards back in.

Thanks,

Jeff