> I believe that happens if you first bring the aggregates online in maintenance mode, rather than in normal mode. Maintenance mode will strip all the mappings, etc.
Is that by design or a bug? It seems a little lame if it's intentional behaviour.
Luckily, reconnecting the LUNs and re-establishing the initiator security groups only took an extra few hours.
Off to enjoy the rest of my Sunday :-)
Cheers, Raj.
On 11/8/09, Webster, Stetson <Stetson.Webster@netapp.com> wrote:
I believe that happens if you first bring the aggregates online in maintenance mode, rather than in normal mode. Maintenance mode will strip all the mappings, etc.
Stetson Webster
Professional Services Consultant
Virtualization and Consolidation
NCIE-SAN, NCIE-B&R, SCSN-E, VCP
NetApp
919.250.0052 Direct Phone
stetson@netapp.com
Learn more: netapp.com/and
-----Original Message-----
From: Raj Patel [mailto:phigmov@gmail.com]
Sent: Saturday, November 07, 2009 12:58 PM
To: Learmonth, Peter
Cc: toasters@mathworks.com
Subject: Re: Controller upgrade complete but . . .
Hi Peter,
That sounds fairly plausible - the upgrade itself was performed by a vendor and the upgrade steps we used in our change control look like they were generated by a NetApp wizard. They certainly didn't mention pulling disks out first (which I remember from your post last week concerning the signature issue).
We reset the IQN - that was fine.
Losing the igroup LUN security stuff was more of a hassle - I could understand the IQN resetting on new hardware, but losing that information completely seemed kind of weird (and annoying).
I'm still unsure how it relates back to SnapDrive for Windows - having reset the IQN and recreated the igroup mappings on the SAN, I would have thought SD would just reconnect to the appropriate LUNs on the correct drive mappings.
Oh well - it looks like the simplest option is just to reconnect the LUNs manually.
Cheers, Raj.
On 11/8/09, Learmonth, Peter <Peter.Learmonth@netapp.com> wrote:
It sounds to me like the root volume from the factory was used and the setup routine was run, rather than removing the new drives, connecting the old shelves and making the original root vol work on the new filer, then adding the new drives back in to use as spares or new aggr(s). Is that correct?
One method to make the original root vol work as the root vol on the new filer consists of:
- Disconnect or remove any new disks from the factory - at least any that contain a root aggr/vol.
- Connect the original disks with the original aggrs and vols.
- Boot into maintenance mode and reassign the old disks to the new sysid (rough sketch below).
- Reboot and verify the old disks are running as the root vol, and that hostname, IP, IQN, WWN, etc. are all as expected.
- Connect or insert the new disks.
- Clobber the factory root vol/aggr and use the disks for new aggrs or add to old aggrs as desired and supported.
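For the reassign step, from memory it looks something like this in maintenance mode (the sysids below are made up - take the real ones from the "disk show" output):

  *> disk show -v
     (note the sysid that owns the old disks and the new head's sysid)
  *> disk reassign -s 0101166298 -d 0101177645
     (moves ownership of everything owned by the old sysid to the new head)
  *> halt

Then boot normally and sanity-check:

  filer> aggr status
     (confirm the old root aggr is online)
  filer> vol status vol0
     (confirm the old vol0 is flagged as root)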
What happens when you connect all the disks (old and new) at the same time is that the disks from the factory are already owned by the new controller and have a valid root vol, so it ignores the original root vol, which it doesn't actually own. You power it all up, it runs setup as a new filer, and things like the IQN and FC WWN are generated fresh using the NVRAM serial number. New NVRAM card + running setup on a new filer = new IQN and WWN. Even though you gave it the same IP, hostname, etc., the IQN and WWN are not user-determined by default.
Once generated, these identities are kept on disk. So, if you later swap NVRAM or the whole controller but keep the existing root vol, the identity is kept.
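You can eyeball (and, if needed, put back) those identities from the CLI - these are standard 7G commands, though from memory you may have to stop the iscsi service before changing the nodename, and the IQN below is just an example:

  filer> iscsi nodename
     (shows the current target IQN)
  filer> iscsi nodename iqn.1992-08.com.netapp:sn.12345678
     (sets it back to the old value)
  filer> fcp nodename
     (shows the FC WWNN)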
Another option is to use "config dump" and "config restore".
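From memory the usage is just this (the filename is arbitrary; dumps land in /etc/configs on the filer):

  old-filer> config dump pre_upgrade.cfg
  new-filer> config restore pre_upgrade.cfg

i.e. dump on the old head before the swap, restore on the new head, then reboot for everything to take effect.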
I can't answer the SnapDrive questions as I haven't specialized in that for a few years.
Peter
-----Original Message-----
From: Raj Patel [mailto:phigmov@gmail.com]
Sent: Saturday, November 07, 2009 1:40 AM
To: toasters@mathworks.com
Subject: Controller upgrade complete but . . .
Hi all,
Hoping you can save me some legwork -
We've upgraded our 270c to a 2050HA - the data looks good. We did run into the LUN signaturing issue with VMware, but, forewarned, we were able to place the old signatures back onto the LUNs.
My biggest hassle is that the Windows boxes with iSCSI-attached LUNs (i.e. SQL & Exchange) appear to have lost their LUNs. The IQN string changed - we set it back to what it should be - but SnapDrive appears to have lost its drive mappings, and the SAN lost its iSCSI initiator groups (luckily we can put them back manually from previous autosupports).
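For anyone who hits the same thing: with the old autosupport in hand, putting an igroup and its mappings back is only a few commands on the filer (the names, IQNs and LUN path here are invented):

  toaster> igroup create -i -t windows sql01_ig iqn.1991-05.com.microsoft:sql01
  toaster> lun map /vol/sqlvol/sql01.lun sql01_ig 0
  toaster> iscsi security add -i iqn.1998-01.com.vmware:esx-dmz1 -s CHAP -p <password> -n <username>
     (only for the initiators that had CHAP, like our DMZ ESX boxes)
  toaster> lun show -m
     (compare against the mappings listed in the old autosupport)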
So I'm wondering a couple of things:
- What's the fastest way to get SnapDrive to remember its LUNs (even the MS iSCSI initiator seems to know what it thinks it should be seeing, but SD 6 has lost all its config)? I know I can manually reconnect them all (rough sdcli sketch below), but with DBAs and Exchange admins having weird and wonderful drive mappings, this is going to be time-consuming.
- Why did the SAN lose its initiator group config (across all LUNs, including CHAP security for our DMZ ESX servers), and why did SnapDrive lose its config (i.e. the SAN it connects to, the LUNs, igroups and drive mappings) as a result of a controller and ONTAP upgrade?
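(On the first one - the manual reconnects can at least be scripted with sdcli instead of clicked through one at a time. Something roughly like the following, though the exact flags vary by SnapDrive version, so check the sdcli help first; the filer name, LUN path and hostname are invented:

  C:\> sdcli disk list
  C:\> sdcli disk connect -p toaster:/vol/sqlvol/sql01.lun -d G -dtype dedicated -I sql01 iqn.1991-05.com.microsoft:sql01

That still means rebuilding the drive-letter list by hand, but it's faster than the GUI.)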
Next time I'll be sure to screenshot every LUN mapping in Windows, but surely I shouldn't need to do this?
Any post-upgrade tips to make my Sunday reconnecting systems a little more pleasant will be gratefully received.
Cheers, Raj.