It sounds to me like the factory root volume was used and the setup routine was run, rather than removing the new drives, connecting the old shelves, and making the original root vol work on the new filer before adding the new drives back in as spares or new aggr(s). Is that correct?
One method to make the original root vol work as the root vol on the new filer:
- Disconnect or remove any new disks from the factory, at least any that contain a root aggr/vol.
- Connect the original disks with the original aggrs and vols.
- Boot into maintenance mode and reassign the old disks to the new sysid (rough command sketch below).
- Reboot and verify the old disks are running as the root vol, and that hostname, IP, IQN, WWN, etc. are all as expected.
- Connect or insert the new disks.
- Clobber the factory root vol/aggr and use those disks for new aggrs, or add them to old aggrs as desired and supported.
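For the reassign step, from the maintenance-mode prompt it's roughly the following (the sysids are placeholders here; read the real old and new system IDs off the "disk show -v" output):

    *> disk show -v
    *> disk reassign -s <old_sysid> -d <new_sysid>
    *> halt

Then boot normally and the filer should come up on the original root vol.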
What happens when you connect all the disks (old and new) at the same time is that the factory disks are already owned by the new controller and contain a valid root vol, so the filer ignores the original root vol, which it doesn't actually own. You power it all up, it runs setup as a new filer, and things like the IQN and FC WWN are generated fresh from the NVRAM serial number. New NVRAM card + running setup on a new filer = new IQN and WWN. Even though you gave it the same IP, hostname, etc., the IQN and WWN are not user-set by default.
Once generated, these identities are stored on disk, so if you later swap the NVRAM card or the whole controller but keep the existing root vol, the identities carry over.
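If you're stuck with freshly generated identities, they can usually be set back by hand. From memory (7-mode; the service has to be stopped first, and the serial below is a placeholder - verify against the man pages before trusting my syntax):

    filer> iscsi stop
    filer> iscsi nodename iqn.1992-08.com.netapp:sn.<old_serial>
    filer> iscsi start

There's an equivalent "fcp nodename" for viewing/setting the FC WWN, though I'd double-check the exact procedure on that side.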
Another option is to use "config dump" and "config restore".
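Roughly like this (the filename is just an example; dumps land in /etc/configs on the filer):

    old-filer> config dump pre_upgrade
    (copy /etc/configs/pre_upgrade to the new filer's /etc/configs, e.g. via CIFS/NFS on /etc)
    new-filer> config restore pre_upgrade
    new-filer> reboot

That captures the filer's registry-style configuration, though I'd still sanity-check afterwards whether things like igroups and CHAP entries came across.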
I can't answer the SnapDrive questions as I haven't specialized in that for a few years.
Peter
-----Original Message----- From: Raj Patel [mailto:phigmov@gmail.com] Sent: Saturday, November 07, 2009 1:40 AM To: toasters@mathworks.com Subject: Controller upgrade complete but . . .
Hi all,
Hoping you can save me some legwork -
We've upgraded our 270c to a 2050HA and the data looks good. We did run into the LUN resignaturing issue with VMware, but since we were forewarned we were able to put the old signatures back onto the LUNs.
My biggest hassle is that the Windows boxes with iSCSI-attached LUNs appear to have lost their LUNs (i.e. SQL & Exchange). The IQN string changed - we set it back to what it should be - but SnapDrive appears to have lost its drive mappings, and the SAN lost its iSCSI initiator groups (luckily we can put them back manually from previous autosupports; rough sketch below).
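(For reference, we're planning to put them back with one-liners along these lines - all names made up:

    san> igroup create -i -t windows sql01_ig iqn.1991-05.com.microsoft:sql01.ourdomain.local
    san> lun map /vol/sqlvol/sql_data.lun sql01_ig 0

plus the CHAP entries for the DMZ boxes.)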
So I'm wondering a couple of things:
1. What's the fastest way to get SnapDrive to remember its LUNs? Even the MS iSCSI initiator seems to know what it thinks it should be seeing, but SD 6 has lost all its config. I know I can manually re-connect them all, but with DBAs and Exchange admins having weird and wonderful drive mappings this is going to be time-consuming.
2. Why did the SAN lose its initiator group config (across all LUNs, including CHAP security for our DMZ ESX servers), and why did SnapDrive lose its config (i.e. the SAN it connects to, the LUNs, igroups and drive mappings) as a result of a controller and ONTAP upgrade?
Next time I'll be sure to screenshot every LUN mapping in Windows, but surely I shouldn't need to do this?
Any post-upgrade tips to make my Sunday reconnecting systems a little more pleasant will be gratefully received.
Cheers, Raj.