My question is specifically about 8 Cluster-Mode, but I would be surprised if the answer differed for 7-Mode.
Can an aggregate be renamed without impacting NFS access by clients? Volumes cannot, obviously, but clients are never aware of aggregate information, so if I had to guess I'd say an aggregate could be renamed with impunity.
But I'd rather not guess...
Thanks.
Jeff Kennedy Qualcomm, Incorporated QCT Engineering Compute 858-651-6592
I've never had issues renaming aggregates in 7-Mode or GX (DOT 10).
filer*> aggr status
Aggr State Status Options
aggr1 online raid_dp, aggr root, raidsize=13
redirect
filer*> aggr rename aggr1 aggr2
'aggr1' renamed to 'aggr2'
filer*> aggr rename aggr2 aggr1
'aggr2' renamed to 'aggr1'
filer*>
Seems to work just fine; I am in 7-Mode, and it's even the root aggregate.
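For the 8 Cluster-Mode side of the question, the equivalent should be the storage aggregate rename command. A minimal sketch, with made-up cluster and aggregate names and the syntax quoted from memory rather than verified on 8.0:

cluster1::> storage aggregate rename -aggregate aggr1 -newname aggr2

The rename only changes the aggregate object's name, and nothing a client mounts refers to that name.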
Graham
Thanks to everyone who responded. Looks like renaming an aggregate has zero impact on concurrent client access.
Thanks.
Jeff Kennedy Qualcomm, Incorporated QCT Engineering Compute 858-651-6592
I can also confirm that in 8.0 Cluster-Mode, renaming aggregates will not impact client access to the volumes.
-Blake
As others have said, aggr renames are OK.
However, I'm not entirely sure why volume renames would be a problem since clients see the junction path rather than the actual volume name.
Hey Darren,
IIRC, by default a volume rename also causes a rename in /etc/exports and a re-export; clients would certainly notice that.
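For illustration only (the volume name and export options here are made up), the sort of /etc/exports entry that gets rewritten looks like:

/vol/vol_data  -sec=sys,rw,nosuid

so after something like

filer> vol rename vol_data vol_data_new

the path becomes /vol/vol_data_new and is re-exported under the new name; any client still mounting the old path would notice.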
Greetings,
Nils
Just an FYI to clear things up about this...
In 7-Mode there is an exports file, and it gets updated when the option nfs.export.auto-update is set to on.
In ONTAP Cluster-Mode there is no exports file. Each volume is mounted at a junction path, which is a volume attribute, so renaming the volume object does not alter that path; volume renames are seamless unless the junction path itself is changed.
So Nils is correct for 7-Mode, but in Cluster-Mode it's a whole new ballgame.
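As a rough illustration (the vserver and volume names are made up, and the syntax is from memory), you can watch the junction path survive a rename:

cluster1::> volume show -vserver vs1 -volume vol_data -fields junction-path
cluster1::> volume rename -vserver vs1 -volume vol_data -newname vol_data_new
cluster1::> volume show -vserver vs1 -volume vol_data_new -fields junction-path

The junction-path value reported before and after should be identical, which is why NFS clients keep working.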
Hello, I have a FAS6070 with Data ONTAP 7.2 and I want to upgrade to Data ONTAP 7.3.5.1.
The update fails. This is the error:
CPU Type: AMD Opteron(tm) Processor 852
Starting AUTOBOOT press Ctrl-C to abort... Loader:elf64 Filesys:fat Dev:ide0.0 File:X86_64/kernel/primary.krn Options:(null) Loading: Failed. Loader:elf64 Filesys:fat Dev:ide0.0 File:backup/X86_64/kernel/primary.krn Options:(null) Loading: 0x200000/32064968 0x20945c8/34790016 0x41c2048/2371097 0x4404e61/7 Entry at 0x00202018 Starting program at 0x00202018 Press CTRL-C for special boot menu
NetApp Release 7.2: Mon Jul 31 16:36:02 PDT 2006 Copyright (c) 1992-2006 Network Appliance, Inc. Starting boot on Thu Jul 14 21:26:04 GMT 2011 Thu Jul 14 21:26:49 GMT [fci.adapter.reset:info]: Resetting Fibre Channel adapter 0b. Thu Jul 14 21:26:51 GMT [fci.device.loop.recovery:error]: Loop recovery event caused by the device upstream from enclosure services device 0b.14. Thu Jul 14 21:26:52 GMT [disk.init.failureBytes:error]: Disk 0e.39 failed due to failure byte setting. Thu Jul 14 21:26:56 GMT [fci.adapter.reset:info]: Resetting Fibre Channel adapter 0b. Thu Jul 14 21:26:58 GMT [fci.device.loop.recovery:error]: Loop recovery event caused by the device upstream from enclosure services device 0b.14. Thu Jul 14 21:26:58 GMT [diskown.isEnabled:info]: software ownership has been enabled for this system sk_allocate_memory_hole: hole 0x000000007cd82000 end 0x000000007d582000 (first memory range)
(1) Normal boot. (2) Boot without /etc/rc. (3) Change password. (4) Initialize owned disks (69 disks are owned by this filer). (4a) Same as option 4, but create a flexible root volume. (5) Maintenance mode boot.
Selection (1-5)? Thu Jul 14 21:27:09 GMT [fci.adapter.reset:info]: Resetting Fibre Channel adapter 0b. Thu Jul 14 21:27:13 GMT [fci.device.loop.recovery:error]: Loop recovery event caused by the device upstream from enclosure services device 0b.14. 1 Thu Jul 14 21:27:17 GMT [fci.adapter.reset:info]: Resetting Fibre Channel adapter 0b. Thu Jul 14 21:27:19 GMT [fci.device.loop.recovery:error]: Loop recovery event caused by the device upstream from enclosure services device 0b.14. Thu Jul 14 21:27:19 GMT [fmmbx_instanceWorke:info]: normal mailbox instance on local side Thu Jul 14 21:27:19 GMT [fmmb.current.lock.disk:info]: Disk 0a.17 is a local HA mailbox disk. Thu Jul 14 21:27:19 GMT [fmmb.current.lock.disk:info]: Disk 0a.32 is a local HA mailbox disk. Thu Jul 14 21:27:20 GMT [coredump.spare.none:info]: No sparecore disk was found. Thu Jul 14 21:27:21 GMT [raid.cksum.replay.summary:info]: Replayed 0 checksum blocks. Thu Jul 14 21:27:21 GMT [raid.stripe.replay.summary:info]: Replayed 0 stripes. Thu Jul 14 21:27:24 GMT [fci.adapter.reset:info]: Resetting Fibre Channel adapter 0b. Thu Jul 14 21:27:26 GMT [rc:notice]: The system was down for 988 seconds Thu Jul 14 21:27:26 GMT [fci.device.loop.recovery:error]: Loop recovery event caused by the device upstream from enclosure services device 0b.14. Thu Jul 14 21:27:26 GMT [config.sameHA:warning]: Disk 0e.24 and other disks on this loop are dual-attached to the same adapter card. For improved availability they should be dual-attached to separate adapter cards.
Thu Jul 14 21:27:26 GMT [config.sameHA:warning]: Disk 0a.18 and other disks on this loop are dual-attached to the same adapter card. For improved availability they should be dual-attached to separate adapter cards.
Thu Jul 14 21:27:26 GMT [config.sameHA:warning]: Disk 0f.50 and other disks on this loop are dual-attached to the same adapter card. For improved availability they should be dual-attached to separate adapter cards.
Thu Jul 14 16:57:30 VET [dfu.firmwareUpToDate:info]: Firmware is up-to-date on all disk drives
Thu Jul 14 16:57:30 VET [sfu.firmwareUpToDate:info]: Firmware is up-to-date on all disk shelves.
Thu Jul 14 16:57:30 VET [GbE/e5b:info]: Ethernet e5b: Link up
Thu Jul 14 16:57:30 VET [GbE/e6b:info]: Ethernet e6b: Link up
add net default: gateway 129.90.60.1
Thu Jul 14 16:57:30 VET [nis.server.active:notice]: Bound to preferred NIS server 129.90.50.73
Thu Jul 14 16:57:31 VET [nis_worker_0:info]: Local NIS group update successful.
exportfs [Line 3]: no such directory, /vol/unix_bnd not exported
Thu Jul 14 16:57:31 VET [nis_worker_0:info]: Local NIS group update successful.
Thu Jul 14 16:57:31 VET [iscsi.service.startup:info]: iSCSI service startup
exportfs [Line 5]: no such directory, /vol/unix_explor/oritupano not exported
Thu Jul 14 16:57:32 VET [fci.adapter.reset:info]: Resetting Fibre Channel adapter 0b.
Thu Jul 14 16:57:32 VET [coredump.save.started:info]: Saving 26594M to /etc/crash/core.118045195.2011-07-01.19_38_13.nz ("PageFault (read invalid page) on address 0x28 code = 0 eip = bf0edf cs = 8 eflags = 10202 in process mnt_assist on release NetApp Release 7.2")
Thu Jul 14 16:57:32 VET [coredump.save.attempts.count:notice]: Too many attempts to save this core
Thu Jul 14 16:57:32 VET [coredump.save.error:notice]: /etc/crash/core.118045195.2011-07-01.19_38_13.nz processing encountered error
Thu Jul 14 16:57:32 VET [mgr.boot.disk_done:info]: NetApp Release 7.2 boot complete. Last disk update written at Thu Jul 14 16:40:53 VET 2011
download: Booted from a secondary boot device.
download: The primary boot device may be corrupt.
Thu Jul 14 16:57:32 VET [mgr.boot.reason_ok:notice]: System rebooted.
CIFS local server is running.
******<<<< ACCESO SOLO PARA PERSONAL DE ALMACENAMIENTO Y RESPALDO AIT-INTEVEP >>>>******
******<<<< SU ENTRADA ESTA SIENDO MONITOREADA >>>>*****
Data ONTAP (netapp05.pdvsa.com)
login: Thu Jul 14 16:57:33 VET [fci.device.loop.recovery:error]: Loop recovery event caused by the device upstream from enclosure services device 0b.14.
Data ONTAP (netapp05.pdvsa.com)
login: Thu Jul 14 16:57:39 VET [rlm.driver.network.failure:warning]: The network configuration of the Remote LAN Module (RLM) failed due to cable or network errors.
Thu Jul 14 16:57:39 VET [fci.adapter.reset:info]: Resetting Fibre Channel adapter 0b.
Thu Jul 14 16:57:41 VET [fci.device.loop.recovery:error]: Loop recovery event caused by the device upstream from enclosure services device 0b.14.
Data ONTAP (netapp05.pdvsa.com)
login:
Did you heed all the warnings and cautions before upgrading to 7.3.x? That is, do you have any ESH/LRC modules and/or DS14 disk shelves?
If you do, then you cannot upgrade. It looks like the system rebooted on the backup boot image (still the 7.2 variant).
You might want to check your sysconfig for ESH/LRCs and then physically check the shelves for DS14s (aka DS14mk1), as sketched below.
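Something along these lines from the 7-Mode console should list every shelf and its modules (output omitted here; the exact formatting varies by release):

filer> sysconfig -a

Look through the adapter/loop sections for DS14 shelves and ESH or LRC modules.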
--tmac Tim McCarthy Principal Consultant
RedHat Certified Engineer 804006984323821 (RHEL4) 805007643429572 (RHEL5)
Hello Carlos,
7.2 is a very old release, and it looks like the upgrade failed to properly install the boot image onto the CompactFlash card.
I would first of all upgrade to 7.2.7 (https://now.netapp.com/NOW/download/software/ontap/7.2.7/x86-64/727_setup_q....) and then retry the upgrade to 7.3.5.1 (https://now.netapp.com/NOW/download/software/ontap/7.3.5.1/x86-64/7351_setup...).
I would go the route of copying the file to /etc/software, then doing a software install, a download, and a storage download shelf; drink some tea/coffee, and once the shelves are updated, reboot.
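A minimal sketch of that sequence, from memory; the package filename is a placeholder for whatever you actually copied into /etc/software:

filer> software install <7.3.5.1 package file>
filer> download
filer> storage download shelf
filer> reboot

(download writes the new boot image to the CompactFlash card, and storage download shelf updates the shelf firmware before the reboot.)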
Hope this helps.
¡Buena suerte! Cheers Kenneth
Hi
One thing I forgot to mention is that as per http://now.netapp.com/NOW/knowledge/docs/ontap/rel7351/html/ontap/upgrade/GU... you should also do an update_flash from the firmware LOADER prompt to update the system BIOS firmware.
You should do this during both upgrades.
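From the boot firmware that would be something like (halt the controller first to reach the prompt; prompt name as per the guide above):

LOADER> update_flash

and then boot Data ONTAP again as usual.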
regards Kenneth