On 10/08/97 09:10:10 you wrote:
We have "raid swap"ed almost three shelves' worth of drives into an F330 at one time. The AE recommended this method as being faster than installing all the drives at config time and building that large file system from scratch. This worked fine, but it still takes a significant amount of time.
I think by "at one time" you still mean one at a time (a raid swap for each drive).
We later added a fourth shelf and "raid swap"ed all those drives in at once. The filer was only off the net for the length of time it took to halt/connect the SCSI cable/ reboot. A few hours later, after the raid swap completes, all the additional disk space shows up - it does not show up one drive at a time.
I don't quite understand this... if you added them while the filer was down, you didn't "raid swap" anything. And they show up immediately, not a few hours later. I think perhaps you are talking about *raid add*, i.e. actually adding the drives to the filesystem. In that case you can add them individually one drive at a time (and the filer would get the space one drive at a time), or you can raid add them all at once, in which case they'll all finish at about the same time.
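For what it's worth, the two approaches would look something like this at the filer console. This is only a sketch using the command name mentioned above and the disk naming from the log below; the exact syntax varies by ONTAP release, so check the man pages on your filer:

```
# One at a time: each add completes on its own, and the extra
# space shows up drive by drive.
filer> raid add 9a.2
filer> raid add 9a.3

# All at once: the adds run together and finish at about the
# same time, so the space appears in one lump.
filer> raid add 9a.2 9a.3 9a.4 9a.5
```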
Some comments regarding DLT drives - I have heard that you want to keep them "streaming" (fed with data) or the drive has to stop, wait for data, back up, reseek to the stop point, and start again. This applies to both the 4000 and 7000 series. You pay a significant speed penalty for dropping the drive out of streaming mode. Our word from Netapp is that for an F230, they sell the FW/Diff card and it works, but it is not officially QA'd in the system, though it soon should be.
Yes, streaming is important, but my point was that I don't think a 7000 non-streaming is going to be much slower than a 4000 streaming. But I could be wrong on this. In any case, there is *plenty* of start-stop behavior in a Netapp dump even locally, so I don't think you can avoid it.
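A quick back-of-envelope model makes the point concrete. All the numbers here are illustrative assumptions (not measured DLT 4000/7000 specs): a faster drive that drops out of streaming and pays a reposition penalty after every burst can still land in the same ballpark as a slower drive that streams continuously.

```python
# Toy model of DLT "start-stop" overhead: when the host can't keep
# the drive streaming, each underrun costs a stop/reposition/restart
# cycle. Every figure below is an assumption for illustration only.

def effective_rate(native_mb_s, reposition_s, burst_mb):
    """Average MB/s when the drive repositions for `reposition_s`
    seconds after every `burst_mb` of data written."""
    stream_time = burst_mb / native_mb_s      # seconds spent writing
    return burst_mb / (stream_time + reposition_s)

# Hypothetical: a 5 MB/s drive that repositions 3 s after each 10 MB
# burst, vs. a 1.5 MB/s drive that never falls out of streaming.
fast_stop_start = effective_rate(5.0, 3.0, 10.0)   # 2.0 MB/s effective
slow_streaming  = effective_rate(1.5, 0.0, 10.0)   # 1.5 MB/s

print(f"fast drive, start/stop: {fast_stop_start:.2f} MB/s")
print(f"slow drive, streaming:  {slow_streaming:.2f} MB/s")
```

Under these made-up numbers the non-streaming fast drive still edges out the streaming slow one, which is consistent with the point above; with a longer reposition time or smaller bursts the ordering flips.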
Bruce
On Wed, 8 Oct 1997 sirbruce@ix.netcom.com wrote:
I think by "at one time" you still mean one at a time (a raid swap for each drive).
We have a bunch of filers sitting idle waiting for some back-ordered components, so I've taken the opportunity to "stress" them and see what breaks. ;-) On a quiet filer (no exports), I started with a two-disk configuration (it doesn't seem to mind having only one parity and one data disk) and hot-plugged four more drives in rapid succession without a `raid swap'.
The usual warnings on the first write to a new disk were logged, but then the filer complained about all three SCSI buses (only 9a was in use at the time). There were no other difficulties.
Mon Nov 3 18:09:09 GMT [disk_config_admin]: *** NOTICE *** A disk has been swapped (removed or added) to a modular storage shelf. The system will wait 15 seconds and then check the status of all disk drives.
Mon Nov 3 18:10:42 GMT [disk_config_admin]: *** NOTICE *** Disk unit status check has found 4 new disks.
Mon Nov 3 18:10:42 GMT [disk_config_admin]: *** NOTICE *** A disk has been swapped (removed or added) to a modular storage shelf. The system will wait 15 seconds and then check the status of all disk drives.
Mon Nov 3 18:10:44 GMT [disk_config_admin]: Resetting SCSI adapter 9a (ha #1)
Mon Nov 3 18:10:44 GMT [disk_config_admin]: Resetting SCSI bus 9a (ha #1)
Mon Nov 3 18:10:46 GMT [isp_main]: Disk 9a.2(0x4aabd0): WRITE sector 0 unit attention (6 29, 0)
Mon Nov 3 18:10:46 GMT [isp_main]: Disk 9a.2(0x4aabd0): request succeeded after retry #1
Mon Nov 3 18:10:46 GMT [isp_main]: Disk 9a.1(0x4aad50): READ sector 0 unit attention (6 29, 0)
Mon Nov 3 18:10:46 GMT [isp_main]: Disk 9a.1(0x4aad50): request succeeded after retry #1
Mon Nov 3 18:10:46 GMT [disk_config_admin]: isp_reset_device: 9a.2 (1.2) failed
Mon Nov 3 18:10:46 GMT [isp_main]: isp_error_proc: isp 9a (ha #1) scsi bus reset occured ( 0x200 0x40 0x11 )
Mon Nov 3 18:10:46 GMT [isp_main]: 9a.2: Unexpected SCSI HA error 0
Mon Nov 3 18:10:47 GMT [isp_main]: Disk 9a.2(0x4adbd0): request succeeded after retry #2
Mon Nov 3 18:10:47 GMT [isp_main]: Disk 9a.1(0x4add50): READ sector 0 unit attention (6 29, 0)
Mon Nov 3 18:10:47 GMT [isp_main]: Disk 9a.1(0x4add50): request succeeded after retry #1
Mon Nov 3 18:10:49 GMT [raid_disk_admin]: Spare disk 2 has been added to the system.
Mon Nov 3 18:10:49 GMT [isp_main]: Disk 9a.3(0x4b9750): WRITE sector 0 unit attention (6 29, 0)
Mon Nov 3 18:10:49 GMT [isp_main]: Disk 9a.3(0x4b9750): request succeeded after retry #1
Mon Nov 3 18:10:53 GMT [raid_disk_admin]: Spare disk 3 has been added to the system.
Mon Nov 3 18:10:53 GMT [isp_main]: Disk 9a.4(0x4ad150): WRITE sector 0 unit attention (6 29, 0)
Mon Nov 3 18:10:53 GMT [isp_main]: Disk 9a.4(0x4ad150): request succeeded after retry #1
Mon Nov 3 18:10:55 GMT [isp_main]: Disk 9a.0(0x4b9d50): WRITE sector 0 unit attention (6 29, 0)
Mon Nov 3 18:10:55 GMT [isp_main]: Disk 9a.0(0x4b9d50): request succeeded after retry #1
Mon Nov 3 18:10:57 GMT [raid_disk_admin]: Spare disk 4 has been added to the system.
Mon Nov 3 18:10:57 GMT [isp_main]: Disk 9a.5(0x4b8250): WRITE sector 0 unit attention (6 29, 0)
Mon Nov 3 18:10:57 GMT [isp_main]: Disk 9a.5(0x4b8250): request succeeded after retry #1
Mon Nov 3 18:10:59 GMT [isp_main]: isp_error_proc: isp 0 (ha #0) completed unsuccessfully ( 0x0 0x100 0x40 )
Mon Nov 3 18:11:00 GMT last message repeated 6 times
Mon Nov 3 18:11:00 GMT [raid_disk_admin]: Spare disk 5 has been added to the system.
Mon Nov 3 18:11:00 GMT [isp_main]: isp_error_proc: isp 0 (ha #0) completed unsuccessfully ( 0x0 0x100 0x40 )
Mon Nov 3 18:11:06 GMT last message repeated 22 times
Mon Nov 3 18:11:06 GMT [isp_main]: isp_error_proc: isp 9a (ha #1) completed unsuccessfully ( 0x0 0x100 0x40 )
Mon Nov 3 18:11:11 GMT last message repeated 17 times
Mon Nov 3 18:11:11 GMT [isp_main]: isp_error_proc: isp 9b (ha #2) completed unsuccessfully ( 0x0 0x100 0x40 )
Mon Nov 3 18:11:18 GMT last message repeated 29 times
Mon Nov 3 18:11:18 GMT [disk_config_admin]: *** NOTICE *** Disk unit status check has completed.