What are the largest disks you can put in an FC9 shelf? I'd like to make the jump from 36GB drives to 144GB if possible.
Thanks
art
-----Original Message-----
From: Jeff Burton [mailto:burtonj@pprd.abbott.com]
Sent: Tuesday, February 04, 2003 7:47 AM
To: toasters@mathworks.com
Subject: Re: 144GB Fiber Channel drives in the field
Here are the results of the tests I ran to check the RAID reconstruct speed of a node in an 880c with 72GB drives. The filer was not being used, so the only load was the reconstruct, which makes this just the base time to reconstruct. One could find a multiplier to estimate the reconstruct speed under the average load on their filers... although I don't know how easy that would be to find.
RAID set size   Elapsed (time to reconstruct)
      2         29:43.70  (29 min, 43.70 sec)
      3         39:59.72
      4         51:58.87
      5         1:04:17.90
      6         1:16:40.76
      7         1:29:39.19
      8         1:43:01.75
      9         1:56:02.55
     10         2:09:19.00
     11         2:22:39.12
As you can see, the time to reconstruct appears to go up by about 11-14 minutes for each additional disk (a rough fit of the numbers is below).

More info on the tests:
- cluster: 880
- all disks were on the same shelf: DS14
- all disks were zeroed before the reconstruct
- raid.reconstruct.perf_impact: medium
- raid.resync.perf_impact: medium
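If anyone wants to play with the numbers, here is a quick Python sketch (just arithmetic on the table above, nothing ONTAP-specific) that does a straight-line fit to the measured times and extrapolates to bigger RAID sets. The 12-14 disk sizes are my own extrapolation targets, and the fit obviously only holds as far as the data does.

# Back-of-the-envelope fit to the 72GB reconstruct times above
# (RAID set size vs. elapsed time on an otherwise idle filer).
# Assumes the growth stays linear, which the data only shows up to 11 disks.

measured = {  # RAID set size -> elapsed seconds, from the table above
    2: 29 * 60 + 43.70,
    3: 39 * 60 + 59.72,
    4: 51 * 60 + 58.87,
    5: 1 * 3600 + 4 * 60 + 17.90,
    6: 1 * 3600 + 16 * 60 + 40.76,
    7: 1 * 3600 + 29 * 60 + 39.19,
    8: 1 * 3600 + 43 * 60 + 1.75,
    9: 1 * 3600 + 56 * 60 + 2.55,
    10: 2 * 3600 + 9 * 60 + 19.00,
    11: 2 * 3600 + 22 * 60 + 39.12,
}

sizes = sorted(measured)
n = len(sizes)
mean_x = sum(sizes) / float(n)
mean_y = sum(measured.values()) / float(n)

# Ordinary least-squares slope: seconds added per extra disk in the RAID set.
num = sum((s - mean_x) * (measured[s] - mean_y) for s in sizes)
den = sum((s - mean_x) ** 2 for s in sizes)
slope = num / den
intercept = mean_y - slope * mean_x

print("per-disk increase: about %.1f minutes" % (slope / 60))
for size in (12, 13, 14):  # beyond what was measured -- pure extrapolation
    est = intercept + slope * size
    print("estimated reconstruct, %2d-disk RAID set: %d:%02d (h:mm)"
          % (size, est // 3600, (est % 3600) // 60))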
Procedure repeated for each RAID size:
- priv set diag
- setflag raid_enable_prezeroing 1
  (after done, all disks zero'ed)
- setflag raid_enable_prezeroing 0
- vol create vol1 -r 14 11
  (after volume is created)
- disk fail (parity disk in vol1)
  (times taken)
- vol offline vol1
- vol destroy vol1
- disk unfail (disk failed)
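To tie this back to the 144GB question: one rough way to guess is to turn the elapsed times into an effective per-drive rebuild rate and then assume a 144GB drive rebuilds at about the same rate, i.e. roughly twice the elapsed time. That constant-rate assumption is mine, not anything NetApp states, so treat it as a ballpark only. A small sketch using the best and worst cases from the table:

# Ballpark only: assumes a 144GB drive reconstructs at the same MB/s as a
# 72GB drive, so the elapsed time simply doubles. Drive size is nominal
# (72 GB ~ 72000 MB), not formatted capacity.

DRIVE_MB = 72000.0

samples = {  # RAID set size -> elapsed seconds, best/worst cases from the table
    2: 29 * 60 + 43.70,
    11: 2 * 3600 + 22 * 60 + 39.12,
}

for size in sorted(samples):
    secs = samples[size]
    rate = DRIVE_MB / secs   # effective MB/s written to the hot spare
    est_144 = 2 * secs       # same rate, twice the data
    print("%2d-disk RAID set: ~%.0f MB/s, est. 144GB reconstruct ~ %d:%02d (h:mm)"
          % (size, rate, est_144 // 3600, (est_144 % 3600) // 60))

Under real client load the perf_impact settings and traffic will stretch these numbers, per the note above about finding a multiplier for a loaded filer.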
72GB is the largest drive supported by FC9. As far as filers go, you need at least an F800 series with an FC14 shelf to use a 144GB drive.
Jeff
On Wed, Jun 18, 2003 at 12:35:24PM -0700, Art Hebert wrote:
What are the largest disks you can put in an FC9 shelf? I'd like to make the jump from 36GB drives to 144GB if possible.
Thanks
art
--
Jeff Burton                   UNIX Administrator
Dept. GB34/Bldg. AP9A-LL      EMAIL: burtonj@pprd.abbott.com
Abbott Laboratories           PHONE: 847-935-5778
100 Abbott Park Rd.           FAX:   847-935-0142
Abbott Park, IL 60064-6115
On Mon, Feb 03, 2003 at 06:52:31PM -0800, kallen@collab.net wrote:
hiya
anyone using the 144GB drives?
currently we have DS14s filled with 36GB drives. these are used in a production environment, so of course we need maximum uptime. we're considering getting more DS14s filled with either 72GB or 144GB drives. we want the most bang for the buck, but i have concerns about the 144GB drives. anyone out there have any experience with them?
are the reconstruct times for a 144GB disk proportionally prohibitive when compared to 72GB and 36GB drives? how long does it take to reconstruct a failed 144GB drive on a hot spare compared to 72GB and 36GB disks, under consistent filer workload and raid reconstruct speed?
are there performance issues to consider if we were to go with 144GB rather than 72GB?
any other pros and cons?
thanks in advance, kallen
Bottom line - about 13 minutes a drive +/- 7 secs avg.
Hunter M. Wylie
21193 French Prairie Rd, Suite 100
St. Paul, Oregon 97137-9722
Bus:  866-367-8900
FAX:  503-633-8901
Cell: 503-880-1947
-----Original Message-----
From: owner-toasters@mathworks.com [mailto:owner-toasters@mathworks.com] On Behalf Of Art Hebert
Sent: Wednesday, June 18, 2003 12:35 PM
To: 'Jeff Burton'; toasters@mathworks.com
Subject: RE: 144GB Fiber Channel drives in the field
What are the largest disks you can put in an FC9 shelf? I'd like to make the jump from 36GB drives to 144GB if possible.
Thanks
art