here's the situation. i have an F330, currently with 20 4Gb discs in three shelves. it is not performance-bound, it's capacity-bound. i'd heard (from netapp) sometime back that, at some future date, mixed 4 and 9 arrays would be permitted, but that i couldn't remove existing 4s, and that 9s would require a dedicated shelf. therefore, i'd tried to keep my disc load on three shelves, and so far succeeded.
if i understand correctly, whilst i will shortly be allowed to mix 4s and 9s, i won't be allowed to use a full set of 9s - in fact, i'll be limited to 21*4 + 3*9 for data, with one hot spare and one parity, leaving me two disc slots i can't use.
is this in fact the case, and if so, would anyone care to try to explain why, *bearing in mind that this toaster is definitely not performance-limited*?
Tom Yates - Unix Chap - The Mathworks, Inc. - +1 (508) 647 7561
MAG#65061  DoD#0135  AMA#461546
1024/CFDFDE39  0C E7 46 60 BB 96 87 05  04 BD FB F8 BB 20 C1 8C
     "Microsoft (tm) is a single-sofa supplier"
> if i understand correctly, whilst i will shortly be allowed to mix 4s and 9s, i won't be allowed to use a full set of 9s - in fact, i'll be limited to 21*4 + 3*9 for data, with one hot spare and one parity, leaving me two disc slots i can't use.
We have no capability in our software, at this time, to *remove* a disk drive from the array. The data on a 4-GB disk drive is mapped as such. You can add 9-GB disk drives to an array, and they will be used as 9-GB disk drives. But if they replace an existing 4-GB disk drive, they will be seen by the system as 4-GB of data. I agree it would be useful to be able to map the additional space on the replacement disk drive, but the software doesn't have that capability at this time.
What you can do, of course, is dump your data to tape (or some other location), rebuild your file system entirely with 9-GB disk drives, and then reload the data onto the newly-defined array. I know that this is very inconvenient, and I'm not suggesting that you should do it, I'm only pointing out that it is possible.
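Not the official procedure, just a minimal client-side sanity check one might run after such a dump-and-reload, assuming both the old copy (or a restore of the dump) and the rebuilt volume are NFS-mounted on a Unix client; the mount points below are hypothetical:

    # Hypothetical post-migration check: compare file counts and total bytes
    # between the original data (or a restored copy of the dump) and the
    # reloaded tree on the rebuilt volume.  Mount points are examples only.
    import os

    def tree_stats(root):
        """Return (file_count, total_bytes) for a directory tree."""
        files, size = 0, 0
        for dirpath, _dirnames, filenames in os.walk(root):
            for name in filenames:
                path = os.path.join(dirpath, name)
                if os.path.isfile(path):
                    files += 1
                    size += os.path.getsize(path)
        return files, size

    old = tree_stats("/mnt/old_toaster/vol0")   # hypothetical mount points
    new = tree_stats("/mnt/new_toaster/vol0")
    print("old:", old, "new:", new, "OK" if old == new else "MISMATCH")

A byte-for-byte comparison (or the dump tool's own verify pass) would obviously be more thorough; this only catches gross omissions.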
-- Andy
Andy Watson                        Director, Technical Marketing
watson@netapp.com                  Network Appliance
+1 408 367 3220 voice              2770 San Tomas Expressway
+1 408 367 3151 fax                Santa Clara, CA 95051
                                   http://www.netapp.com/
"It's really stupid to be an intellectual when you're young. You should be an intellectual when you're a hundred years old and can't feel anything anymore." -- a character in Bruce Sterling's novel, HOLY FIRE
> here's the situation. i have an F330, currently with 20 4Gb discs in three shelves. it is not performance-bound, it's capacity-bound. i'd heard (from netapp) sometime back that, at some future date, mixed 4 and 9 arrays would be permitted, but that i couldn't remove existing 4s, and that 9s would require a dedicated shelf. therefore, i'd tried to keep my disc load on three shelves, and so far succeeded.
> if i understand correctly, whilst i will shortly be allowed to mix 4s and 9s, i won't be allowed to use a full set of 9s - in fact, i'll be limited to 21*4 + 3*9 for data, with one hot spare and one parity, leaving me two disc slots i can't use.
> is this in fact the case, and if so, would anyone care to try to explain why, *bearing in mind that this toaster is definitely not performance-limited*?
Do you know if the mixed-shelf config has been announced? The person you spoke with may have just been speculating, rather than actually promising anything...
In any case, the reason is that the F330 is limited in the maximum filesystem size it can have (117GB). Not sure if the reasons are entirely technical or marketing driven, but clearly one can see the desire of a company to differentiate a product line by capacity, even if you could *theoretically* get more capacity. One could ask a similar question - why not 3 SCSI cards? The slots are there... but for whatever reason, Network Appliance has chosen not to offer/support that configuration on an F330. There are other customer considerations in general that also might not relate to your specific case, such as the amount of time to do a backup and restore, reconstruction time, etc.
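A quick check of that arithmetic against the numbers in this thread, assuming nominal drive sizes and assuming (my guess, not Netapp's statement) that the 117GB figure counts data drives only:

    # Back-of-envelope capacity check for the configuration under discussion.
    # Assumes nominal drive sizes and that the ~117GB F330 limit applies to
    # data drives only (parity and hot spare excluded) -- an assumption.
    F330_LIMIT_GB = 117

    def data_capacity(n_4gb, n_9gb):
        """Total data capacity (GB) for a mix of 4GB and 9GB data drives."""
        return n_4gb * 4 + n_9gb * 9

    print(data_capacity(21, 3))   # 111 -- fits under the 117GB limit
    print(data_capacity(21, 4))   # 120 -- one more 9GB data drive exceeds it
    print(data_capacity(22, 3))   # 115 -- also fits, so the exact accounting
                                  #        (formatted sizes, etc.) must differ

So the 21*4 + 3*9 figure is at least consistent with a hard cap of that order, even if the precise bookkeeping isn't obvious from the outside.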
I have the same sort of issue at times... there are cases where I need a lot of online storage, but I don't really need a 500 MHz alpha CPU to handle the load. This type of storage may almost be archival in nature, and thus perhaps a nearline or HSM solution would be more appropriate for the data.
Bruce
sorry if this looks like flogging a dead horse; i've been a bit busy and this has been sitting in a composition window for two weeks. also, i'm pretty annoyed about this, and i didn't want to be writing email when i was flat-out livid.
On Thu, 22 May 1997, Bruce Sterling Woodcock wrote:
> Do you know if the mixed-shelf config has been announced? The person you spoke with may have just been speculating, rather than actually promising anything...
i asked what i could refer to in open media before i got on my high horse. i was asked not to mention that ONTAP version or the projected release date, but was told that it was OK to mention the existence of the feature.
i can see that keeping RAID reconstruct time low is a win, but arbitrary lines drawn in the sand do tend to look like marketing somewhat more than they look like engineering. particularly, i'd been told some time back that i'd be able to use 9s provided i kept a shelf free for them, which i've done; now i find i can only half-fill that shelf. i can buy another shelf of 4s, but this is essentially lost investment if i migrate up to a bigger system, whereas 9s aren't (if i understand correctly).
i don't feel that i'm trying to go beyond the edge, here; i'm not trying to fit a fifth shelf of discs, or go FWDifferential and use more than seven in a shelf. i'm observing that i have a toaster which is capacity-limited (not performance-limited), that i can fit a fourth shelf to it, that it can take 9Gb discs, and then asking why i can't fill the shelf up, given that my RAID reconstruct times are currently in the 20-minute range. that's all.
> I have the same sort of issue at times... there are cases where I need a lot of online storage, but I don't really need a 500 MHz alpha CPU to handle the load. This type of storage may almost be archival in nature, and thus perhaps a nearline or HSM solution would be more appropriate for the data.
possibly; however, the question isn't "what else could i buy to do this", but "how can i get the best value out of my toasters".
my perception of netapp has not, historically, been of a company who will dictate what i'll buy and how i'll use it, but i'm open to being told that it should be otherwise.
Tom Yates - Unix Chap - The Mathworks, Inc. - +1 (508) 647 7561
MAG#65061  DoD#0135  AMA#461546
1024/CFDFDE39  0C E7 46 60 BB 96 87 05  04 BD FB F8 BB 20 C1 8C
     "Microsoft (tm) is a single-sofa supplier"
> i asked what i could refer to in open media before i got on my high horse. i was asked not to mention that ONTAP version or the projected release date, but was told that it was OK to mention the existence of the feature.
Sounds like typical marketing/sales spin. :) Now, I'm not trying to say you were lied to or misled. My only point was that the capability doesn't exist right now, and might not exist for some time, so you might not want to wait for it. Then again, it might come out tomorrow... even if I knew everything that was on the agenda, I probably couldn't tell you. :)
> i can see that keeping RAID reconstruct time low is a win, but arbitrary lines drawn in the sand do tend to look like marketing somewhat more than they look like engineering. particularly, i'd been told some time back that i'd be able to use 9s provided i kept a shelf free for them, which i've done; now i find i can only half-fill that shelf. i can buy another shelf of 4s, but this is essentially lost investment if i migrate up to a bigger system, whereas 9s aren't (if i understand correctly).
I think some people were even told that they could put 9s in the same shelf. Both of these were probably genuine statements of direction at the time, but things do change. Those other people lost their investment as well... they took a calculated risk, right?
> [...] the question isn't "what else could i buy to do this", but "how can i get the best value out of my toasters".
> my perception of netapp has not, historically, been of a company who will dictate what i'll buy and how i'll use it, but i'm open to being told that it should be otherwise.
Well, I don't want to get into the tricky position of defending Netapp. Nor criticizing them. :) But from what I can see currently, what you want won't work. Netapp doesn't offer that much value in your platform... you would have to upgrade to a newer model. Now, as far as your personal dissatisfaction, I would suggest bringing it up with your sales rep... they might be able to work some deal so you can recoup some of your investment.
Bruce
+--- In our lifetime, Tom Yates <madhatta@mathworks.com> wrote:
|
| if i understand correctly, whilst i will shortly be allowed to mix 4s and
| 9s, i won't be allowed to use a full set of 9s - in fact, i'll be limited
| to 21*4 + 3*9 for data, with one hot spare and one parity, leaving me two
| disc slots i can't use.
I believe this is due to file system limitations; the 220 was limited to 50G, the 330 to 100G. Only the 540's were to be able to raise the file system limit by using 9G drives. The upgrade proc should be like going from the 2G to 4G drives (fail the parity, rebuild on the 1st new large drive).
This could be a technical limitation (doubtful) or a marketing imposed one (very probable). What better incentive to get you to buy a shiny new filer (or upgrade your existing one) than being out of space? (Not that there is anything wrong with that.)
So, rather than spending $15k or so (pure speculation as to what NetApp certified 9GB disks may cost) on 3 drives and only getting 9GB worth of space, you should probably just fill the rest out with 4G drives.
Perhaps there may be a better explanation for this.
Alexei
> | if i understand correctly, whilst i will shortly be allowed to mix 4s and
> | 9s, i won't be allowed to use a full set of 9s - in fact, i'll be limited
> | to 21*4 + 3*9 for data, with one hot spare and one parity, leaving me two
> | disc slots i can't use.
> I believe this is due to file system limitations; the 220 was limited to 50G, the 330 to 100G. Only the 540's were to be able to raise the file system limit by using 9G drives.
Unless I'm sorely mistaken, the F540 is similarly limited to 200GB.
> This could be a technical limitation (doubtful) or a marketing imposed one (very probable).
I won't argue against there being a marketing component to our logic, but there really is at least some technical reasoning -- the longer it takes to reconstruct after a drive failure, the greater the exposure there is to a multiple drive failure and consequent loss of data. We therefore make sure that a given filer has sufficient CPU performance to reconstruct a fully-configured array in no more than a target amount of time. (I think the target is 8 hours but it's late and I'm picking that from non-parity neurons. ;-))
For an F330, 100GB is the limit beyond which the window of exposure to multiple drive failures becomes unacceptably long. Therefore, even though Tom's F330 is not performance-bound *in normal operation* we won't support >100GB on it.
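For the curious, here is a rough sketch of that exposure argument with entirely made-up numbers (the MTBF and reconstruct times below are illustrative guesses, not Netapp figures):

    # Rough model of the "window of exposure" during a RAID reconstruct:
    # the chance that one of the surviving drives also fails before the
    # reconstruct finishes, assuming independent exponential failures.
    # All numbers are illustrative guesses.
    import math

    def p_second_failure(surviving_drives, reconstruct_hours, mtbf_hours):
        rate = surviving_drives / float(mtbf_hours)   # combined failure rate
        return 1 - math.exp(-rate * reconstruct_hours)

    MTBF = 500000   # hours; a guess at a mid-90s drive spec-sheet number
    for hours in (0.33, 8.0):       # ~20-minute vs. 8-hour reconstruct window
        # 24 surviving drives is roughly the size of Tom's array
        print(hours, p_second_failure(24, hours, MTBF))

The absolute numbers are tiny either way; the point is only that the exposure scales linearly with the reconstruct window, which itself grows with drive size and array size.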
> Perhaps there may be a better explanation for this.
I hope my version is at least a little better.
-- Karl
+--- In our lifetime, kls@netapp.com (Karl Swartz) wrote:
|
| > I believe this is due to file system limitations; the 220 was limited
| > to 50G, the 330 to 100G. Only the 540's were to be able to raise the
| > file system limit by using 9G drives.
|
| Unless I'm sorely mistaken, the F540 is similarly limited to 200GB.
When we purchased the initial f540's, we were told the opposite. The story was, once the 9GB drives were available for end users, f540 owners would be able to double their file system.
Of course, this was way before the 630 was ever talked about.
I need distributed writes more than anything else right now. 200GB is plenty for now :)
Actually, where does the f540 fall now? From looking at the docs on the web at http://www.netapp.com/products/level3/netappfilers.html, it looks like it falls (into the cracks) between the 520 and 630.
The 520 can do 28 drives max whereas the 630 can do 52 drives.
Don't my f540's do 52 drives as well?
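As a rough aside on how a drive-count limit and a filesystem-size limit line up (my arithmetic at nominal drive sizes, not anything from the data sheets):

    # How many data drives fit under a given filesystem-size cap, at
    # nominal drive sizes.  Illustrative arithmetic only.
    def max_data_drives(limit_gb, drive_gb):
        return limit_gb // drive_gb

    print(max_data_drives(200, 4))   # 50 -- plus a parity and a hot spare is 52
    print(max_data_drives(200, 9))   # 22 -- a 9GB array hits the cap much sooner

If the F540's cap really is 200GB, then 50 4GB data drives plus a parity and a spare lands right on 52 -- possibly a coincidence, possibly not.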
| to reconstruct a fully-configured array in no more than a target
| amount of time. (I think the target is 8 hours but it's late and I'm
| picking that from non-parity neurons. ;-))
Add to that "with acceptable performance degradation" and you can put it on the product literature... :)
I had forgotten how long it can take a loaded filer to rebuild a disk.
Thanks,
Alexei