If you have a raid group of 18GB drives, and add a 36GB drive, the 36GB drive becomes the new parity drive. The old 18GB parity drive becomes a data drive, giving you 18GB of space. After that, all 36GB drives you add give you the full 36GB of space.
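To make the capacity math concrete, here is a toy sketch of how usable space works out under RAID-4 (dedicated parity, as on NetApp filers), where the largest drive in the group serves as parity. The function name is my own, just for illustration:

```python
def raid4_usable(drive_sizes_gb):
    """Usable data capacity of a RAID-4 group: the largest drive
    becomes the dedicated parity drive; every other drive
    contributes its full capacity as data space."""
    sizes = sorted(drive_sizes_gb)
    sizes.pop()  # largest drive is consumed as parity
    return sum(sizes)

# 4x18GB: one 18GB drive is parity, so 3x18 = 54GB usable.
print(raid4_usable([18, 18, 18, 18]))        # 54

# Add a 36GB drive: it takes over parity duty, and the old
# 18GB parity drive becomes a data drive, adding 18GB.
print(raid4_usable([18, 18, 18, 18, 36]))    # 72
```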
The "problem" with this is that you end up with hot spots - with (for example) 4x18GB drives and 3x36GB drives in a single raid group, you'd end up with some parts of the volume where the stripe width is narrower, potentially limiting performance. The "lower half" of the filesystem would be striped across all 6 data drives (the 4 18's and 2 36's) but writes to the "upper half" would only touch the 2 36GB data drives.
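The narrowing stripe width can be sketched with a toy model (this is not how WAFL actually lays out blocks, just an illustration of the geometry): at any given "height" into the volume, only the data drives with capacity remaining at that offset can hold blocks.

```python
def stripe_width_at(offset_gb, data_drive_sizes_gb):
    """Number of data drives that still have capacity at a given
    offset into the volume -- i.e., how wide a stripe written at
    that 'height' can be in this simplified model."""
    return sum(1 for size in data_drive_sizes_gb if size > offset_gb)

# 4x18GB + 2x36GB data drives (the third 36GB drive is parity):
data_drives = [18, 18, 18, 18, 36, 36]
print(stripe_width_at(10, data_drives))  # lower half: 6 drives wide
print(stripe_width_at(30, data_drives))  # upper half: only the 2 36GB drives
```

Writes landing in that upper region hit only two spindles instead of six, which is exactly the hot spot described above.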
FWIW, we (hi, Darrell!) helped discover this about 5 years ago on an F330 that was grown from 2GB drives to a mix of 2GB and 4GB drives. (We beat the snot out of that box; it averaged over 700 NFS ops/sec 24x7 for over two years...) Most of the data on the filesystem before the addition of the larger drives was fairly static, which exacerbated the problem quite a bit, since all new writes were spread over fewer drives. Empirical studies (i.e., watching the blinky lights!) highlighted the problem quite clearly. Perhaps backing up and restoring the volume would help a bit, but what a pain...
So that's my recollection of why Netapp recommends against mixing drive sizes within a raid group. It's possible, but it may have a performance impact.
I'm sure someone will leap in and correct me if I'm wrong. :-)
Ta,
-- Chris
--
Chris Lamb, Unix Guy
MeasureCast, Inc.
503-241-1469 x247
skeezics@measurecast.com