Many of you have mentioned software disk ownership, which I've seen on 2050Cs but I didn't realise it was doable on the 3040C, so thanks for that.
So, following on from that, I've just realised that if we add the entire shelf in, we'll hit the 16TB aggregate limit. Now I know I can just add a few disks instead, which is great, but what happens with raid group sizing? As I understand it, if I add fewer disks than the raid group size (13 in this case), then I'll create a hot spot across those disks, which is less than ideal.
A slight aside, but I'm finding NetApp's recommendations on raid group size a bit confusing. We have shelves of 14 disks, but NetApp recommends raid group sizes of 14 or 8 depending on the size of the disks. Either option means that once you take spare disks into account, you spill over the shelf, so you're bound to have a few stragglers at the end. Which is why we've set the raid group size to 13: a nice full shelf, with a spare on the end. Am I missing something?
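Just to make the arithmetic above concrete, here's a rough back-of-envelope sketch (the function name and the one-spare-per-shelf assumption are mine, not anything from NetApp's tools):

```python
# Illustrative sketch only: naive fill of raid groups across shelves,
# reserving one hot spare per shelf as described in the post.

def layout(shelves, disks_per_shelf, rg_size):
    """Return (full_raid_groups, leftover_disks)."""
    spares = shelves  # assumption: one spare per shelf
    data_disks = shelves * disks_per_shelf - spares
    return data_disks // rg_size, data_disks % rg_size

# One 14-disk shelf, raid group size 13: one full group, no stragglers.
print(layout(1, 14, 13))   # -> (1, 0)

# Two 14-disk shelves, recommended size 14: one full group plus
# 12 leftover disks that can't fill a second group.
print(layout(2, 14, 14))   # -> (1, 12)
```

So with size 13 a shelf divides up cleanly, while the recommended sizes leave a short group once spares are accounted for, which matches the "stragglers" observation.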
Peta
Sounds reasonable to me. However, there isn't much point in going to a lot of effort to confine a raid group to a single shelf. Performance is actually a little better if you stripe across shelves. Furthermore, as soon as you have a disk failure, the filer will pick a hot spare somewhat arbitrarily, and the spare may be on a different shelf than the failed disk.
Steve Losen scl@virginia.edu phone: 434-924-0640
University of Virginia ITC Unix Support