Hi,
We normally build our clusters as per (what I believe is) best practice, with an equal number of identical shelves on each loop. However, we've hit an emergency: a currently unused shelf (not in any aggregate) on one head would be far more useful on the other head, where we need to grow a very full volume.
Due to the nature of the problem, just presenting a new volume from this shelf is not a good enough fix; I need to grow the existing volume, ASAP.
So how bad an idea would it be to wire that shelf into the other head? Will the cluster complain? The cluster in question is a 3040C, and we'd end up with three SATA shelves on one head and one on the other.
Peta
As the 3040C uses software-based disk ownership anyway, there is no need to change the cabling at all. Just remove disk ownership from the spares on one head and assign them to the other.
If you have a single-path connection, this also gives you the additional benefit of distributing I/O over two controllers :)
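For reference, a minimal sketch of the commands involved, assuming Data ONTAP 7.x, filerA currently owning the spares, and filerB needing them; the disk name 0b.16 and both filer names are placeholders:

    filerA> aggr status -s                     # identify the spare disks on the unused shelf
    filerA> disk assign 0b.16 -s unowned -f    # release ownership (-f forces it; repeat per disk)

    filerB> disk show -n                       # list unowned disks visible to this head
    filerB> disk assign 0b.16 -o filerB        # claim each disk for this head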
With best regards,
--- Andrey Borzenkov, Senior System Engineer
Many of you have mentioned software disk ownership, which I've seen on 2050Cs but I didn't realise it was doable on the 3040C, so thanks for that.
Following on from that, I've just realised that if we add the entire shelf in, we'll hit the 16TB aggregate limit. So now I know I can just add a few disks, which is great, but what happens with raid group sizing? As I understand it, if I add fewer disks than the raid group size (13 in this case), I'll create a hot spot across those disks, which is less than ideal.
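For reference, a sketch of checking the headroom first and then adding a handful of disks, assuming Data ONTAP 7.x; the aggregate name aggr1 and the disk count are placeholders:

    filerB> df -Ag aggr1          # current aggregate size against the 16TB limit
    filerB> aggr add aggr1 -n 4   # dry run: prints the disks that would be selected
    filerB> aggr add aggr1 4      # add 4 disks; they extend the last raid group until it is full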
A slight aside, but I'm finding NetApp's recommendations on raid group size a bit confusing. We have shelves of 14 disks, but NetApp recommends raid group sizes of 14 or 8 depending on the disk size. Either option means that once you take spare disks into account, you spill over the shelf, so you're bound to have a few stragglers at the end. Which is why we've set the raid group size to 13 - a nice full shelf, with a spare at the end. Am I missing something?
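For reference, the arithmetic behind the raidsize-13 scheme, assuming RAID-DP, plus the option that sets it (aggr1 is a placeholder):

    14 disks per shelf = 1 raid group of 13 (11 data + 2 parity) + 1 hot spare

    filerB> aggr options aggr1 raidsize 13   # applies to raid groups created or extended from now on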
Peta
Sounds reasonable to me. However, there isn't much point in going to a lot of effort to confine a raid group to a single shelf. Performance is actually a little better if you stripe across shelves. Furthermore, as soon as you have a disk failure, the filer will pick a hot spare somewhat arbitrarily, and the spare may be on a different shelf than the failed disk.
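For reference, two commands that show how raid groups actually map onto shelves, assuming Data ONTAP 7.x; aggr1 is a placeholder:

    filerB> aggr status -r aggr1   # raid layout, disk by disk (shelf and bay visible in the names)
    filerB> aggr status -s         # current hot spares and the shelves they sit on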
Steve Losen scl@virginia.edu phone: 434-924-0640
University of Virginia ITC Unix Support