You can do a few things. Here is one:
Place the 5 shelves of 10K SAS on one HA pair (nodes 1/2).
Place the 5 shelves of 7.2K SAS on the other HA pair (nodes 3/4).

If you want to "evenly" distribute disks within each pair, use software disk ownership (a command sketch follows this list):
- Disable disk auto-assignment on all nodes.
- Assign all even-bay disks (0, 2, 4 ... 22) to node 2 or node 4, depending on the pair.
- Assign all odd-bay disks (1, 3, 5 ... 23) to node 1 or node 3.
- Reserve sufficient spares on each node.
- Create aggregates on each node. With the disk assignment above, all aggregates can be built identically (same number of disks, RAID groups, etc.).
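
A rough clustershell sketch of the above (node names, disk names, aggregate names, and the disk/RAID counts are all placeholders; pull the real disk names from "storage disk show" first, since disk naming differs by version):

  # turn off auto-assignment on every node
  storage disk option modify -node * -autoassign off

  # list unowned disks, then assign by bay: even bays to node2, odd bays to node1
  storage disk show -container-type unassigned
  storage disk assign -disk 1.0.0 -owner node2
  storage disk assign -disk 1.0.1 -owner node1
  # (repeat for the remaining bays and shelves)

  # confirm spares before building aggregates
  storage disk show -container-type spare

  # identical aggregates on each node; 56 disks = 4 RAID-DP groups of 14 (illustrative)
  storage aggregate create -aggregate aggr_sas_n1 -node node1 -diskcount 56 -raidtype raid_dp -maxraidsize 14

Repeat the assigns and the aggregate create on nodes 3/4 for the 7.2K shelves.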

When a disk fails, you will have to assign its replacement manually, since auto-assignment is disabled and more than one head owns disks in each shelf.
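
With auto-assign off, the replacement will show up as unowned until you claim it. Something like this (disk and node names are placeholders):

  storage disk show -container-type unassigned
  storage disk assign -disk <replacement-disk> -owner <original-owner>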

You should be able to get similar performance from each node. Spread the shelf stacks across as many SAS adapters as you can.
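
One way to check how the shelves are spread across adapters is the nodeshell sysconfig output (node name is a placeholder):

  system node run -node node1 -command "sysconfig -a"

The adapter sections show which shelves sit behind each SAS port, so you can confirm the stacks are balanced.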

--tmac

Tim McCarthy
Principal Consultant



On Thu, May 7, 2015 at 9:50 AM, Basil <basilberntsen@gmail.com> wrote:
Hi folks,

If I have a 4 node cluster (running 8.2 CDOT) and I want to install 5 shelves of 10k SAS and 5 shelves of 7.2k drives, what would you recommend I do regarding aggregates? Normally on a 2 node cluster, we'd put all the SAS on one node and all the SATA on the other one. In this case, should we ensure that each node has an aggregate and attempt to avoid using the cluster network as much as possible?

The work being done is file shares and NDMP backups, both of which tend to run out of CPU and memory before they redline the backend disks. We want to use the processing of all four nodes, so that means we'll be spreading the LIFs across them all.

Thanks!

_______________________________________________
Toasters mailing list
Toasters@teaparty.net
http://www.teaparty.net/mailman/listinfo/toasters