Hi Randy
We tend to split the disks between the controllers and allocate all of them (minus spares) to the aggregates at setup. I don't like to run WAFL iron or reallocate if I can avoid it! I also don't like to put all the load onto a single controller.
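For context, that initial layout is nothing fancier than something like this (ONTAP 9 CLI; the node names, aggregate names and disk counts here are placeholders, so adjust for your own RAID group sizes and spares policy before running anything):

    storage aggregate create -aggregate aggr1_node01 -node node01 -diskcount 70 -raidtype raid_dp
    storage aggregate create -aggregate aggr1_node02 -node node02 -diskcount 70 -raidtype raid_dp
    storage aggregate show-spare-disks

The last command is just to confirm each node still keeps a couple of spares once the aggregates have taken everything else.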
As others have said, keep an eye on performance and move volumes around as needed to balance space and IOPS.
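Checking the balance and shifting a volume is quick enough from the CLI; a rough example with made-up names (volume move is non-disruptive, but keep an eye on the cutover):

    storage aggregate show -fields availsize,percent-used
    volume move start -vserver svm_vm -volume vm_ds01 -destination-aggregate aggr1_node02
    volume move show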
For your Xen NFS storage repositories, I would use several volumes. You might lose some dedupe efficiency, but you'll gain a lot of flexibility. Create a separate LIF for each volume and home each LIF on the node that owns the data. If you later need to move one of the volumes to the other controller's aggregate, you can move it and re-home its LIF to keep traffic off the cluster network without affecting traffic to the other volumes.
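The LIF side looks roughly like this (ONTAP 9 syntax; the SVM, LIF, port and address values below are placeholders for whatever fits your network):

    network interface create -vserver svm_vm -lif vm_ds01_lif -role data -data-protocol nfs -home-node node01 -home-port a0a-100 -address 10.10.10.11 -netmask 255.255.255.0
    network interface modify -vserver svm_vm -lif vm_ds01_lif -home-node node02 -home-port a0a-100
    network interface revert -vserver svm_vm -lif vm_ds01_lif

Mount each Xen storage repository against its own LIF's address; then when you move a volume and re-home its LIF (the modify/revert pair above), the NFS traffic follows the data and stays off the cluster interconnect.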
Good luck,
Steve
On Mon, 29 Jun 2020 at 18:13, Rue, Randy <randyrue@gmail.com> wrote:
Hello All,
We're running a pair of FAS8020s with Flash Cache modules, 600GB SAS disks, and Data ONTAP 9.1P6 to serve two SVMs: one providing an NFSv3 storage resource for our Xen VM farm and one providing NFS/CIFS file storage for our users.
We currently run one SVM on one node and one on the other to avoid any impact from mixing IO loads on the cache modules. We also have two aggregates, one for the VMs and one for the file server.
We're getting tight on space and also getting tired of needing to withhold unallocated disks and hand them out to each aggregate as needed. Right now we have too much free space on one aggregate and not enough on the other. We're leaning toward using the system the way an enterprise storage cluster is meant to be used and migrating both workloads to a single aggregate.
On the other hand, we've been bitten before when mixing different performance loads on the same gear, though admittedly that was for a larger pool of very different loads on SATA disks in a SAN behind a V-Series filer.
We're not much worried about cache poisoning as long as we keep each SVM on its own node. Our bigger concern is mixing loads at the actual disk I/O level. Does anyone have any guidance? The total cluster is 144 x 600GB 10K RPM SAS disks, and we'd be mixing Xen VMs with NFS/CIFS file services that tend to run heavy on getattr reads.
Let us know your thoughts and if you need any more information,
Randy in Seattle