Randy -
Have you heard of FlexGroups? They're in a later rev of ONTAP and let you collapse disparate volumes on different aggregates into a single large volume; you can even spread the workload across controllers. For instance, you could create two 67-drive aggrs, one on each controller, then build a FlexGroup from two member volumes on each controller into a single usable namespace per application (say, one for your CIFS/SMB use case and one for your NFSv3 use case). A sketch of what that might look like is below. The only difficulty would lie in obtaining swing storage to move the existing vols over while you reconfigure the current storage; you can migrate the vols with the 'vol move' command from one aggregate to another, etc.
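For example, the FlexGroup creation and a volume move might look something like this (hypothetical SVM, aggregate, and volume names, and only a sketch; double-check the exact options against the docs for your ONTAP rev):

Create a FlexGroup with two member volumes on each of two aggregates, one aggregate per controller:

    volume create -vserver svm_files -volume fg_files -aggr-list aggr_node1,aggr_node2 -aggr-list-multiplier 2 -size 20TB -junction-path /fg_files

Move an existing FlexVol to a different aggregate non-disruptively to free up swing space:

    volume move start -vserver svm_files -volume users_vol -destination-aggregate aggr_node2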
Something to think about anyway, and it does allow for more flexibility with how you use the disk.
Here’s a link to the TR on FlexGroups: https://www.netapp.com/us/media/tr-4571.pdf
I would suggest upgrading to the latest OS rev, 9.7P5, so you have access to qtrees, quotas and so on with FlexGroups, as these features were added incrementally to the FlexGroup toolkit.
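Once you're on a rev that supports them, qtrees and quotas on a FlexGroup work much like they do on a FlexVol. A rough sketch with hypothetical names (verify the syntax against the 9.7 docs):

Create a qtree inside the FlexGroup:

    volume qtree create -vserver svm_files -volume fg_files -qtree users

Add a tree quota rule and turn quotas on for the volume:

    volume quota policy rule create -vserver svm_files -policy-name default -volume fg_files -type tree -target users -disk-limit 500GB
    volume quota on -vserver svm_files -volume fg_files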
Anthony Bar
On Jun 29, 2020, at 10:11 AM, Rue, Randy <randyrue@gmail.com> wrote:
Hello All,
We're running a pair of FAS8020s with Flash Cache modules, 600GB SAS disks, and DOT 9.1P6 to serve two SVMs: one as an NFSv3 storage resource for our Xen VM farm and one for NFS/CIFS file storage for our users.
We currently run one SVM on one node and one on the other to avoid any impact from mixing IO loads on the cache modules. We also have two aggregates, one for the VMs and one for the file server.
We're getting tight on space and also getting tired of needing to withhold unallocated disks and hand them out as needed to each aggregate. Right now we have too much free space on one aggregate and not enough on the other. We're leaning toward using the system the way an enterprise storage cluster is meant to be used and migrating both workloads to a single aggregate.
On the other hand, we've been bitten before when mixing different performance loads on the same gear, though admittedly that was for a larger pool of very different loads on SATA disks in a SAN behind a V-Series filer.
We're not much worried about cache poisoning as long as we keep each SVM on its own node. Our bigger concern is mixing loads at the actual disk IO level. Does anyone have any guidance? The total cluster is 144 600GB 10K RPM SAS disks, and we'd be mixing Xen VMs with NFS/CIFS file services that tend to run heavy on getattr reads.
Let us know your thoughts and if you need any more information,
Randy in Seattle