On 2020-06-29 19:10, Randy Rue wrote:
Hello All, We're running a pair of FAS8020s with Flash Cache modules, 600GB SAS disks, and ONTAP 9.1P6 to serve two SVMs: one as an NFSv3 storage resource for our Xen VM farm and one for NFS/CIFS file storage for our users.
cDOT 9.1P6 is a bit old. You really should upgrade, IMHO. Only good things can come out of that exercise for you.
With a FAS8020 there is no good reason to stick with two different aggregates in the way you describe in your post. Really not. There are no slow 7.2K RPM disks here, only (small) 10K RPM 600GB ones, and quite a lot of them too (144/2 = 72 per node?).
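If you want to sanity-check the current layout and the free-space imbalance before doing anything, something like this should show it (clustershell syntax from memory, so double-check the field names on your version):

    storage aggregate show -fields availsize,usedsize,percent-used
    storage aggregate show-spare-disks

That gives you free space per aggregate plus whatever spares are still sitting unassigned on each node.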
You can just as well have a single aggregate on each node. The only real risk is some anomaly that gets WAFL stuck in a single thread within a so-called aggregate affinity. But that's not likely to happen (it would really be a sort of WAFL bug in that case), and the risk gets smaller the higher the ONTAP version you're on. Honestly, the best right now is 9.7... it has some really nice fixes and performance tuning inside WAFL that were never there before. Trust me, it's been a long journey (one I've been on since 2014, more or less).
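If you do consolidate, the moves themselves are nondisruptive. A rough sketch, with made-up SVM/volume/aggregate names (not your real ones), would be:

    volume move start -vserver svm_files -volume vol_users -destination-aggregate aggr_node1
    volume move show
    storage aggregate delete -aggregate aggr_old
    storage aggregate add-disks -aggregate aggr_node1 -diskcount 24

i.e. move everything off one aggregate, delete it, let the freed disks zero out as spares, and grow the surviving aggregate with them. Afterwards I'd watch per-volume latency for a week or two with something like "qos statistics volume latency show" to confirm the mixed workloads aren't stepping on each other.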
I recommend you upgrade to at least 9.6Px now, unless I'm missing some caveat or showstopper.
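Before you plan the jump, it's worth confirming exactly where you are and letting Upgrade Advisor in Active IQ work out the path for you; roughly (again, syntax from memory):

    version
    cluster image show

That shows the running release on each node, and the automated update workflow can take it from there.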
I suppose you could go for FlexGroups, but... I don't think it would give you that much benefit. Not for the workloads you describe. And the FAS8020 isn't all that powerful anyway, so...
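For completeness, creating one on a two-node pair would look roughly like this (placeholder names and size, and check the docs for the member layout you actually want; also, if I remember right, SMB support for FlexGroups only arrived in later releases, so the CIFS side would need the upgrade first anyway):

    volume create -vserver svm_files -volume fg1 -aggr-list aggr_node1,aggr_node2 -aggr-list-multiplier 4 -size 20TB -junction-path /fg1

That spreads the member constituents across both nodes, but on a FAS8020 with these workloads I doubt you'd notice much difference over plain FlexVols.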
Hope this helps, /M
Randy Rue wrote:
We currently run one SVM on one node and one on the other to avoid any impact from mixing IO loads on the cache modules. We also have two aggregates, one for the VMs and one for the file server.
We're getting tight on space and also getting tired of withholding unallocated disks and handing them out as needed to each aggregate. Right now we have too much free space on one aggregate and not enough on the other. We're leaning toward using the system the way an enterprise storage cluster is meant to be used and migrating both to a single aggregate.
On the other hand, we've been bitten before when mixing different performance loads on the same gear; admittedly, that was for a larger pool of very different loads on SATA disks in a SAN behind a V-Series filer.
We're not much worried about cache poisoning as long as we keep each SVM on its own node. Our bigger concern is mixing loads at the actual disk IO level. Does anyone have any guidance? The total cluster is 144 600GB 10K RPM SAS disks, and we'd be mixing Xen VMs with NFS/CIFS file services that tend to run heavy on getattr reads.
Let us know your thoughts and if you need any more information,
Randy in Seattle