"Randy" == Randy Rue randyrue@gmail.com writes:
Randy> We're running a pair of FAS8020's with Flash Cache modules, SAS
Randy> 600GB disks and DOT9.1P6 to serve two SVM's, one for an NFSv3
Randy> storage resource for our Xen VM farm and one for NFS/CIFS file
Randy> storage for our users.
How many volumes are you running in each SVM? Basically, in my experience it's not too big a deal to have different SVMs sharing aggregates. The bigger deal is mixing SATA and SAS on the same head, at least with older versions. I'm running 9.3 these days and not seeing problems.
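If you want a quick look from the clustershell, something like this will list the volumes in each SVM along with the aggregate each one sits on (the SVM names below are just placeholders for yours):

    volume show -vserver svm_vmfarm -fields aggregate,size,used
    volume show -vserver svm_files -fields aggregate,size,used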
Randy> We currently run one SVM on one node and one on the other to
Randy> avoid any impact from mixing IO loads on the cache modules. We
Randy> also have two aggregates, one for the VMs and one for the file
Randy> server.
Are you running OnCommand Performance Manager and watching your loads and metrics? Personally, I don't think you'll run into any problems serving data from either SVM on either head or either aggregate.
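Even without OCPM, a quick sample from the clustershell gives you a feel for per-node load; something like this (the interval and iteration values are just examples, tune to taste):

    statistics show-periodic -interval 5 -iterations 12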
Randy> We're getting tight on space and also getting tired of needing
Randy> to withhold unallocated disks and hand them out as needed to
Randy> each aggregate. Right now we have too much free space on one
Randy> aggregate and not enough on the other. We're leaning toward
Randy> using the system like an enterprise storage cluster is meant to
Randy> be and migrating both to a single aggregate.
I'd just move some vols from the full aggregate to the emptier one. Using Performance Advisor you can look for your lower-performing but higher-disk-usage vol(s) to move over and balance the load.
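For what it's worth, a nondisruptive move is basically a one-liner; something along these lines, with made-up SVM/volume/aggregate names:

    volume move start -vserver svm_files -volume vol_users01 -destination-aggregate aggr_vm
    volume move show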
Randy> On the other hand, we've been bitten before when mixing
Randy> different performance loads on the same gear, admittedly that
Randy> was for a larger pool of very different loads on SATA disks in
Randy> a SAN behind a V-Series filer.
That is a different situation, since a NetApp V-Series filer can't really control the backend storage nearly as well.
Randy> We're not much worried about cache poisoning as long as we keep
Randy> each SVM on each node. Our bigger concern is mixing loads at
Randy> the actual disk IO level. Does anyone have any guidance? The
Randy> total cluster is 144 600GB SAS 10K RPM disks and we'd be mixing
Randy> Xen VMs and NFS/CIFS file services that tend to run heavy on
Randy> "get attr" reads.
Randy> Let us know your thoughts and if you need any more information,
How many volumes, and how many files per volume, do you have? I would expect the Xen VMs not to generate nearly as much load, and I'd expect the getattr() traffic to depend heavily on how big and busy those volumes are.
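The inode counts per volume should give a rough idea of the file counts; something like this, again with a placeholder SVM name:

    volume show -vserver svm_files -fields files,files-used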
What sort of load do you see on your 8020 pair? Can you post some output of 'statistics aggregate show' and 'statistics volume show'? Basically, just get some data before you start moving stuff around.
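For example, something along these lines (placeholder SVM name; exact options can vary a bit by ONTAP version):

    statistics aggregate show
    statistics volume show -vserver svm_files
    storage aggregate show -fields availsize,percent-used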
John