Hi there
We have an A300 connected to two FC fabrics with 2 x 8G FC per controller node.
We are doing tests on an “older” HP C7000 with VirtualConnect modules and blades that have a 2 x 8G FC mezzanine card.
The VCs are connected into the fabrics with two 8G FC links per fabric.
So the throughput to a given LUN would be 16 Gbps, as the other two links are standby.
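For reference, the bandwidth math we are assuming for those two active paths is roughly this; a sketch only, where the ~800 MB/s of usable payload per 8G link is an approximation that ignores the exact encoding and framing overhead:

```python
# Rough usable bandwidth over the two active 8G FC paths.
# ~800 MB/s per 8G link is an approximation (encoding/framing overhead).
usable_mb_per_8g_link = 800   # MB/s, approximate
active_paths = 2              # the other two paths are standby

total_mb_s = active_paths * usable_mb_per_8g_link
print(f"Usable bandwidth to the LUN: ~{total_mb_s} MB/s")  # ~1600 MB/s
```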
The blade is running ESXi 7.0, and on top of this we are running a virtual Windows Server 2016 machine from which the tests are run.
There is no one else on the A300. We have created a standard LUN in a volume and presented it to ESXi, where we created a VMFS datastore; the VM is then presented with a VMDK-based disk.
It seems that no matter what we adjust, we hit a limit at about 70,000 IOPS on the disk.
We are able to run other workloads alongside the 70,000 IOPS test and load the system even more, so the NetApp does not seem to be the limiting factor. We also do not see much load on the system in the NetApp Harvest/Grafana monitoring we have running.
We of course make sure that the tests we do are not served from the host's cache.
The specific test is with 64k block size.
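To relate an IOPS figure at a given block size back to the path bandwidth, we use roughly this conversion; again only a sketch, where the ~1600 MB/s is just the two active 8G paths from the calculation above:

```python
# Rough IOPS ceiling implied by a usable bandwidth and a block size
# (decimal units throughout, so treat the result as an estimate only).
def iops_ceiling(bandwidth_mb_s: float, block_kb: float) -> float:
    return bandwidth_mb_s * 1000 / block_kb

# Example: two active 8G paths (~1600 MB/s) with 64k I/Os.
print(f"~{iops_ceiling(1600, 64):,.0f} IOPS")  # ~25,000
```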
Questions:
* Can there be an IOPS limit in the C7000 setup itself? PCI-bridge stuff, perhaps?
* Does it make sense to add more paths, on the NetApp and/or the C7000 side? (Each blade can only have two ports.)
* Would it make sense to upgrade the FC links to 16G?
* We have adjusted the queue depth as per NetApp’s recommendations. As far as I know there is no way to adjust QD on the ONTAP node itself; is this correct? (See the queue-depth sketch after this list.)
* The attached shelf is IOM6-based; would it make sense to upgrade it to IOM12?
* Would it make sense to cable the IOM12 in a Quad-Path setup?
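On the queue-depth question above, the way we reason about it is Little's law: achievable IOPS per LUN is roughly the number of outstanding I/Os divided by the per-I/O latency. A minimal sketch, where both input values are assumed examples rather than measurements from our setup:

```python
# Little's law for a single LUN: IOPS ~= outstanding I/Os / per-I/O latency.
# The values below are illustrative assumptions, not measurements.
def iops_from_queue_depth(outstanding_ios: int, latency_ms: float) -> float:
    return outstanding_ios / (latency_ms / 1000.0)

# Example: a queue depth of 32 at ~0.5 ms per I/O tops out around 64,000 IOPS.
print(f"~{iops_from_queue_depth(32, 0.5):,.0f} IOPS")
```

In other words, for a fixed latency the effective queue depth sets a hard ceiling on IOPS, which is why we have been looking at the QD settings along the whole path.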
Any suggestions are welcome… also if you have any insights into how many IOPS one should expect from an A300 with 24 x 3.7T SSDs.
/Heino