Hi there
We have an A300 connected to two FC fabrics with 2 x 8G FC per controller node.
We are doing tests on an “older” HP C7000 with VirtualConnect modules and with blades that have 2 x 8G FC mezzanine.
The VCs are connected to the fabrics with two 8G FC uplinks per fabric.
So the throughput to a given LUN would be 16 Gbps, as the other two links are standby.
The blade is running ESXi 7.0, and on top of this we are running a virtual Windows Server 2016 machine from which the tests are run.
There is no one else on the A300. We have created a standard LUN in a volume and presented it to ESXi, where we created a VMFS datastore; the VM is then given a VMDK-based disk.
It seems that no matter what we adjust, we hit a limit of about 70,000 IOPS on the disks.
We are able to run other workloads while testing at 70,000 IOPS, and we can load the system even further, so the NetApp does not seem to be the limiting factor. We also cannot see much load on the system in the NetApp Harvest/Grafana setup we have running.
We of course make sure that the tests are not served from cache on the host.
The specific test uses a 64k block size.
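As a rough sanity check (the numbers below are assumptions, not measured figures): 8G FC carries roughly 800 MB/s of payload per link after 8b/10b encoding, so two active links would give about 1,600 MB/s to a LUN. A quick back-of-envelope calculation shows what that path can sustain at a 64k block size:

```python
# Rough sanity check: FC path bandwidth ceiling vs. observed IOPS.
# Assumptions (not measured): ~800 MB/s usable payload per 8G FC link
# after 8b/10b encoding, two active links to the LUN, 64 KB I/O size.

USABLE_MB_PER_8G_LINK = 800          # approximate payload bandwidth, MB/s
ACTIVE_LINKS = 2                     # the other two links are standby
BLOCK_KB = 64                        # test block size

path_mb_per_s = USABLE_MB_PER_8G_LINK * ACTIVE_LINKS
max_iops = path_mb_per_s * 1000 // BLOCK_KB   # MB/s -> KB/s -> IOPS

print(f"Path bandwidth: ~{path_mb_per_s} MB/s")           # ~1600 MB/s
print(f"Ceiling at {BLOCK_KB} KB blocks: ~{max_iops} IOPS")  # ~25000 IOPS
```

Under these assumptions the 64k ceiling on two 8G links is only around 25,000 IOPS, so if the tool really reports 70,000 IOPS at 64k, it may be worth double-checking the effective I/O size the tool issues, or whether more than two links are actually passing traffic.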
Questions:
Any suggestions are welcome, as are any insights into how many IOPS one should expect from an A300 with 24 x 3.7T SSDs.
/Heino
--