"Heino" == Heino Walther hw@beardmann.dk writes:
Heino> We have a system that needs 100k+ IOPS but for some strange
Heino> reason we cannot get past about 70k IOPS.
I wish I had these types of problems!
Heino> The server is a Windows VM on ESXi with two FC HBAs running 8G.
Heino> (we have adjusted queue depth etc.)
What kind of ESXi hardware are you running on?
Heino> The server is connected to the NetApp via two Brocade 6510 switches.
Are the links to the A300/700 also 8Gb?
Heino> The clusters we have tested up against are an A300 with 24 x
Heino> 7TB SSD, and an A700 with a mixed setup of shelves (two disk
Heino> loops with quad cabling). But even up against the A700 we
Heino> cannot seem to get past the 70k IOPS.
Heino> Now we have read about the concurrency on a NetApp system,
Heino> where each volume is assigned a CPU core, which might be our
Heino> problem...
Heino> https://kb.netapp.com/Advice_and_Troubleshooting/Data_Storage_Software/ONTAP...
Heino> Our storage setup is a volume with a LUN which is presented to
Heino> the host, which then creates a VMFS as a datastore…
Why not just export an NFS volume instead? Then you get the advantage of being able to grow/shrink your datastore(s) as needed.
And could you export multiple disks from multiple datastores to Windows and then use RAID0 to join them into one big volume for your application?
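Just to make the RAID0 idea concrete: a simple stripe set maps consecutive chunks of the big logical volume round-robin onto the member disks, so I/O against the one volume fans out over all the LUNs (and therefore all the volumes/cores) behind it. A rough Python sketch of that mapping, with a made-up 64 KiB stripe unit and two members:

    # Toy illustration of RAID0 (striping) address mapping.  The stripe
    # size and member list below are made-up example values, nothing
    # NetApp- or Windows-specific.
    STRIPE_SIZE = 64 * 1024          # 64 KiB stripe unit (assumption)
    MEMBERS = ["Disk1", "Disk2"]     # the LUNs presented to Windows (assumption)

    def map_offset(logical_offset: int):
        """Return (member disk, offset on that member) for a logical offset."""
        stripe_index = logical_offset // STRIPE_SIZE
        within_stripe = logical_offset % STRIPE_SIZE
        member = MEMBERS[stripe_index % len(MEMBERS)]
        member_offset = (stripe_index // len(MEMBERS)) * STRIPE_SIZE + within_stripe
        return member, member_offset

    if __name__ == "__main__":
        # Consecutive 64 KiB chunks alternate between the members, which is
        # what spreads the queue depth (and the per-volume CPU work) around.
        for off in range(0, 4 * STRIPE_SIZE, STRIPE_SIZE):
            print(off, "->", map_offset(off))

In practice you'd of course build the striped volume with Disk Management or Storage Spaces on the Windows side rather than hand-rolling anything.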
Heino> We have tried to create two LUNs on separate volumes, and we
Heino> can pull out 70k on each, at the same time… which seems to
Heino> point towards the concurrency…
Heino> But is there a way around this? We would rather not have
Heino> multiple LUNs, but rather one LUN.
You're way out of my league performance-wise... *grin* so I don't have anything useful to offer in terms of tuning; I'm just wondering if you can turn this problem on its head somehow.
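If you want to double-check that it really is the per-volume concurrency and not something on the host side, a quick-and-dirty comparison like the Python sketch below might help: run it against a big test file on one LUN, then against files on two LUNs, and see whether the aggregate roughly doubles. Paths, sizes and thread counts are made-up, and the reads go through the OS cache, so treat it as a ballpark only; fio or diskspd with unbuffered/direct I/O is the proper tool for this.

    # Rough sanity check: random 4 KiB reads against one big test file per
    # LUN, from several threads, for a fixed number of seconds.  All paths
    # and tunables are made-up examples; this is not a real benchmark.
    import os, random, threading, time

    FILES = [r"D:\lun1\test.bin", r"E:\lun2\test.bin"]  # one large file per LUN (assumption)
    BLOCK = 4096
    THREADS_PER_FILE = 8
    SECONDS = 10

    counters = []

    def worker(path, counter):
        size = os.path.getsize(path)
        deadline = time.time() + SECONDS
        with open(path, "rb", buffering=0) as f:
            while time.time() < deadline:
                # aligned random 4 KiB read somewhere in the file
                f.seek(random.randrange(0, size - BLOCK) // BLOCK * BLOCK)
                f.read(BLOCK)
                counter[0] += 1

    threads = []
    for path in FILES:
        for _ in range(THREADS_PER_FILE):
            c = [0]                       # one counter per thread, summed at the end
            counters.append(c)
            t = threading.Thread(target=worker, args=(path, c))
            t.start()
            threads.append(t)

    for t in threads:
        t.join()

    print("aggregate IOPS (cached, rough):", sum(c[0] for c in counters) / SECONDS)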
And honestly, if you need this many IOPS, would it make sense to look more closely at ways to break the workload up, or maybe to use RAM disks and flush larger blocks to the NetApp?
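By "flush larger blocks" I mean something along the lines of the sketch below: stage the application's small writes in RAM (or on a RAM disk) and push them to the backing store as one big write, trading lots of tiny IOPS for fewer, larger ones. Purely illustrative, names made up, and no crash safety whatsoever:

    # Sketch of the "collect small writes, flush big blocks" idea.  The
    # 1 MiB batch size and the backing file are made-up examples.
    import io

    class CoalescingWriter:
        def __init__(self, backing_file, flush_bytes=1024 * 1024):
            self.backing = backing_file
            self.flush_bytes = flush_bytes
            self.buf = io.BytesIO()

        def write(self, record: bytes):
            # Small writes only land in memory until a batch is full.
            self.buf.write(record)
            if self.buf.tell() >= self.flush_bytes:
                self.flush()

        def flush(self):
            data = self.buf.getvalue()
            if data:
                self.backing.write(data)   # one large sequential write
                self.backing.flush()
                self.buf = io.BytesIO()

    if __name__ == "__main__":
        with open("backing.dat", "ab") as f:   # stand-in for the file on the LUN
            w = CoalescingWriter(f)
            for _ in range(100_000):
                w.write(b"x" * 512)            # many tiny application writes
            w.flush()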
Good luck!

John