NetApp is rather quiet about how challenged it is at deleting large amounts of files, or total blocks, or both, depending on what version you are on.
And depending on what version you are on, you have multiple ways to manage it, or none at all.
This would be a good support call, to understand what you can and cannot do.
What you are probably seeing is something like this:
https://www.flickr.com/photos/28804666@N08/shares/t9s941
A funnier example here:
https://www.flickr.com/photos/28804666@N08/shares/x32YM1
A bump in read and write latency, which is quite odd, since you don't see much more throughput than you did before; maybe the client(s) also did a lookup storm to go find things to delete. In this example, yes, throughput for the cluster went up, but it's actually capable of ~4GB/sec, so I know that in my environment 1.4GB/sec is scratch.
But what happened under the covers in our release (9.1xx) is that the background delete workload clogs up the CP (consistency point) process, it chokes the whole box, and you see back-to-back (B2B) CPs as a result. There are some dials and bootargs to remediate this, and since applying them I can wipe out 16-20TB at once with no impact.
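For what it's worth, you can also take some pressure off from the client side while you sort out the support call. Here's a rough Python sketch of the idea: delete from a pre-built list of paths instead of crawling the tree (so you don't stack a lookup/readdir storm on top of the deletes), and pause between batches so the filer's CPs can catch up. The manifest file, batch size, and sleep values are made up for illustration, and none of this is the bootarg fix I mentioned; it's just a client-side band-aid.

```python
#!/usr/bin/env python3
"""Paced bulk delete over NFS -- a client-side sketch, not the filer-side fix.

Assumptions (not from the original post): the files to delete are listed in a
manifest file, one path per line, so the client never has to crawl the tree;
BATCH_SIZE and PAUSE_SECS are made-up starting points you would tune against
your own latency graphs.
"""
import os
import sys
import time

BATCH_SIZE = 500   # unlink this many files, then pause
PAUSE_SECS = 2.0   # breathing room so the filer can absorb the deletes

def paced_delete(manifest_path: str) -> None:
    """Read paths from the manifest and unlink them in paced batches."""
    deleted = 0
    batch = []
    with open(manifest_path) as manifest:
        for line in manifest:
            path = line.strip()
            if path:
                batch.append(path)
            if len(batch) >= BATCH_SIZE:
                deleted += _delete_batch(batch)
                batch = []
                time.sleep(PAUSE_SECS)
    if batch:
        deleted += _delete_batch(batch)
    print(f"deleted {deleted} files")

def _delete_batch(paths: list) -> int:
    """Unlink each path, tolerating files that are already gone."""
    count = 0
    for path in paths:
        try:
            os.unlink(path)
            count += 1
        except FileNotFoundError:
            pass  # already gone; fine for a cleanup job
        except OSError as err:
            print(f"skip {path}: {err}", file=sys.stderr)
    return count

if __name__ == "__main__":
    paced_delete(sys.argv[1])
```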
What we see now, with those dials and bootargs applied, on a SATA HA pair looks like this. We delete huge amounts of HBase data every night, so it's tight.
https://www.flickr.com/photos/28804666@N08/shares/E0fz56