I wonder if (as on Unix) the rate of deletion will increase as the number of files left drops. In one case I found the time was being taken up reading and then re-writing the filesystem's directory index; as the number of files dropped from 100,000+, the rate of deletes increased.
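If you want to watch for that, something like this rough Python sketch would show it - assuming the qtree is NFS-mounted somewhere (the /mnt/qtree path, the interval, and so on are made up):

    # Rough sketch: periodically count the files left in an NFS mount
    # of the qtree and print the observed delete rate. Mount point and
    # sample interval are placeholders.
    import os
    import time

    MOUNT = "/mnt/qtree"   # hypothetical NFS mount of the qtree
    INTERVAL = 60          # seconds between samples

    def count_files(root):
        # Walk the tree and count regular files. Slow on a huge tree,
        # but fine for a once-a-minute sample.
        return sum(len(files) for _, _, files in os.walk(root))

    prev = count_files(MOUNT)
    while prev > 0:
        time.sleep(INTERVAL)
        cur = count_files(MOUNT)
        print("%d files left, ~%.1f deletes/sec" % (cur, (prev - cur) / float(INTERVAL)))
        prev = cur

If the deletes/sec figure climbs as the file count falls, the directory index rewrite is probably the bottleneck.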

On Fri, Sep 25, 2015 at 7:19 AM, <basilberntsen@gmail.com> wrote:
Pretty sure the only shortcut is volume operations like qtree snapmirror.

Sent from my BlackBerry 10 smartphone on the Bell network.
From: Edward Rolison
Sent: Friday, September 25, 2015 8:48 AM
To: Basil B
Subject: Re: File delete rate

Bonus question would be - does that also apply if you do it via "qtree delete -f", or does the API delete run at about the same speed?

On 25 September 2015 at 13:31, <basilberntsen@gmail.com> wrote:
Yes, that seems plausible. Operations on large numbers of small files always take a long time.

Sent from my BlackBerry 10 smartphone on the Bell network.
From: Edward Rolison
Sent: Friday, September 25, 2015 8:00 AM
Subject: File delete rate

Apologies if this seems a bit vague - I'm in need of a quick sanity check.
I'm doing a qtree delete, and it's taking 'a while'. This seems to be because it's deleting files at a rate of 50-100 per second, and there are about a million files in the qtree - at that rate the whole delete works out to roughly 3 to 5.5 hours.

Is that a 'reasonable' rate of deletion? (This is using an API-based 'file-delete-file' operation, if that's relevant.)
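For context, the per-file call looks roughly like this with the NetApp Manageability SDK's Python bindings (a sketch only - the filer name, credentials, and file path below are made up):

    # Sketch of a single 'file-delete-file' ZAPI call via the NetApp
    # Manageability SDK. Filer, credentials, and path are placeholders.
    from NaServer import NaServer

    s = NaServer("filer.example.com", 1, 7)  # hypothetical filer, ONTAPI 1.7
    s.set_style("LOGIN")
    s.set_admin_user("root", "password")
    s.set_transport_type("HTTPS")

    # Each delete is its own API round trip, so a million files means
    # a million of these calls.
    out = s.invoke("file-delete-file", "path", "/vol/vol1/qt1/somefile")
    if out.results_status() != "passed":
        print("delete failed: " + out.results_reason())

Each delete being a separate round trip is presumably part of why the overall rate is what it is.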

_______________________________________________
Toasters mailing list
Toasters@teaparty.net
http://www.teaparty.net/mailman/listinfo/toasters