Can anyone recommend a formula to use to determine the "maxfiles" size? Our files are mostly 64 KB HTML text documents. Thank you, Mike Ball
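[A back-of-the-envelope sketch, not an official NetApp formula: divide the volume's usable capacity by the expected average file size, then multiply by a safety factor so the inode table isn't exhausted as the volume fills. The function name, the 2x headroom, and the 100 GB volume size below are all illustrative assumptions.]

```python
# Hedged sketch: estimate a maxfiles value from volume capacity and
# average file size. Not NetApp's method; just simple arithmetic.

def estimate_maxfiles(volume_bytes, avg_file_bytes, headroom=2.0):
    """Expected file count times a safety factor (headroom)."""
    return int(volume_bytes / avg_file_bytes * headroom)

# Example: a hypothetical 100 GB volume of ~64 KB HTML documents.
print(estimate_maxfiles(100 * 2**30, 64 * 2**10))  # -> 3276800
```

With 2x headroom, a 100 GB volume of 64 KB files works out to roughly 3.3 million files; compare that against the volume's current maxfiles setting before changing anything.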
-----Original Message-----
From: owner-toasters@mathworks.com [mailto:owner-toasters@mathworks.com] On Behalf Of sirbruce@ix.netcom.com
Sent: Saturday, June 26, 1999 9:53 AM
To: toasters@mathworks.com
Subject: Re: Increasing "maxfiles" and its effect on performance
On 06/25/99 16:40:20 you wrote:
What effect will increasing "maxfiles" have on the read/write performance of a volume? We are currently running Data ONTAP version 5.2.2P1 on an F740.
Very little, unless you set it to an extremely high value that is far, far more than you need. (Like, I dunno, 1 billion?) Even then I doubt it would be very noticeable unless the filer were heavily loaded. If you are running out of files on the filer (which is not unusual if the default setting was conservative and you've added a lot of space), you shouldn't think twice about upping the maxfiles value.
I would guess (one of the NetApp WAFL experts could correct me on this) that since maxfiles reserves some space in the filesystem for metadata, slowly increasing maxfiles over time (rather than in one large increase) might "fragment" the metadata file across the filesystem, hurting performance somewhat. But then I seem to recall that even when you increase maxfiles, this space is not necessarily allocated right away, so maybe it doesn't matter and the metadata will only grow in a fragmentary fashion anyway. (Fragmentation is not really much of a problem on NetApp filers in any case, and is only really noticeable on very large files.)
Bruce