Hi,
I'm thinking of implementing quotas on the filer, but the one thing that concerns me is NFS performance. Can anyone quantify the quota system's degradation effect on NFS performance? Please elaborate as much as you can.
Thanks in advance.
Igor
----- Original Message -----
From: "Igor Schein" <igor@txc.com>
To: <toasters@mathworks.com>
Sent: Thursday, November 30, 2000 4:07 PM
Subject: quota overhead
Hi,
I'm thinking of implementing quotas on the filer, but the one thing that concerns me is NFS performance. Can anyone quantify the quota system's degradation effect on NFS performance? Please elaborate as much as you can.
The impact on performance is minimal: less than 10%. Just turn on the quotas and see for yourself; if you don't like the impact, you can always turn them back off.
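If you do try it, a rough way to do the before-and-after comparison is something like the following on the filer console (just a sketch: "vol0" is a placeholder for whichever volume you care about, and "quota on" kicks off a quota initialization scan first, so let that finish before judging the numbers):

filer> sysstat 5          [note baseline CPU% and NFS ops/s under normal load]
filer> quota on vol0
filer> sysstat 5          [compare CPU% and NFS ops/s with quotas active]
filer> quota off vol0     [back it out if you don't like what you see]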
There is a bug, 17233, which says:
The filer uses a quota hash table with 1024 elements. On a filer with many quotas, the code which looks up quotas needs to scan long lists of quotas, which can result in heavy CPU usage, particularly while resizing quotas, but also during ordinary file system operations and while handling rquota requests on behalf of client quota commands.
I don't see any indication that this bug has been fixed or that more elements have been added in more recent releases such as version 6.0.
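To see why a fixed-size table matters, here is a rough illustration. This is only my sketch of a 1024-bucket chained hash, not NetApp's actual code; all the names and fields in it are made up.

/*
 * Illustration only: a fixed 1024-bucket hash with chained entries,
 * as bug 17233 describes.  With N quota entries the chains average
 * N/1024 entries, so a lookup may walk ~N/1024 entries instead of one
 * or two.  25,000 quotas means chains of roughly 24 entries.
 */
#include <stdio.h>
#include <stdlib.h>

#define QUOTA_HASH_SIZE 1024            /* fixed size, per the bug report */

struct quota_entry {
    unsigned int uid;                   /* quota target (a uid, for a user quota) */
    long long    disk_limit;            /* limits omitted from the illustration */
    struct quota_entry *next;           /* chain within one bucket */
};

static struct quota_entry *quota_hash[QUOTA_HASH_SIZE];

static unsigned int quota_bucket(unsigned int uid)
{
    return uid % QUOTA_HASH_SIZE;
}

static void quota_insert(unsigned int uid, long long limit)
{
    struct quota_entry *e = malloc(sizeof *e);
    e->uid = uid;
    e->disk_limit = limit;
    e->next = quota_hash[quota_bucket(uid)];
    quota_hash[quota_bucket(uid)] = e;
}

/*
 * Called whenever the filer needs to charge an operation to a quota;
 * per the bug report, that includes ordinary file system operations
 * and rquota requests from client quota commands.
 */
static struct quota_entry *quota_lookup(unsigned int uid, int *compares)
{
    struct quota_entry *e;
    for (e = quota_hash[quota_bucket(uid)]; e != NULL; e = e->next) {
        (*compares)++;
        if (e->uid == uid)
            return e;
    }
    return NULL;
}

int main(void)
{
    int n_quotas = 25000;               /* e.g. a filer with 25K quota entries */
    int i, compares = 0;

    for (i = 0; i < n_quotas; i++)
        quota_insert((unsigned int)i, -1);

    quota_lookup(12345, &compares);
    printf("%d quotas, %d buckets: %d compares for one lookup\n",
           n_quotas, QUOTA_HASH_SIZE, compares);
    return 0;
}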
Bruce
Igor Schein <igor@txc.com> wrote:
I'm thinking of implementing quotas on the filer, but the one thing that concerns me is NFS performance. Can anyone quantify the quota system's degradation effect on NFS performance? Please elaborate as much as you can.
and sirbruce@ix.netcom.com (Bruce Sterling Woodcock) replied:
<
< The impact on performance is minimal: less than 10%. Just
< turn on the quotas and see for yourself; if you don't like the impact,
< you can always turn them back off.
That's good advice. In particular, you can set up quotas something like those you were considering, but with no limits actually biting, e.g.
# Monitor usage of all quota trees on the root volume
*    tree               -    -
# and on the other volume(s)
*    tree@/vol/trunk    -    -
*    tree@/vol/branch   -    -
*    tree@/vol/twig     -    -
# and monitor usage by uid in the /homes quota tree
*    user@/homes        -    -
and then go "quota on [volume]". This will give you all the performance overhead without having to work out what the limits should actually be. [And of course, the results of "quota report" will then come in handy for working out those limits if you decide to go ahead!]
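On the console that amounts to something like the following (volume names are the ones from the example above, and I'm assuming the root volume is called vol0; the exact "quota report" columns vary by release):

filer> quota on vol0
filer> quota on trunk
filer> quota on branch
filer> quota on twig
filer> quota report       [per-tree and per-user usage, with no limits enforced]

"quota off <volume>" undoes it just as easily.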
< There is a bug, 17233, which says:
<
<    The filer uses a quota hash table with 1024 elements. On a filer
<    with many quotas, the code which looks up quotas needs to scan
<    long lists of quotas, which can result in heavy CPU usage,
<    particularly while resizing quotas, but also during ordinary file
<    system operations and while handling rquota requests on behalf
<    of client quota commands.
<
< I don't see any indication that this bug has been fixed or that
< more elements have been added in more recent releases such
< as version 6.0.
I don't think you'll see this sort of effect unless you have massive numbers of entries in the quota database. I certainly don't with >5K entries on the F740 I look after, nor do I hear of our mail group doing so with >25K entries on theirs.
Eirik Fuller is the expert on this sort of thing, and I and others have drawn his attention to this thread, but I gather that he is currently performing heroic surgery on a customer's filing system and is Not To Be Disturbed, or the scalpel may slip...
Apparently NetApp have built systems with enlarged quota hash tables on a one-off basis for customers with extraordinary requirements.
Chris Thompson
University of Cambridge Computing Service,    Email: cet1@ucs.cam.ac.uk
New Museums Site, Cambridge CB2 3QG,          Phone: +44 1223 334715
United Kingdom.