From: Herbert Poetzl (herbert_at_13thfloor.at)
Date: Thu 21 Oct 2004 - 22:07:23 BST
On Thu, Oct 21, 2004 at 04:11:22PM -0400, Gregory (Grisha) Trubetskoy wrote:
> As promised, here are my vsched findings. My set up is
> util-vserver 0.30.195 and vs 1.9.3.
thanks! I'll try to comment where appropriate ...
> The token-bucket scheduler principle is pretty well explained here:
> vsched takes the following arguments:
>
> --fill-rate: The number of tokens that will be placed in the bucket.
>
> --interval: How often (the above specified) number of tokens will be
> placed. This is in jiffies. Through some googling I've found references
> that a jiffy is about 10ms, but it seems to me it's less than
> that. Not sure if the CPU speed has bearing on it. (Anyone know?)
>
> --tokens: The bucket starts out with this many tokens. Tokens_max
> takes precedence here, so it cannot be higher than tokens_max.
>
> --tokens_min: When a bucket is empty, the context is on hold _until_
> at least this many tokens are in the bucket.
>
> --tokens_max: The size of the bucket. When tokens aren't being used,
> the bucket will be getting fuller and fuller, but up to this value. So
> in effect this is your CPU burst parameter.
>
> --cpu_mask: This is obsolete, but I've found the current vsched is a
> little picky and will segfault if you omit parameters, so I always
> specified 0 here.
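The interplay of these parameters can be sketched with a small simulation. This is only an illustration of the token-bucket idea as described above (the class, its names, and the tick loop are mine, not taken from the kernel source):

```python
# Minimal sketch of a vsched-style token bucket (illustrative only).

class TokenBucket:
    def __init__(self, fill_rate, interval, tokens, tokens_min, tokens_max):
        self.fill_rate = fill_rate    # tokens added per refill
        self.interval = interval      # refill period, in ticks (jiffies)
        self.tokens_min = tokens_min  # fill level needed to leave "hold"
        self.tokens_max = tokens_max  # bucket capacity (burst limit)
        self.tokens = min(tokens, tokens_max)  # tokens_max takes precedence
        self.on_hold = False

    def tick(self, t, running):
        # Refill every `interval` ticks, capped at tokens_max.
        if t % self.interval == 0:
            self.tokens = min(self.tokens + self.fill_rate, self.tokens_max)
        # A context on hold stays off the CPU until tokens_min is reached.
        if self.on_hold:
            if self.tokens >= self.tokens_min:
                self.on_hold = False
            else:
                return False  # still not schedulable this tick
        # A running (scheduled) process consumes one token per tick.
        if running:
            if self.tokens > 0:
                self.tokens -= 1
            else:
                self.on_hold = True
                return False
        return True

bucket = TokenBucket(fill_rate=30, interval=100, tokens=100,
                     tokens_min=30, tokens_max=200)
# A process that wants the CPU on every tick for 1000 ticks:
ran = sum(bucket.tick(t, running=True) for t in range(1, 1001))
print(ran)
```

With these numbers the context burns its initial 100 tokens, then settles into running roughly 30 ticks out of every 100, with hold periods in between.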
> According to the VServer paper, "At each timer tick, a running process
> consumes exactly one token from the bucket". Here running means actually
> needing the CPU as opposed to "running" as in "existing". Most processes
> are not running most of the time, e.g. an httpd waiting on a socket isn't
> running, even though ps would list it.
processes can have various states, R (runnable), S (sleeping)
T,Z,D ... (see man ps(1)) and processes in 'R' state can be
scheduled (running) or not scheduled (waiting to be run)
those which are scheduled (i.e. running on a cpu) will consume
one token for every tick ...
> A token is quite a bit of CPU time (again I'm not sure if this is CPU
> speed dependent, my tests were on a 2.8GHz Xeon). Typing "python" on the
> command line (which is a huge operation IMHO) consumes 17 tokens in my
> tests. Having 100000 tokens in your bucket is probably sufficient for a
> medium size compile job.
the ticks (or jiffies for now) are generated at a (usually)
constant rate called HZ, which was 100 for 2.4 and
typically is 1000 for 2.6, so you can assume you get a tick
every 1ms (or 1000 ticks each second)
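Putting Herbert's numbers together (the HZ values are the usual kernel defaults, not read from any particular build), the wall-clock size of a jiffy and of a 100-jiffy interval works out as:

```python
# Wall-clock length of a jiffy and of a 100-jiffy interval under common HZ values.
for hz in (100, 1000):            # 2.4 default vs. typical 2.6 default
    jiffy_ms = 1000 / hz          # one tick, in milliseconds
    interval_ms = 100 * jiffy_ms  # a --interval of 100 jiffies
    print(f"HZ={hz}: jiffy={jiffy_ms:g}ms, 100 jiffies={interval_ms:g}ms")
```

which matches the two figures seen in the thread: ~10ms per jiffy on 2.4, ~1ms on 2.6.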
> Here are some guidelines. All this is very much unscientific and without a
> lot of testing or theory behind it, so if someone has better guidelines,
> please pitch in.
> When trying to come up with a good setting in my environment (basically
> hosting), I was looking for values that would not cripple the snappiness
> of the server, but prevent people from being stupid (e.g. cat /dev/zero |
> bzip2 | bzip2 | bzip2 > /dev/null).
> The fill interval should be short enough to not be noticeable, so
> something like 100 jiffies. The fill rate should be relatively small,
> something like 30 tokens. Tokens_min seems like it should simply equal
> the fill rate. The tokens_max should be generous so that people can do
> short cpu-intensive things when they need them, so something like 10000.
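Since a running process spends one token per tick, these suggested values imply a sustained CPU share of fill_rate/interval, with tokens_max as the burst budget on top. This is a back-of-the-envelope reading of the scheme, not a guarantee from the code:

```python
# Rough implications of the suggested settings (one token == one running tick).
fill_rate, interval = 30, 100
tokens_max = 10000

share = fill_rate / interval  # sustained fraction of one CPU
burst_ticks = tokens_max      # a full bucket buys this many extra running ticks

print(f"sustained share: {share:.0%}")  # 30%
print(f"max burst: {burst_ticks} ticks "
      f"(~{burst_ticks / 1000:.0f}s of solid CPU at HZ=1000)")
```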
> You can see current token stats by looking at
> on the mother server. (If fill_rate is 115 no matter what you do, see my
> vsched posting earlier in the list).
> You can also use vsched to pace any cpu intensive command, e.g.:
> vcontext --create -- \
> vsched --fill-rate 30 \
> --interval 100 \
> --tokens 100 \
> --tokens_min 30 \
> --tokens_max 200 \
> --cpu_mask 0 -- /bin/my_cpu_hog
> While playing with this stuff I've run into situations where a context has
> no tokens left, at which point you cannot even kill the processes in it.
> Don't panic - you can always reenter the context and call vsched with new
> parameters.
yes, this happens if the hard scheduler is actually enabled;
if either the minimum has not been reached yet, or the context
is paused (a special flag), then the process will enter the
new 'H' (on hold) state, which doesn't allow it to do anything
until the minfill has been reached again ...
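the length of such a hold follows directly from the refill parameters: starting from an empty bucket, climbing back to tokens_min takes ceil(tokens_min / fill_rate) refill intervals (a sketch assuming no tokens are consumed while on hold; the helper name is mine):

```python
import math

def hold_ticks(tokens_min, fill_rate, interval):
    # refills needed to climb from 0 back to tokens_min, times the refill period
    return math.ceil(tokens_min / fill_rate) * interval

print(hold_ticks(tokens_min=30, fill_rate=30, interval=100))   # one refill: 100 ticks
print(hold_ticks(tokens_min=100, fill_rate=30, interval=100))  # four refills: 400 ticks
```

so with the guideline values above (tokens_min equal to the fill rate), a starved context comes back after at most one interval.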
> I think that's about it.
I'm pretty sure it does,
> Vserver mailing list