Date: Fri, 21 Feb 2003 01:23:55 +1300
From: Sam Vilain
Reply-To: vserver@solucorp.qc.ca
To: vserver@solucorp.qc.ca
Subject: [vserver] O(1) s_context CPU scheduling

Hi all,

I have a preliminary patch against the Alan Cox kernel for vserver which adds O(1) scheduling. It is implemented with a per-s_context `token bucket' that counts the jiffies consumed by the processes belonging to each s_context.

The token bucket has the following tuning options:

  * Fill rate (N tokens per Y jiffies)
  * Bucket size

In the timer tick function, a token is removed from the bucket of whichever security context is currently running. At process rescheduling time, the number of tokens owed since the last refill is calculated and added to the bucket.

The token bucket is considered balanced when it is half full; in that state, the process being scheduled gets no priority bonus. If the bucket is completely full, the process gets a -5 priority bonus; if it is empty, a +15 penalty. The curve between those points is parabolic; I tried a geometric curve first, but it didn't seem steep enough for some simple test cases. (A rough sketch of this accounting follows my sig.)

The scheduling is not `hard'; rather, it relies on the feedback loop created between the token bucket and the process' priority. With only a few running processes in particular, it does not appear to do much, but you can watch the priority go up and down with `vtop'.

In theory, you could set the fraction of the CPU allocated to each s_context, to individually control the processor time given to each virtual server. However, I have yet to design a syscall interface for that, so for now each s_context is hardcoded to receive 1/4 of the CPU; this is set in kernel/sys.c.

The attached patch is against 2.4.20-ac2 with the 2.4.20ctx-16 patch applied.

I'd be interested in any comments, or suggestions on how best to leverage this feature from userspace!

--
Sam Vilain, sam@vilain.net
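
P.S. To make the mechanics above concrete, here is a rough, untested C sketch of the per-s_context token-bucket accounting. The struct and function names (ctx_sched, consume_token, recalc_tokens, effective_bonus) are illustrative only and do not match the identifiers in the actual patch; the -5/+15 bonus range and the half-full balance point come from the description above, and the exact parabolic curve is just one plausible reading of it.

/* Illustrative sketch of per-s_context token-bucket CPU accounting.
 * Names and the exact parabolic shape are assumptions, not the
 * identifiers or formula used in the real 2.4.20ctx patch.
 */

struct ctx_sched {
	int tokens;		 /* tokens currently in the bucket	*/
	int tokens_max;		 /* bucket size				*/
	int fill_rate;		 /* N: tokens credited per interval	*/
	int interval;		 /* Y: interval length, in jiffies	*/
	unsigned long last_fill; /* jiffies value at the last refill	*/
};

/* Called from the timer tick: charge one jiffy to the running context. */
static void consume_token(struct ctx_sched *cs)
{
	if (cs->tokens > 0)
		cs->tokens--;
}

/* Called at reschedule time: credit the tokens owed since last_fill,
 * capped at the bucket size.
 */
static void recalc_tokens(struct ctx_sched *cs, unsigned long now)
{
	unsigned long delta = now - cs->last_fill;

	cs->tokens += (delta / cs->interval) * cs->fill_rate;
	if (cs->tokens > cs->tokens_max)
		cs->tokens = cs->tokens_max;
	/* keep the remainder so partial intervals are not lost */
	cs->last_fill = now - (delta % cs->interval);
}

/* Map the bucket level to a priority adjustment: 0 at half full,
 * -5 (bonus) when full, +15 (penalty) when empty, parabolic in the
 * distance from the balance point.
 */
static int effective_bonus(struct ctx_sched *cs)
{
	int half = cs->tokens_max / 2;
	int n;

	if (cs->tokens >= half) {
		n = cs->tokens - half;		/* half..full -> 0..-5	*/
		return -(5 * n * n) / (half * half);
	}
	n = half - cs->tokens;			/* half..empty -> 0..+15 */
	return (15 * n * n) / (half * half);
}

At the endpoints this gives exactly -5 (tokens == tokens_max) and +15 (tokens == 0), and the squared term makes the correction gentle near the balance point but sharp near the extremes, which is the feedback behaviour described above.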
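P.P.S. On the hardcoded 1/4 share: since one token is debited per jiffy of CPU used and N tokens are credited per Y jiffies, the sustainable CPU fraction for a context works out to N/Y, so the bucket only holds its level while the context stays at or below that share. A hypothetical set of defaults expressing this (not the actual code in kernel/sys.c; the macro names and the 5-second bucket are made up for illustration):

/* Hypothetical defaults giving each s_context 1/4 of the CPU:
 * one token credited per four jiffies, one debited per jiffy run,
 * so the bucket only holds level at <= 25% CPU usage.
 */
#define CTX_FILL_RATE	1	 /* N tokens...			  */
#define CTX_INTERVAL	4	 /* ...per Y jiffies => N/Y = 1/4 */
#define CTX_TOKENS_MAX	(HZ * 5) /* bucket size: ~5s of CPU time  */

A per-context syscall interface would presumably just need to set these three fields for a given s_context.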