
From: Jörn Engel (joern_at_wohnheim.fh-wedel.de)
Date: Wed 24 Nov 2004 - 16:26:13 GMT


On Wed, 24 November 2004 14:02:07 +0100, Herbert Poetzl wrote:
> On Wed, Nov 24, 2004 at 05:01:47PM +1300, Sam Vilain wrote:
>
> pages to be swapped out cannot easily be assigned
> to a context; this is different for pages getting
> paged in ...

Or any page fault taken on behalf of the context, for that matter.
There is no fundamental difference between swapping pages back in,
faulting in a recently started openoffice, malloc&memset, ...
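
The accounting could therefore hook a single point in the fault path
and simply count per context, without caring where the page comes
from.  A rough sketch, with made-up names (vx_meminfo and
vx_acc_fault are not the real vserver interfaces):

/* Hypothetical per-context fault counter -- names invented for
 * illustration, not the actual vserver structures. */
struct vx_meminfo {
	unsigned long faults;	/* all faults, whatever their source */
};

/* One hook at the end of the fault path.  Swap-in, file-backed
 * fault-in and anonymous faults (malloc&memset) all pass through
 * here and get accounted identically.  Locking omitted. */
static inline void vx_acc_fault(struct vx_meminfo *vxm)
{
	vxm->faults++;
}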

> > Here's a thought about an algorithm that might work. This is all
> > speculation without much regard to the existing implementations out
> > there, of course. Season with grains of salt to taste.
> >
> > Each context is assigned a target RSS and VM size. Usage is counted a
> > la disklimits (Herbert - is this already done?), but all complex
>
> yep, not relative as with disklimits, but absolute,
> and in the same way the kernel accounts RSS and VM
>
> > recalculation happens when something tries to swap something else out.
> >
> > As well as memory totals, each context also has a score that tracks how
> > good or bad they've been with memory. Let's call that the "Jabba"
> > value.
> >
> > When swap displacement occurs, it is first taken from disproportionately
> > fat jabbas that are running on nearby CPUs (for NUMA). Displacing
> > others' memory makes your context a fatter jabba too, but taking from
> > jabbas that are already fat is not as bad as taking it from a hungry
> > jabba. When someone takes your memory, that makes you a thinner jabba.
> >
> > This is not the same as simply a ratio of your context's memory usage to
> > the allocated amount. Depending on the functions used to alter the
> > jabba value, it should hopefully end up measuring something more akin to
> > the amount of system memory turnover a context is inducing. It might
> > also need something to act as a damper to pull a context's jabba nearer
> > towards the zero point during lulls of VM activity.
> >
> > Then, if you are a fat jabba, maybe you might end up getting rescheduled
> > instead of getting more memory whenever you want it!
>
> thought about a simpler approach, with a TB (token
> bucket) for the actual page-ins, so that every page-in
> will consume a token, and you get a number per interval,
> as usual ...

The token bucket misses a few corner cases, but I cannot think of
anything better.  More complicated approaches would just miss
different corner cases, which is not a real advantage.

With the simple token bucket approach, any I/O caused on behalf of a
process gets accounted.  Updatedb would be a huge offender, but that
looks more like a feature than a bug.
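
For what it's worth, the bucket itself would be trivial; something
along these lines, with all names and constants invented for
illustration rather than taken from existing vserver code:

/* Per-context token bucket for page-ins -- a sketch, not real code. */
struct vx_pagein_tb {
	int tokens;	/* current fill level */
	int fill;	/* tokens added per interval */
	int max;	/* bucket size */
};

/* called once per accounting interval, e.g. from a timer */
static void tb_refill(struct vx_pagein_tb *tb)
{
	tb->tokens += tb->fill;
	if (tb->tokens > tb->max)
		tb->tokens = tb->max;
}

/* called for every page-in done on behalf of the context, no matter
 * whether it is a swap-in, a file fault or updatedb churning through
 * the page cache.  Returns 0 when the context has run out of tokens
 * and should be throttled. */
static int tb_charge(struct vx_pagein_tb *tb)
{
	if (tb->tokens <= 0)
		return 0;
	tb->tokens--;
	return 1;
}

The interesting policy question is what to do when tb_charge() returns
0: just account the overrun, put the task to sleep for a while, or
feed it back into the scheduler as Sam suggested.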

Jörn

-- 
Victory in war is not repetitious.
-- Sun Tzu
_______________________________________________
Vserver mailing list
Vserver_at_list.linux-vserver.org
http://list.linux-vserver.org/mailman/listinfo/vserver

