[00:09] sannes (~ace@bjs1-dhcp153.studby.uio.no) left irc: Quit: Lost terminal
[00:09] mhepp (~mhepp@r72s22p13.home.nbox.cz) left irc: Remote host closed the connection
[00:36] riel (~riel@nat-pool-bos.redhat.com) left irc: Read error: Connection reset by peer
[00:45] riel (~riel@nat-pool-bos.redhat.com) joined #vserver.
[00:47] Nero (~nero@203.59.206.68) joined #vserver.
[01:05] Nero (~nero@203.59.206.68) left irc: Quit: Save me Jebus!
[01:12] alekibango (~john@b59.brno.mistral.cz) left irc: Read error: Connection reset by peer
[01:15] alekibango (~john@b59.brno.mistral.cz) joined #vserver.
[01:20] riel (~riel@nat-pool-bos.redhat.com) left irc: Remote host closed the connection
[01:21] netrose (~john877@cc-ubr03-24.171.20.14.charter-stl.com) left irc: Ping timeout: 483 seconds
[01:22] riel (~riel@nat-pool-bos.redhat.com) joined #vserver.
[01:35] Nick change: riel -> unriel
[01:38] alekibango (~john@b59.brno.mistral.cz) left irc: Quit: Client killed by consultant
[01:40] JonB (~jon@kg239.kollegiegaarden.dk) left irc: Quit: zzzzzzzz
[01:43] alekibango (~john@b59.brno.mistral.cz) joined #vserver.
[02:53] matta (matta@tektonic.net) joined #vserver.
[02:58] anyone around?
[03:11] surriel (~riel@66.92.77.98) left irc: Ping timeout: 483 seconds
[03:16] netrose (~john877@cc-ubr03-24.171.20.14.charter-stl.com) joined #vserver.
[03:43] serving (~serving@213.186.189.211) left irc: Ping timeout: 492 seconds
[04:17] Zoiah (Zoiah@81.17.52.139) got netsplit.
[04:17] ensc (~ensc@134.109.116.202) got netsplit.
[04:17] gaertner (~gaertner@212.68.83.129) got netsplit.
[04:17] Bertl_zZ (~herbert@212.16.62.51) got netsplit.
[04:17] mdaur_ (mdaur@80.145.113.103) got netsplit.
[04:17] mdaur_ (mdaur@80.145.113.103) returned to #vserver.
[04:17] Bertl_zZ (~herbert@212.16.62.51) returned to #vserver.
[04:17] gaertner (~gaertner@212.68.83.129) returned to #vserver.
[04:17] ensc (~ensc@134.109.116.202) returned to #vserver.
[04:17] Zoiah (Zoiah@81.17.52.139) returned to #vserver.
[04:46] Zoiah (Zoiah@81.17.52.139) got netsplit.
[04:46] ensc (~ensc@134.109.116.202) got netsplit.
[04:46] gaertner (~gaertner@212.68.83.129) got netsplit.
[04:46] Bertl_zZ (~herbert@212.16.62.51) got netsplit.
[04:46] mdaur_ (mdaur@80.145.113.103) got netsplit.
[04:47] mdaur_ (mdaur@80.145.113.103) returned to #vserver.
[04:47] Bertl_zZ (~herbert@212.16.62.51) returned to #vserver.
[04:47] gaertner (~gaertner@212.68.83.129) returned to #vserver.
[04:47] ensc (~ensc@134.109.116.202) returned to #vserver.
[04:47] Zoiah (Zoiah@81.17.52.139) returned to #vserver.
[04:53] Bertl around?
[05:06] Zoiah (Zoiah@81.17.52.139) got netsplit.
[05:06] ensc (~ensc@134.109.116.202) got netsplit.
[05:06] gaertner (~gaertner@212.68.83.129) got netsplit.
[05:06] Bertl_zZ (~herbert@212.16.62.51) got netsplit.
[05:06] mdaur_ (mdaur@80.145.113.103) got netsplit.
[05:07] mdaur_ (mdaur@80.145.113.103) returned to #vserver.
[05:07] Bertl_zZ (~herbert@212.16.62.51) returned to #vserver.
[05:07] gaertner (~gaertner@212.68.83.129) returned to #vserver.
[05:07] ensc (~ensc@134.109.116.202) returned to #vserver.
[05:07] Zoiah (Zoiah@81.17.52.139) returned to #vserver.
[05:14] netrose (~john877@cc-ubr03-24.171.20.14.charter-stl.com) left irc: Ping timeout: 480 seconds
[05:18] _Zoiah (Zoiah@81.17.52.139) joined #vserver.
[05:20] Zoiah (Zoiah@81.17.52.139) left irc: Ping timeout: 480 seconds
[05:34] serving (~serving@213.186.189.216) joined #vserver.
[05:42] NZ|Egor (~sgarner@210.54.177.190) joined #vserver.
[05:43] NZ|Egor (~sgarner@210.54.177.190) left #vserver.
[05:44] Simon (~sgarner@210.54.177.190) joined #vserver.
[05:46] Action: Simon prods Bertl_zZ
[06:06] ensc (~ensc@134.109.116.202) got netsplit.
[06:06] gaertner (~gaertner@212.68.83.129) got netsplit.
[06:06] Bertl_zZ (~herbert@212.16.62.51) got netsplit.
[06:06] mdaur_ (mdaur@80.145.113.103) got netsplit.
[06:07] mdaur_ (mdaur@80.145.113.103) returned to #vserver.
[06:07] Bertl_zZ (~herbert@212.16.62.51) returned to #vserver.
[06:07] gaertner (~gaertner@212.68.83.129) returned to #vserver.
[06:07] ensc (~ensc@134.109.116.202) returned to #vserver.
[07:07] netrose (~john877@cc-ubr03-24.171.20.14.charter-stl.com) joined #vserver.
[07:31] Simon (~sgarner@210.54.177.190) left irc: Ping timeout: 513 seconds
[07:33] Simon (~sgarner@apollo.quattro.net.nz) joined #vserver.
[07:43] ensc (~ensc@134.109.116.202) got netsplit.
[07:43] gaertner (~gaertner@212.68.83.129) got netsplit.
[07:43] Bertl_zZ (~herbert@212.16.62.51) got netsplit.
[07:43] mdaur_ (mdaur@80.145.113.103) got netsplit.
[07:44] mdaur_ (mdaur@80.145.113.103) returned to #vserver.
[07:44] Bertl_zZ (~herbert@212.16.62.51) returned to #vserver.
[07:44] gaertner (~gaertner@212.68.83.129) returned to #vserver.
[07:44] ensc (~ensc@134.109.116.202) returned to #vserver.
[07:57] ensc (~ensc@134.109.116.202) got netsplit.
[07:57] gaertner (~gaertner@212.68.83.129) got netsplit.
[07:57] Bertl_zZ (~herbert@212.16.62.51) got netsplit.
[07:57] mdaur_ (mdaur@80.145.113.103) got netsplit.
[07:57] gaertner (~gaertner@212.68.83.129) returned to #vserver.
[07:57] mdaur_ (mdaur@p509159E8.dip.t-dialin.net) joined #vserver.
[07:58] ensc (~ensc@134.109.116.202) returned to #vserver.
[08:08] Bertl_zZ (~herbert@212.16.62.51) got lost in the net-split.
[09:35] alekibango (~john@b59.brno.mistral.cz) left irc: Remote host closed the connection
[10:50] loger joined #vserver.
[11:50] mhepp (~mhepp@r72s22p13.home.nbox.cz) joined #vserver.
[11:53] mhepp (~mhepp@r72s22p13.home.nbox.cz) left irc: Remote host closed the connection
[12:57] Simon (~sgarner@apollo.quattro.net.nz) left irc: Quit: so long, and thanks for all the fish
[14:08] Nick change: _Zoiah -> Zoiah
[14:39] virtuoso (~shisha@tower.ptc.spbu.ru) left irc: Ping timeout: 492 seconds
[14:42] virtuoso (~shisha@tower.ptc.spbu.ru) joined #vserver.
[14:42] re :)
[16:10] say (~say@212.86.243.154) joined #vserver.
[16:10] hi all!
[16:12] matta: hi, i sent to Alex patch about your oops in raw sockets yesterday night.
[16:29] loger joined #vserver.
[16:30] people, is there matta here?
[17:20] Nick change: unriel -> riel
[17:24] RH_ (~john877@cc-ubr03-24.171.20.14.charter-stl.com) joined #vserver.
[17:24] netrose (~john877@cc-ubr03-24.171.20.14.charter-stl.com) left irc: Read error: Connection reset by peer
[17:25] netrose (~john877@24.171.20.14) joined #vserver.
[17:25] RH_ (~john877@cc-ubr03-24.171.20.14.charter-stl.com) left irc: Read error: Connection reset by peer
[18:07] virtuoso (~shisha@tower.ptc.spbu.ru) left irc: Ping timeout: 480 seconds
[18:11] virtuoso (~shisha@tower.ptc.spbu.ru) joined #vserver.
[18:35] LL0rd (dr@pD9507ED5.dip0.t-ipconnect.de) joined #vserver.
[18:36] hi, can anybody tell me, how to prevent the listening on all IPs of the master server by a vserver client?
[19:02] matta (matta@tektonic.net) left irc: Quit: Hey! Where'd my controlling terminal go?
[19:14] netrose (~john877@24.171.20.14) left irc: Ping timeout: 513 seconds
[19:25] mhepp (~mhepp@r72s22p13.home.nbox.cz) joined #vserver.
[19:33] noel- (~noel@80.142.159.199) joined #vserver.
[19:40] noel (~noel@80.142.187.92) left irc: Ping timeout: 483 seconds
[20:02] Bertl (~herbert@MAIL.13thfloor.at) joined #vserver.
[20:03] hi all!
[20:09] Nick change: noel- -> noel
[20:14] virtuoso (~shisha@tower.ptc.spbu.ru) left irc: Ping timeout: 483 seconds
[20:42] serving (~serving@213.186.189.216) left irc: Ping timeout: 480 seconds
[20:43] shadow (~umka@212.86.233.226) joined #vserver.
[20:43] Hi all
[20:43] hi alex!
[20:44] Hi Herbert
[20:48] @alex I did not understand your reply to my bind9 posting ... could you explain?
[20:53] Bertl> from capabilites opened with CAP_SYS_RESOURCE - for bind need only one = /* Override resource limits. Set resource limits. */
[20:55] yeah, and I don't know why it would need that anyway ... ;)
[20:56] maybe my suggestion to have more fine grained capabilities for vserver isn't so useless at all ... *G*
[20:58] :)
[21:04] matta (matta@tektonic.net) joined #vserver.
[21:04] hi!
[21:05] hi matt!
[21:05] Bertl I actually had a question for you...
[21:05] netrose (~john877@cc-ubr03-24.171.20.14.charter-stl.com) joined #vserver.
[21:05] for quota support in vservers with your vr patch
[21:05] I just needed the vr0.13 patch correct?
[21:05] Hi Matt
[21:05] hey, now I feeling useful again ;)
[21:06] hi alex!
[21:06] i got quotacheck working
[21:06] but it doesn't update the used space
[21:07] i think it may be a problem with my host server though, odd error..
[21:07] [root@vps4 conf]# quotaon -a
[21:07] quotaon: using /home/aquota.group on /dev/hda7 [/home]: No such device or address
[21:07] quotaon: using /home/aquota.user on /dev/hda7 [/home]: No such device or address
[21:07] this is on the host server?
[21:08] yes
[21:08] ever seen that before?
[21:08] [P] ls -la /dev/hda7
[21:08] brw-rw---- 1 root disk 3, 7 Apr 11 2002 /dev/hda7
[21:08] /dev/hda7 on /home type ext3 (rw,usrquota,grpquota,tagctx)
[21:08] did you add a quota hash?
[21:08] really odd
[21:08] yes...
[21:09] do I need modified quota-tools on host server?
[21:09] which quotatools version?
[21:09] quota-3.06-9.7
[21:10] hmm, I used 3.07 IIRC, and would suggest 3.09 but the same tools work _inside_ a vserver?
[21:10] well, quotacheck works
[21:10] i had a question about that actually
[21:10] with vrsetup
[21:11] do I need to create a vroot device per disk or per context?
[21:11] quotacheck doesn't do any relevant interaction with the kernel ...
[21:11] you need a vroot per disk/partition ...
[21:11] ok
[21:11] should quotaon be run from within the vserver?
[21:12] if you want quota for that vserver, yes of course ...
[21:12] hrm
[21:12] so then I need the cap_quotactl patch and the vserver-0.23qc tools ?
[21:13] really depends ... in c17f the cap_quotactl is included ... (I hope, let me see ;)
[21:13] oh
[21:13] i only have c17e
[21:13] fuck.
[21:13] oh well guess this needs to be done later then..
[21:13] c17e does not include it ...
[21:13] so end of that subject
[21:14] another question... will 1.0 be released soon, and will that released have re-diffed versions of rmap, O(1), ml, dl ?
[21:14] but it is strange, you should not receive an error if the tools are correct on the host ...
[21:14] quotactl(0xff800002 /* Q_??? */|USRQUOTA, "/dev/hda7", 2, {1836017711, 1902194533, 1635020661, 1702065454, 114, 17, 1986356271, 1633970223}) = -1 ENXIO (No such device or address)
[21:14] write(2, "quotaon: ", 9quotaon: ) = 9
[21:14] write(2, "using /home/aquota.user on /dev/"..., 72using /home/aquota.user on /dev/hda7 [/home]: No such device or address
[21:14] ) = 72
[21:15] 1.0.0 will be released soon, none of the mentioned features will be included in this 'stable' release, but all patches will be rediffed/available on demand ...
[21:15] Bertl: including O(1)? how is work progressing on thaT?
[21:15] well, no work, and yet, no feedback from any tester :(
[21:15] well
[21:15] i will try to test then
[21:16] seeing as I have a SMP server with 1000+ procs the O(1) scheduler would be a huge benefit
[21:16] so it is important to me
[21:16] sam had a look at it, but nothig conclusive ...
[21:16] i saw his mail.. did you try what he said?
[21:17] guess we'll have to add some knobs to allow the bucket hash to be tuned ...
[21:17] nope, had no time for testing yet ...
[21:17] what approach did you take btw? I know Sam had a "25%" compile time limit (tuneable)
[21:17] isn't your approach more auto-tuning ?
[21:18] nope, as I wrote in the mail, I grabbed the -aa scheduler, removed everything not really required, grabbed sam's scheduler stuff for ctx17 and merged both ...
[21:19] oh
[21:19] so it's the same
[21:19] so it should have/provide the same tunability like the original version ...
[21:19] is it difficult to add sysctl tuneables ?
[21:19] not exactly, but basically yes ... I wasn't sure anybody would like to have this stuff anyway ;)
[21:19] okay I'm lagging behind: sync! sync!
[21:20] in my opinion the O(1) scheduler is a big point
[21:20] not really difficult, I would suggest some per context tuning and some general parameters ...
[21:20] right now it's easy for a vserver to hog the CPU
[21:21] Bertl: well, my thought on it is that a server may start with 4 vservers and after a few months be at 30
[21:21] in which case the scheduler would need to be changed how it works
[21:21] and people shouldn't have to reboot for that
[21:21] I would suggest to combine the new syscall version (c17g) and the scheduler, so we can add some syscalls to modify/tune the scheduler ...
[21:25] Bertl: that would work
[21:26] I think i've already covered all the reasons I think the O(1) scheduler is important to have.
[21:27] right now it's entirely possible for 1 vserver to hog the cpu even using the sched flag
[21:27] even with a nice of 19
[21:27] hmm, yes in a unfortunate situation ...
[21:27] well, like I have a customer who runs "safelists"
[21:27] but I'm not sure the O(1) scheduler will actually solve that ...
[21:28] and he may be running up to 100 exim processes at a time
[21:28] and it drives the loads up to 8 and makes everything very sluggish
[21:28] Bertl: well, in sam's e-mail to you he seemed to hint it would
[21:28] you want CKRM to solve this problem
[21:29] i can understand under heavy load the cpu pegged at 0% idle
[21:29] on a typical vserver host (SMP, big RAM) the cause for the sluggishness will probably be the I/O system ...
[21:29] but low usage contexts should receive a 'bigger slice of the pie'
[21:29] for 2.4 you could hack up my fair scheduler patch to work on a per vserver basis
[21:29] Bertl: well, additionally O(1) will help with scheduling and SMP scalability on those servers
[21:29] shouldn't take most of you more than 30 minutes to make that happen
[21:30] riel: i remember looking at that a long time ago...
[21:30] riel: patches?
[21:30] :)
[21:30] lol
[21:30] I bet Bertl or mcp could do a port of that patch to schedule on a vserver basis instead, within 30 minutes or so
[21:31] Bertl: ?
[21:31] http://surriel.com/patches/ has the thing
[21:31] hmm, the issue with scheduler stuff is, there is no way to test it, but in simulations and more simulations or on production systems ;)
[21:31] http://fairsched.sourceforge.net/
[21:32] well, riel's patch has undergone much scutiny
[21:32] Bertl: unless you use a scheduler patch that is simple enough you can understand it within 30 minutes
[21:32] and the question is, now that we have a O(1) scheduler, do you want 'another' solution?
[21:32] i remember finding many mailing list threads where people reported good results
[21:32] my scheduler patch is absolutely not fancy and doesn't solve all your problems ... but it is simple enough to understand within an hour
[21:32] riel: but I see what Herbert is thinking
[21:32] Bertl: is the O(1) scheduler the answer to matta's problem ?
[21:32] the O(1) scheduler solves a different problem
[21:33] riel: that's the question we don't know :)
[21:33] don't get me wrong, I ported a _lot_ of use-full/less stuff ... so this shouldn't be an issue ;)
[21:33] riel: what about your patch AND the O(1) ? do they conflict too much?
[21:33] yes, they conflict too much ;)
[21:33] hrm
[21:33] my scheduler patch is _less_ efficient than the mainstream 2.4 scheduler
[21:33] regarding the scheduler, everything conflicts ;)
[21:33] but it does allow you to fairly divide time between vservers
[21:34] whoa, not yet ;)
[21:34] well, in theory I think the O(1) for be more beneficial for large servers
[21:34] let's say hypothetically we have a dual processor server with 4000 processes
[21:34] it's going to help a lot as far as scalability
[21:35] @riel is 2.4.19 really the most recent version 8-)
[21:35] or what about a quad processor server (real or hyperthreading)
[21:35] in the long run I think we will see vserver being run on larger hardware
[21:35] like RS/6000 ;)
[21:35] as companies may choose to purchase 1 large server instead of many low end
[21:36] like at a development company where I was the neteng/sa we had a room full of old pentiumIII servers
[21:36] each doing something like mail, dns, webserver...
[21:36] low load, but each needed different software versions or something
[21:36] maybe we should start by 'looking' at the problem ...
[21:36] in that situation it may have been better to buy a quad xeon and run vserver on it
[21:37] Bertl: 2.4.20 I think, maybe even .21
[21:37] matt, you say you had some scenarios, where the current combo (mainstream/c17f) doesn't do what you expect ...
[21:37] however, nothing really changed in the scheduler
[21:37] this is what I know:
[21:38] Customer was doing much e-mail, almost always 50 'exim' processes running at a time, sometimes up to 100
[21:38] he was running with the 'sched' flag and had a S_NICE value of 19
[21:38] it's ok if that customer runs slow
[21:38] riel: yes
[21:38] it is not ok if that customer slows down the rest of the system
[21:39] optimally, the scheduler should use the context's nice value as a key
[21:39] or perhaps a new value in the config file
[21:39] riel: exactly.
[21:39] http://surriel.com/patches/2.4/ 2.4.19 is the only one :(
[21:40] Bertl: hrmmmm, let me search for it
[21:40] an interesting note is that according to sar, the server normally runs at 1.5 procs/s, with this vserver running it jumped to 600 procs/s
[21:40] maybe the scheduler changed too little to bother putting a new patch online
[21:40] most likely because of exim processes spawning
[21:41] well, probably you had a lot of forks/resource request/releases ...
[21:41] oh I found it
[21:41] http://fairsched.sourceforge.net/
[21:41] what about that?
[21:41] oh, seems outdated..
[21:42] last patch is against 2.4.0-test3
[21:42] well, maybe we can rediff the comments then?
[21:42] http://imladris.surriel.com/nlo-fairsched.patch
[21:43] hate that bitkeeper patches, against what version is this?
[21:43] http://www.cocodriloo.com/~wind/fairsched/
[21:43] that includes notes for porting your patch to O(1)
[21:43] something recent
[21:43] like 2.4.20 or so
[21:44] matta: does it work ? ;)
[21:44] (it might, you just need to test it)
[21:44] http://www.cocodriloo.com/~wind/fairsched/fairsched-2.5.66-A5-patch
[21:45] Pending work
[21:45] Make it work on UML.
[21:45] Make it work on a UP physical machine.
[21:45] this looks interesting (if it works)
[21:45] Make it work on a SMP physical machine.
[21:45] doesn't look too interesting, to be honest
[21:45] :)
[21:45] for 2.6 we can simply use the CKRM framework
[21:45] guess that answers the 'will it work' question ...
[21:45] and backporting a 2.5 patch to 2.4 could be a lot more work than what you want for one feature
[21:46] riel> CKRM not best for all....
[21:46] not yet
[21:46] but CKRM will probably end up being better than the less flexible hacks
[21:47] I know it will be a lot of work to make CKRM good quality
[21:47] but I suspect it'll be worth it
[21:47] I'm sure CKRM will rule the world, if development runs in the right direction %)
[21:48] right now my biggest worry is that I can't see the direction CKRM is taking
[21:48] was it straight left or just right forward?
[21:49] it's hidden inside IBM ;/
[21:49] no community participation yet
[21:50] I suppose I could blame myself for that, but I'm still reading selinux code ;)
[21:50] netrose (~john877@cc-ubr03-24.171.20.14.charter-stl.com) left irc: Ping timeout: 480 seconds
[21:50] where can I grab 2.4.23-pre7 tarball?
[21:50] well I would say, if somebody is actively interested in improving the O(1) scheduler/ctx interaction, which could be beneficial for 2.6 too (hint! hint!) we can add some knobs/develop some tests ...
[21:50] Bertl: i'm looking to apply the patch now on test server
[21:51] just to see what I can test
[21:51] grab 2.4.22 tarball and 2.4.23-pre7 patch ...
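A rough worked example of the problem matta describes above (the 'sched' flag plus S_NICE 19 not keeping a busy context in check): in the stock 2.4 scheduler, timeslices are handed out per process rather than per context, so 100 nice-19 exim processes still add up to a large aggregate share. The numbers below are purely illustrative; the process counts are invented, and the formula is the 2.4 NICE_TO_TICKS macro assuming HZ=100.

/* Back-of-the-envelope illustration (not kernel code): aggregate CPU share
 * of a context full of nice-19 processes vs. the rest of the host, using
 * the 2.4 formula NICE_TO_TICKS(nice) = ((20 - nice) >> 2) + 1 for HZ=100. */
#include <stdio.h>

static int nice_to_ticks(int nice)
{
    return ((20 - nice) >> 2) + 1;   /* nice 0 -> 6 ticks, nice 19 -> 1 tick */
}

int main(void)
{
    int busy_procs = 100, busy_nice = 19;   /* the "safelist" context (invented count) */
    int other_procs = 20, other_nice = 0;   /* everything else on the host (invented count) */

    int busy  = busy_procs  * nice_to_ticks(busy_nice);
    int other = other_procs * nice_to_ticks(other_nice);

    printf("busy context : %3d procs * %d ticks = %d\n",
           busy_procs, nice_to_ticks(busy_nice), busy);
    printf("rest of host : %3d procs * %d ticks = %d\n",
           other_procs, nice_to_ticks(other_nice), other);
    printf("busy context gets roughly %.0f%% of the CPU despite nice 19\n",
           100.0 * busy / (busy + other));
    return 0;
}

With these made-up numbers the niced context still collects close to half of the CPU, which is why the discussion keeps coming back to scheduling per context (fair scheduler, CKRM-style classes) rather than per-process nice values.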
h
[21:51] there is no way I'll be making vserver specific scheduler hacks for 2.6
[21:51] any scheduler changes should be generic, IMHO
[21:51] usable for much more than just vserver
[21:52] hmm, so you _plan_ to have vserver running without a special scheduler adaptation?
[21:53] no, I plan to have a scheduler change that can be used for non-vserver stuff, too
[21:53] i don't understand
[21:53] scheduler has to know about contexts
[21:53] scheduler should be able to schedule fairly between random resource containers
[21:53] hmm, interesting times ... hope rik starts soon with the 2.6 port ;)
[21:53] Bertl: agreed :)
[21:53] for vserver you would associate one resource container with each vserver
[21:53] but you could also use the resource classes for something else
[21:54] say, dividing resources between apache, dns and the mta
[21:54] yeah, we had associative arrays and associative wossnames, now we'll have associative containers too ...
[21:55] I'll buy it as soon as I see some working code ...
[21:55] maxinum number of cpu's kernel option? never seen that one before :)
[21:55] agreed on that point
[21:56] the code is important
[21:56] @matt so you never used my patchsets then *grrr*
[21:57] Bertl: eh?
[21:57] Bertl: i don't see it on c17e...
[21:57] http://www.13thfloor.at/VServer/patches-2.4.22-c17/06_4_max_nr_cpu.patch.bz2
[21:57] no
[21:57] and several kernel before ...
[21:58] now it seems to ahve made it into the kernel ;)
[21:58] i don't use any of those patches no
[21:58] i don't know what they are/do
[21:58] as I said ...
[21:58] some look interesting, but I don't know what "libata updates" does
[22:06] okay, have to leave now ... cu l8er ...
[22:06] Nick change: Bertl -> Bertl_oO
[22:06] bye Herbert
[22:06] exit
[22:14] lol
[22:14] ?
[22:30] mmmmm, looks like it might be possible do about half of vserver using the selinux code
[22:30] that reduces the code impact quite a bit
[22:31] not a half.
[22:32] all the access separation
[22:32] reduces the size of the patch quite a bit
[22:32] not all access
[22:33] how i see - selinux not separate access to network subsystem.
[22:35] or i wrong ?
[22:38] yeah, just access between processes
[22:38] and access to certain network addresses
[22:38] not all the virtualisation can be done using selinux
[22:38] just like 2.4 vserver already uses the vfs layer to take care of the filesystem separation
[22:39] there is no special vserver_chroot code
[22:39] it just reuses what's already there
[22:40] riel> you see herberts post in linux-kernel@ about problems with CLONE_NEWNS and pivot_root ?
[22:40] yeah, I saw it
[22:40] I wish I understood all of it
[22:40] I don't know enough about the VFS layer, yet
[22:41] kloo (~kloo@213-84-79-23.adsl.xs4all.nl) left irc: Read error: Connection reset by peer
[22:41] problem is easy - opened shared libraries.
[22:42] for skip it - need special function - migrate to namespace.
[22:42] those get closed at exec() time ;)
[22:42] not.
[22:43] not?
[22:43] not.
[22:44] it can be skip if comple static binaries but i don`t test it.
[22:44] serving (~serving@213.186.189.216) joined #vserver.
[22:48] noel (~noel@80.142.159.199) got netsplit.
[22:51] how i now close on exec - do close only files opened process - not a nmaped to his address space. because nmaped file not a stored in task->files.
[22:51] i wrong ?
[22:53] JonB (~jon@kg88.kollegiegaarden.dk) joined #vserver.
[22:53] noel (~noel@80.142.159.199) returned to #vserver.
[22:56] hrm
[22:56] how does linux generate it's md5 passwords? doesn't look like a normal md5..
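On the md5 password question: Linux shadow entries of the form $1$salt$hash are not raw MD5 digests (the 5f4dcc3b... value pasted below is the plain MD5 of "password") but glibc's MD5-based crypt(), where the $1$ prefix selects the algorithm and the salt is stored right inside the hash string. A minimal sketch; the password and salt here are invented example values:

/* glibc MD5 crypt sketch: "$1$" selects MD5, the salt follows it and is
 * kept in the resulting string.  Build with: cc md5crypt.c -lcrypt */
#define _XOPEN_SOURCE
#include <crypt.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    /* example password and salt, not taken from the log */
    const char *hash = crypt("password", "$1$abcdefgh$");
    if (hash == NULL) {
        perror("crypt");
        return 1;
    }
    printf("%s\n", hash);   /* prints something like $1$abcdefgh$<22-char hash> */
    return 0;
}

The same $1$ format is what the later remark about Linux and FreeBSD sharing an MD5 password format refers to.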
[22:57] matta> whait is normal md5 ?
[22:58] 5f4dcc3b5aa765d61d8327deb882cf99
[22:58] i guess it has a salt in there
[22:58] not
[23:00] '$' not found...
[23:00] n/m i got it
[23:01] mhepp (~mhepp@r72s22p13.home.nbox.cz) left irc: Remote host closed the connection
[23:03] Means salt - fixed and not stored.
[23:07] mhepp (~mhepp@r72s22p13.home.nbox.cz) joined #vserver.
[23:09] shadow: yeah, you're wrong ;)
[23:09] shadow: on exec() all mmap()d areas are unmapped
[23:12] hm...
[23:13] matta> i see man on linux & freebsd same format for md5 passwords.
[23:14] riel>nmaped aries stored in task->files ? or not ?
[23:14] I'm not sure, but I think they're not
[23:15] but close on exec - closed only this files.
[23:16] if nmaped file not stored in task->files = close on exec - not unmap areas
[23:16] please take a look at exit_mmap()
[23:17] it on exit_task.
[23:17] but not in fork new.
[23:18] well, something similar is called
[23:18] just read the exec code
[23:19] point in kernel ?
[23:21] i not see call exit_mmap in fork.c.
[23:21] fs/exit.c
[23:21] umm exec.c
[23:21] well, exec is not fork ;)
[23:26] i greep in fs/* for use close_on_exec - and see it applyed only for task->files.
[23:26] Thus pivot_root can not be used at start of the programs using dynamic library.
[23:28] Means can not be used for creation new namespace.
[23:28] Or you disagree?
[23:28] ok, let me point out the line to you ;)
[23:29] okey :)
[23:30] fs/exec.c do_execve() calls search_binary_handler()
[23:31] no wait, that's not the stuff I'm searching for
[23:32] i see :)
[23:33] ahhh, look at setup_arg_pages()
[23:34] in file ?
[23:34] ok. found.
[23:35] it`s setup stack pages....
[23:35] in exec.c it's indirectly called
[23:36] inside search_binary_handler(), "retval = fn(bprm, regs);" calls load_elf_binary(), which calls setup_arg_pages()
[23:36] i see.
[23:36] so do_execve() -> search_binary_handler() -> load_XXXXX_binary() -> setup_arg_pages()
[23:37] and setup_arg_pages() destroys the memory of the process, if successful
[23:37] but where nmap currently used nmaped areas ?
[23:37] but where unnmap currently used nmaped areas ?
[23:37] sorry.
[23:37] the executable and libraries are all mmaped
[23:37] look at setup_arg_pages ;)
[23:38] no wait
[23:38] riel - it setup _STACK_ pages.
[23:38] by the time setup_arg_pages() is called, the mm is already destroyed
[23:39] load_elf_binary() calls flush_old_exec()
[23:39] and it called in exec (not a fork) new binary.
[23:39] ok. i see flush_old_exec.
[23:39] which calls exec_mmap()
[23:40] which is only 2 letters different from exit_mmap(), which is why I got confused ;)
[23:40] but it calls exit_mmap(), if it is the last user of an mm
[23:42] netrose (~john877@24.171.20.14) joined #vserver.
[23:43] :)
[23:55] :>
[23:55] whel. i going to sleep.
[23:55] bye to all
[23:56] shadow (~umka@212.86.233.226) left irc: Quit: new day been created.
[00:00] --- Fri Oct 17 2003
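To close the exec()/mmap() thread above: riel's point is that execve() throws away the old address space wholesale (in 2.4: do_execve() -> search_binary_handler() -> load_elf_binary() -> flush_old_exec() -> exec_mmap(), which drops the old mm), so mmap()ed shared libraries vanish at exec even though they never sit in task->files and close-on-exec never touches them. A small userspace demonstration of that, assuming a Linux system with /proc mounted and a readable /etc/hosts:

/* Demo: an mmap()ed file does not survive execve(), even though the
 * mapping is not tracked in the file table (task->files). */
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/etc/hosts", O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    void *p = mmap(NULL, 1, PROT_READ, MAP_PRIVATE, fd, 0);
    if (p == MAP_FAILED) { perror("mmap"); return 1; }
    close(fd);   /* the mapping survives the close: it lives in the mm, not in task->files */

    printf("mapped /etc/hosts at %p; now exec'ing cat /proc/self/maps\n", p);
    fflush(stdout);

    /* execve() builds a fresh address space (flush_old_exec()/exec_mmap()),
     * so /etc/hosts will not appear in the maps of the new image */
    execlp("cat", "cat", "/proc/self/maps", (char *)NULL);
    perror("execlp");
    return 1;
}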