Re: [vserver] Avoiding kernel internal routing among vserver clients

From: Konrad Gutkowski <konrad.gutkowski_at_ffs.pl>
Date: Wed 08 Aug 2007 - 12:26:13 BST
Message-ID: <CD5FBC0544334C5298DD0E4DDE0C9FCC@konrad>

You could use a bridge with ebtables for filtering and send all packets to
the GW with dnat... just a loose idea, I don't know whether it would work
(or even whether you can add a br interface to a vserver)
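A minimal, untested sketch of that idea (the bridge name br0 and the
gateway MAC are made up, like the idea itself):

  # hypothetical MAC of the external gateway
  GW_MAC=00:16:3e:00:00:01

  # rewrite the destination MAC of IPv4 frames entering the bridge, so
  # guest-to-guest traffic is sent out to the gateway instead of being
  # switched locally
  ebtables -t nat -A PREROUTING --logical-in br0 -p IPv4 \
           -j dnat --to-destination $GW_MAC

  # and/or simply refuse to forward frames between two ports of br0
  ebtables -A FORWARD --logical-in br0 --logical-out br0 -j DROP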

Konrad Gutkowski

----- Original Message -----
From: "Herbert Poetzl" <herbert@13thfloor.at>
To: "Thomas Weber" <l_vserver@mail2news.4t2.com>
Cc: <vserver@list.linux-vserver.org>
Sent: Wednesday, August 08, 2007 11:06 AM
Subject: Re: [vserver] Avoiding kernel internal routing among vserver
clients

> On Wed, Aug 08, 2007 at 03:25:29AM +0200, Thomas Weber wrote:
>> Am Mittwoch, den 08.08.2007, 01:48 +0200 schrieb Herbert Poetzl:
>>
>> > > That was about the first thing I tried. Routing to anything that's
>> > > not locally hosted works just fine. But once you try to reach another
>> > > vserver on another subnet that happens to be hosted on the same host,
>> > > it will route internally and not hit the wire at all - which is bad
>> >
>> > which is actually quite good, as it avoids flooding
>> > the net (even a local network) with unnecessary
>> > packets ...
>>
>> don't try to sell me this as a feature :-)
>
> that's not a feature, that is how Linux networking
> works, and most folks see it as an advantage to have
> lightning-fast networking between different guests
> on the same host ...
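
This short-circuit is visible in the kernel's routing tables: every address
configured on the host lands in the 'local' table, so a lookup between two
same-host guests resolves to lo and never reaches the wire. A quick
illustration, with made-up guest addresses:

  # every locally configured address shows up in the 'local' table
  ip route show table local
  # local 192.168.1.10 dev eth0 ...   (guest A)
  # local 192.168.2.10 dev eth0 ...   (guest B)

  # a lookup between the two resolves to the loopback device
  ip route get 192.168.2.10 from 192.168.1.10
  # local 192.168.2.10 dev lo src 192.168.1.10 ...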
>
>> If I could opt in/out I'd agree.
>> From inside the vserver you just see your one interface and wouldn't
>> expect certain packets to be routed completely differently from the
>> rest.
>
> but you do expect that local LAN traffic does not
> go over the gateway in a typical setup :)
>
>> > > and actually makes vservers unusable if you want to move them
>> > > among different hosts.
>> >
>> > why do you think so? this exact setup works
>> > perfectly fine here, at least ...
>> >
>> > > Firewalling between the vserver clients, for example, is not
>> > > manageable.
>
>> > you just make the firewall rules for ethX _and_ lo
>> > and you are perfectly fine, wherever the guest is
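
A minimal sketch of that duplication, with an assumed eth0 and example
addresses; the point is that the same rule appears once per path the
packet can take:

  # path 1: over the wire, when the two guests sit on different hosts
  iptables -A INPUT -i eth0 -s 192.168.1.10 -d 192.168.2.10 \
           -p tcp --dport 80 -j ACCEPT
  # path 2: over lo, which carries the traffic when the guests
  # happen to share a host
  iptables -A INPUT -i lo -s 192.168.1.10 -d 192.168.2.10 \
           -p tcp --dport 80 -j ACCEPT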
>>
>> 3 hosts, 2 production, one for development/testing, later maybe more.
>> I'd have to manage firewalling rules on the GW and on 3 hosts. The
>> one responsible for the GW is not the one responsible for the vserver
>> hosts. Managing 3 different systems (GW, production, development) with
>> their own firewalling semantics for the same rules on 4+ boxes is
>> asking for trouble.
>
>> Don't you think that'd be bad design?
>
> if you go for a completely virtualized network stack
> (mainline is already working on that) instead of the
> lightweight IP isolation that Linux-VServer does, and
> do not mind the larger resource overhead, the
> drastically increased traffic on your DMZ network,
> and the lower network performance (virtualization has
> quite a noticeable overhead here too), then you can
> get a setup where all traffic (even naturally
> host-local traffic) is routed to the gateway and back
> again ...
>
> you can also do some tricky NAT-ing and make the
> outgoing IPs non-local (as I showed in a quite old
> ML posting), but I would not suggest doing so ...
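
Not the old posting Herbert mentions, just one illustrative shape such a
NAT trick could take, with made-up addresses: rewrite the other guest's
address to a non-local alias so the packet is forced off the host, then
undo the rewrite on the gateway. The reply direction needs the
mirror-image rules, which is part of why this gets ugly:

  # on the vserver host: guest A (192.168.1.10) talking to guest B
  # (192.168.2.10) is redirected to a non-local alias on the gateway
  iptables -t nat -A OUTPUT -s 192.168.1.10 -d 192.168.2.10 \
           -j DNAT --to-destination 10.0.0.210

  # on the gateway: translate the alias back to guest B's real address
  iptables -t nat -A PREROUTING -d 10.0.0.210 \
           -j DNAT --to-destination 192.168.2.10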
>
>> > > IDS would be another issue.
>> >
>> > assuming that IDS stands for Intrusion Detection System,
>> > what problem do you see with that?
>>
>> An IDS set up on the GW won't see all vserver-to-vserver traffic.
>> Same with accounting etc.
>
>> In case of an incident where one of the production machines goes down
>> and the other one hosts all the vservers, accounting would show less
>> traffic and the IDS wouldn't see anything at all.
>
> yeah, maybe Xen or even QEMU is a better approach
> for your specific requirements ...
>
> best,
> Herbert
>
>> Tom
>
