Re: [vserver] Re: [Freedombox-discuss] [vserver] Re: A software architecture for the FreedomBox

From: Gordan Bobic <gordan_at_bobich.net>
Date: Thu 14 Apr 2011 - 23:50:06 BST
Message-ID: <4DA77A1E.20503@bobich.net>

On 04/14/2011 11:21 PM, Martin Fick wrote:
> --- On Thu, 4/14/11, Gordan Bobic <gordan@bobich.net> wrote:
>>> Actually, I can think of a new approach to unification that
>>> might have some benefits which the current approach does not
>>> have. Currently, I use vservers with drbd, and each vserver
>>> has its own partition (so that each vserver can fail over
>>> independently). The separate partitions mean that I cannot
>>> use unification with my vservers.
>>
>> Why do you need separate partitions? Why not have a single
>> partition, mirrored between the hosts? All guests are in a
>> separate /vserver subdirectory anyway. What are you gaining
>> from having a separate partition per guest?
>
> I think you missed the "independently" part. :) A
> single partition cannot be mounted on both hosts at
> the same time with DRBD.

DRBD can do active-active, but you'll need a cluster FS to achieve that,
and no such FS has vserver patches for CoW hardlinks. But yes, I can
understand what you mean. However, what use case do you have where one
guest fails unrecoverably on one machine but resumes working on another
machine with the exact same FS? In what case would a single guest fail
without all of them failing?

>>> Ideally, it would be nice to have a CoW-like hard link
>>> mechanism that is able to hardlink to files in other
>>> partitions/fses. That would also help in the case of the
>>> cross-snapshotting idea mentioned in the btrfs threads
>>> you linked to. So, how could cross filesystem COW
>>> hardlinks be implemented?
>>
>> I don't think that's implementable with the current file
>> system model. It also wouldn't help you since you would end
>> up with a hard-link pointing to a partition that isn't
>> necessarily the one you have associated with the guest that
>> is failing over, so you'd end up failing over a guest
>> without all its files being available.
> ...
>> Same inode on a different file system wouldn't mmap to the
>> same place in memory. But I'm still not sure what you are
>> gaining by splitting up the file systems.
>
> I misdescribed it above; my solution did not actually
> have a real cross-filesystem hard link. It uses
> stacking to simulate one. The shared files are on the
> same partition, so they should be the same inode.
> Another benefit of this solution over the current
> vserver solution is that it should work with any FS.

How will it work safely without the inode being marked CoW?
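
To make the hazard concrete: hardlinked names share one (device, inode)
pair, so without an immutable/CoW flag a write through any one guest's
link silently changes what every other guest sees. A minimal C sketch of
both points (the guest paths and file names are made up):

    /*
     * Minimal sketch: two hardlinked names share one inode, so without
     * a CoW/immutable flag a write through either name changes the data
     * seen through both. The guest paths and file names are made up.
     */
    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/stat.h>
    #include <unistd.h>

    int main(void)
    {
        mkdir("guest_a", 0755);
        mkdir("guest_b", 0755);

        int fd = open("guest_a/libfoo.so", O_CREAT | O_WRONLY | O_TRUNC, 0644);
        write(fd, "original", 8);
        close(fd);

        /* "Unify" the file into the second guest as a plain hardlink. */
        link("guest_a/libfoo.so", "guest_b/libfoo.so");

        /* A file's identity is its (st_dev, st_ino) pair: equal here. */
        struct stat a, b;
        stat("guest_a/libfoo.so", &a);
        stat("guest_b/libfoo.so", &b);
        printf("same inode: %s\n",
               (a.st_dev == b.st_dev && a.st_ino == b.st_ino) ? "yes" : "no");

        /* Nothing stops guest B writing through its own link... */
        fd = open("guest_b/libfoo.so", O_WRONLY);
        write(fd, "clobbered", 9);
        close(fd);

        /* ...and guest A silently sees the modified data too. */
        char buf[16] = { 0 };
        fd = open("guest_a/libfoo.so", O_RDONLY);
        read(fd, buf, sizeof buf - 1);
        close(fd);
        printf("guest A sees: %s\n", buf);
        return 0;
    }

The unification patches avoid exactly this by marking the shared inode
immutable-but-unlinkable: a guest can replace the file (which breaks the
link and gives it a private copy) but can never modify it in place.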

Gordan
