From: Tristan Donaldson (tristan.donaldson_at_skinkers.com)
Date: Tue 05 Oct 2004 - 10:41:12 BST
David MacKinnon wrote:
> Tristan Donaldson wrote:
>> I designed and implemented a setup like this for our company. We run
>> two datastorage servers (primary and backup) which replicate to each
>> other using drbd. We then run 4 servers in front of those which are
>> diskless and boot via PXE and mount their root filesystems via NFSv3
>> to the backend datastorage servers. On top of these front end servers
>> we run vservers, which then allow us to separate all of our services
>> and move them between front end servers to deal with hardware failures
>> and load.
> Your setup sounds much like what I'm planning. We have a primary and
> secondary storage, just starting to test drbd now. We weren't going to
> go the diskless frontends though. Like you we have all services inside
> vservers now (except for two legacy hosting boxes, which we're still
> migrating things from)
We have actually had more issues with the replication (with corruption)
than with the vservers. The main problem with running everything across
this sort of setup is when you need to do any disk maintenance, e.g. an
fsck: then you have to take absolutely everything down, which we have
had to do a couple of times.
>> You have to be careful about which applications you run on the front
>> end, as these need to be NFS nice, but most things work. We did have
>> problems with mail queues in postfix, which initially caused lots of
>> corruption; this was fixed by running the mail queue inside a ram
>> disk at first, and then by switching to a loopback device.
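The loopback-device approach mentioned above amounts to putting the queue on an image file that sits on local disk. A rough sketch (sizes and paths are made up; making the filesystem and mounting it need e2fsprogs and root, so those steps are shown as comments):

```shell
# Hypothetical loopback-backed mail queue: create a sparse 512 MB image
# file on local disk to hold the Postfix queue.
IMG=$(mktemp)
dd if=/dev/zero of="$IMG" bs=1M count=0 seek=512 status=none

# Then, as root, put a filesystem on it and mount it over the queue:
#   mke2fs -F -q "$IMG"
#   mount -o loop "$IMG" /var/spool/postfix
# The whole queue now lives on one local filesystem instead of NFS.
```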
> That's a bit disappointing, we're looking at moving to postfix for mail,
> I thought postfix actually played nicely with NFS. We currently use
> qmail and I really really want to get away from it (don't ask, or I'll
> probably start ranting :p). I suppose we can have the queue on local
> storage on the front-ends, it's just less elegant. :(
Yes, this is exactly what we thought. We did quite a bit of research on
the internet to make sure everything we ran was NFS nice. I didn't test
everything across NFS, as we didn't have the machines; I was relying on
that research. Once the production servers had arrived and we had set
most of it up, we ran into the mail queue problems over NFS described
above. We thought it could be solved by placing some of the mail queue
directories on a local disk, but the whole mail queue must be on the
same filesystem (something to do with postfix using inode numbers). The
delivery directories can be on NFS.
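A quick way to verify the "whole queue on one filesystem" constraint is to compare device numbers of the queue directories. This is a hypothetical check (a temp directory stands in for /var/spool/postfix in the demo):

```shell
# Check that all queue directories share one filesystem: Postfix tracks
# queue files by inode number, so renames/hard links between queue
# directories must not cross filesystem boundaries.
QUEUE=$(mktemp -d)   # stand-in for /var/spool/postfix
mkdir -p "$QUEUE/incoming" "$QUEUE/active" "$QUEUE/deferred"

base=$(stat -c %d "$QUEUE")
for d in incoming active deferred; do
  dev=$(stat -c %d "$QUEUE/$d")
  [ "$dev" = "$base" ] || echo "WARNING: $d is on a different filesystem"
done
```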
>> We did have a number of issues with IO performance. Since we
>> actually have a firewall between the NFS servers and the front end
>> servers, we had performance problems with all of the udp traffic
>> creating states on the firewall. We have changed to using NFS over
>> TCP. We also use NFSv3 rather than NFSv2 as it is a lot faster when
>> running under the sync option (which you have to run).
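The TCP/NFSv3 combination described above is just a matter of mount options. An example fstab line (server name and paths are invented; in a real setup this goes in /etc/fstab on the front ends, with `sync` also set on the server-side export — here it is only written to a scratch file):

```shell
# Hypothetical fstab entry: mount the root share over NFSv3 using TCP
# rather than UDP, so no per-packet UDP state is created on a firewall
# in between.
FSTAB=$(mktemp)
cat > "$FSTAB" <<'EOF'
storage1:/exports/frontend1  /  nfs  nfsvers=3,proto=tcp,sync,hard,intr  0 0
EOF
cat "$FSTAB"
```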
> I hadn't considered this. Something I'll need to look into.
TCP, from what we could tell, didn't slow the NFS down much, but during
a failover of the datastorage server from primary to secondary, the NFS
connections seemed to reconnect a lot faster (TCP probably doesn't need
to detect the timeouts the same way as UDP does).
We did do some Bonnie IO tests over NFS, but I can't find the results we
got.
Sorry, this message was not so much about vservers as about NFS.
Vserver mailing list