From: Herbert Poetzl (herbert_at_13thfloor.at)
Date: Fri 01 Apr 2005 - 20:06:27 BST
On Fri, Apr 01, 2005 at 12:23:00PM -0600, Matthew Nuzum wrote:
> > On Thu, Mar 31, 2005 at 09:22:10PM -0600, Matthew Nuzum wrote:
> > > I think I can create a test case for this. I have a server that is not
> > > currently running any vserver stuff that will be ok with a reboot now
> > > and then.
> > sounds good, please try to get 18.104.22.168 working there,
> > because it already contains some blkio accounting
> > and it would be very interesting to monitor those
> > values ... (maybe with rrdtools)
> > TIA,
> > Herbert
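The blkio values Herbert mentions could be graphed with rrdtool roughly like this (a sketch; the device name, polling interval, and RRD layout are my assumptions, not anything the vserver patch ships):

```shell
# Extract sector counters for one disk from /proc/diskstats.
# Field 6 is sectors read, field 10 is sectors written (per the
# kernel's Documentation/iostats.txt); the device name is an
# assumption for illustration.
sample_blkio() {
    awk -v dev="$1" '$3 == dev { print $6 ":" $10 }' /proc/diskstats
}

# One-time RRD setup, then a polling loop feeding it -- for example:
#   rrdtool create blkio.rrd --step 60 \
#       DS:rd:COUNTER:120:0:U DS:wr:COUNTER:120:0:U \
#       RRA:AVERAGE:0.5:1:1440
#   while sleep 60; do rrdtool update blkio.rrd "N:$(sample_blkio sda)"; done
```

Graphing the resulting RRD then shows whether one context's I/O starves the other over time.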
> I'm still doing my month-end backup, but when that's done I'll start
> installing the vserver 22.214.171.124.
> Here is the test case that seems most logical to me, but advice on how to
> actually do concrete tests would be useful.
> 1. Create two vservers (vsa and vsb), start both.
> 2. In vsa start some heavily i/o intensive operation
> 3. In vsb try to do some tasks and notice how much i/o bandwidth I have
> Alternative plan:
> 1. Create 1 vserver and start it
> 2. In the vserver, start some heavily i/o intensive operation
> 3. In the host server try to do some tasks and notice how much i/o bandwidth
> I have available
> 4. After step 2 completes, in the host server start a heavily i/o intensive
> operation
> 5. In vserver, try to do some tasks and notice how much i/o bandwidth I have
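As a concrete shape for steps 2-5 above, something like the following could be run in the two contexts (a sketch; the sizes, paths, and the use of dd are my choices, not part of the plan itself):

```shell
# Step 2 -- in vsa (or the vserver): a background I/O hog
dd if=/dev/zero of=/tmp/hog bs=1M count=128 conv=fsync 2>/dev/null &
HOG=$!

# Step 3 -- in vsb (or the host): a fixed unit of work, wall-clock timed
t0=$(date +%s)
dd if=/dev/zero of=/tmp/probe bs=1M count=16 conv=fsync 2>/dev/null
t1=$(date +%s)
echo "probe took $((t1 - t0))s with the hog running"

kill $HOG 2>/dev/null
rm -f /tmp/hog /tmp/probe
```

Repeating the timed probe without the hog gives the baseline to compare against.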
> I have two ideas for a heavily i/o intensive operation:
> 1. I have a database with 35 million records. Doing any aggregate function
> such as max() requires several sequential scans and takes a significant
> amount of time.
> 2. Preparing my month end backup requires copying 13 GB of data.
> Any other suggestions?
> I have only subjectively noticed a dramatic decrease in server performance
> when a vserver is performing i/o intensive tasks. How can I objectively
> measure and produce concrete numbers?
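One low-tech way to get objective numbers (a sketch; the file name and size are arbitrary, and 64MB is only meaningful if it is not dwarfed by RAM):

```shell
sync
# write rate: dd reports bytes moved and elapsed time when it finishes
dd if=/dev/zero of=/tmp/iotest bs=1M count=64 conv=fsync
# read rate: note this may be served from the page cache unless the
# file is larger than available memory
dd if=/tmp/iotest of=/dev/null bs=1M
rm -f /tmp/iotest
```

Running the same pair once with the vserver load active and once without turns the subjective slowdown into two comparable transfer rates.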
there are two 'aspects' to what you 'experience' as performance
here. first, the increased latency when doing I/O (the result
of several I/O transactions already being in flight when you do
whatever you do), and second, the decreased throughput (which IMHO
is not really the issue here; just think 40MB/s transfer with UDMA
enabled versus 4MB/s without ...)
I would suggest 'testing' with different I/O schedulers activated,
because I think the default I/O scheduler might be sub-optimal
for vserver-type I/O loads anyway ...
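For those scheduler experiments, on 2.6 kernels the elevator can be inspected and switched per disk at runtime via sysfs (a sketch; 'sda' is an assumed device name):

```shell
# the currently active scheduler is shown in brackets
cat /sys/block/sda/queue/scheduler    # e.g.: noop anticipatory deadline [cfq]
echo deadline > /sys/block/sda/queue/scheduler
# alternatively, select one for the whole boot with the kernel
# command-line parameter elevator=deadline
```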
TIA (for testing)
> Matthew Nuzum <matt_at_followers.net>
> www.followers.net - Makers of "Elite Content Management System"
> View samples of Elite CMS in action by visiting
> Vserver mailing list