Linode Community Forums
 Post subject: UML performance question
PostPosted: Mon Jan 08, 2007 9:45 pm 
Senior Newbie

Joined: Fri Jun 24, 2005 6:46 pm
Posts: 14
Would disk performance improve if there were a common OS image file stored on a RAM disk, and we all used COW files, stored on magnetic disks, for our individual systems?
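To make that concrete, the layering might look something like this (just a sketch; the mount point, image names, and sizes are all invented, and it assumes UML's standard ubd COW syntax):

Code:
# Shared, read-only base image held on a tmpfs RAM disk (hypothetical paths):
mkdir -p /mnt/ramdisk
mount -t tmpfs -o size=1024m tmpfs /mnt/ramdisk
cp /images/debian-base.img /mnt/ramdisk/

# Each guest layers its own COW file, kept on magnetic disk, over the
# shared base image. UML creates guest1.cow if it doesn't already exist:
linux ubd0=/var/uml/guest1.cow,/mnt/ramdisk/debian-base.img mem=128M

Writes would land in the per-guest COW file on disk; only reads of unmodified base blocks would be served from the RAM disk.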

I can see lots of problems with the idea -- we all use different flavors of Linux, for one. And I doubt the token system could easily be adapted not to charge us for RAM disk accesses.

But putting those problems aside, would such a setup help alleviate the disk drive bottleneck that VPS systems in general (i.e., not just Linode) have?

I could see reasons why it wouldn't -- maybe the COW file system works in a way that would require frequent magnetic disk accesses even if we used a RAM disk for the base image. And in the real world, when you're doing something disk-intensive, you're probably doing it to your own data, not to common OS files. So I don't know if it would help much.

I'm not putting this out there as a practical suggestion, because I know it really isn't practical. I'm just curious if it would help performance much.


PostPosted: Mon Jan 08, 2007 9:58 pm 
Linode Staff

Joined: Tue Apr 15, 2003 6:24 pm
Posts: 3090
Website: http://www.linode.com/
Location: Galloway, NJ
My guess is that it would in some setups, but not in ours.

* UML's COW implementation requires that a backing image and the COW images derived from it be the same size (see the sketch after this list).
* Multiple distros imply multiple COW images.
* Disk performance bottlenecks most commonly come from nodes swapping pages in and out to disk.
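On the first point, the size coupling is baked in when the COW file is created. A sketch using uml_mkcow and uml_moo from uml-utilities (the file names are made up, and the exact invocations are from memory):

Code:
# A COW file records the size and timestamp of its backing file at
# creation time, so each base image needs its own matching COW layer:
uml_mkcow guest1.cow /mnt/ramdisk/debian-base.img
uml_mkcow guest2.cow /mnt/ramdisk/centos-base.img

# uml_moo merges a COW file back into a standalone image:
uml_moo guest1.cow merged-guest1.img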

Also, file-backed disk images have a double-caching effect -- both the UML and the host cache the data, which is inefficient. To avoid the double-caching problem that's inherent in going through the host's vfs layer, we've been deploying hosts with an LVM backend, where each Linode accesses its partition(s) directly through LVM dev nodes.
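Roughly, that looks like this on a host (illustrative only; the volume group and Linode names are invented):

Code:
# One logical volume per Linode, carved out of the host's volume group:
lvcreate -L 4G -n linode1234 vg0

# The guest's ubd device points straight at the LV's dev node instead of
# a disk-image file, so I/O skips the host's vfs layer and the data
# isn't cached twice:
linux ubd0=/dev/vg0/linode1234 mem=256M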

Our newer, hardware-RAID hosts seem to have a much higher tolerance for heavy disk contention. We're planning on deploying a good number of those hosts in anticipation of a huge resource increase (hint hint). So that means more RAM for Linodes, less swapping, and faster host servers that are more tolerant of thrashers.

-Chris


PostPosted: Tue Jan 09, 2007 7:13 pm 
Senior Member

Joined: Fri Oct 24, 2003 3:51 pm
Posts: 965
Location: Netherlands
caker wrote:
We're planning on deploying a good number of those hosts in anticipation of a huge resource increase (hint hint).


Mmmmm - sounds good!

_________________
/ Peter

