caker wrote:
Should be all set.
This was due to the crappy I/O scheduler in 2.4 kernels, which essentially DoSes the box if one (or more) processes (like a UML kernel) decide to consume all the disk bandwidth.
I've been testing 2.6 kernels, so I thought we'd give it a shot:
Code:
[root@host11 vm]# uname -a
Linux host11.linode.com 2.6.3-1 #1 SMP Wed Feb 18 10:18:43 EST 2004 i686 i686 i386 GNU/Linux
-Chris
I think that I/O scheduling is the single most important performance factor for Linodes, and that it should be a very high priority to get this problem solved.
As it stands, a Linode can easily hog all of the I/O bandwidth, and the result is that everyone else suffers quite badly. host5 seems to be experiencing bad I/O load a couple of times a day. This might be recent, since I only just started tracking it, but I have seen about half a dozen occurrences per day of the load on my Linode going up to 3, 4, 5, or even 12 (!!!) for a few minutes at a time. My Linode is otherwise unloaded, so this is most likely some other Linode hammering the disk, leaving processes in my Linode waiting a long time for small disk requests (either paging in memory or touching the filesystem).
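For anyone who wants to track this the same way, here is the kind of thing I mean. This is just my own sketch, not a Linode tool; the 3.0 threshold is an arbitrary example, and you'd run it from cron or a loop:

```shell
# Hypothetical monitoring sketch: sample the one-minute load average
# from /proc/loadavg and flag spikes above a threshold.
THRESHOLD=3.0
load=$(cut -d' ' -f1 /proc/loadavg)
if awk -v l="$load" -v t="$THRESHOLD" 'BEGIN { exit !(l + 0 >= t + 0) }'; then
    echo "$(date '+%F %T') load=$load SPIKE"
else
    echo "$(date '+%F %T') load=$load ok"
fi
```

Sampling once a minute was enough to catch the multi-minute spikes I described above.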
If the 2.6 kernel really does have a way to distribute the I/O load more fairly (so that no one Linode can totally hog all of the I/O), then I am so all for it!
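For what it's worth, 2.6 lets you choose the I/O elevator at boot, and later 2.6 kernels expose runtime switching through sysfs. A sketch, with "hda" as a placeholder device name; which schedulers are actually available depends on the exact kernel version and config:

```shell
# Boot-time selection via the kernel command line (2.6):
#   kernel /vmlinuz-2.6.3-1 ro root=/dev/hda1 elevator=deadline
#
# Runtime inspection/selection on 2.6 kernels that support sysfs
# scheduler switching (the bracketed entry is the one in use):
cat /sys/block/hda/queue/scheduler
echo deadline > /sys/block/hda/queue/scheduler
```

So even if the default scheduler on the new kernel doesn't help, there may be room to experiment without rebuilding it.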