I'd be interested to see the I/O (and perhaps CPU) graphs from the dashboard... the current memory usage profile suggests a beefy periodic task that allocates a lot of memory and then goes away. There's some amount of "fairness" inherent in the disk scheduling, so a task like that will impact things for a short while even after it's gone.
And yes, I/O wait will make everything feel laggy, although it might be worth running "mtr" from your local machine to your Linode just to make sure you aren't dealing with two unrelated problems.
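Something like this is what I'd run; report mode gives you a summary you can paste into the thread (the hostname below is obviously a placeholder for your own Linode):

```shell
# 100 probes in report mode, with DNS lookups disabled so the hop list
# is stable. Sustained packet loss or high jitter at the final hop
# suggests a network problem rather than disk I/O.
mtr --report --report-cycles 100 --no-dns your-linode.example.com
```

Loss at intermediate hops alone usually just means a router is deprioritizing ICMP; it only matters if the loss carries through to the last hop.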
Here's some fairly optimistic sequential read testing done on one of my whipping boy nodes, performed roughly 30 seconds after an I/O-intensive task, and then again about a minute after that:
Code:
rtucker@framboise:~$ for i in xvda xvdb xvdc xvdd; do sudo hdparm -t /dev/$i; done
/dev/xvda:
Timing buffered disk reads: 176 MB in 3.26 seconds = 53.95 MB/sec
/dev/xvdb:
Timing buffered disk reads: 182 MB in 3.00 seconds = 60.66 MB/sec
/dev/xvdc:
Timing buffered disk reads: 66 MB in 3.05 seconds = 21.66 MB/sec
/dev/xvdd:
Timing buffered disk reads: 166 MB in 3.24 seconds = 51.27 MB/sec
rtucker@framboise:~$ for i in xvda xvdb xvdc xvdd; do sudo hdparm -t /dev/$i; done
/dev/xvda:
Timing buffered disk reads: 220 MB in 3.05 seconds = 72.05 MB/sec
/dev/xvdb:
Timing buffered disk reads: 236 MB in 3.16 seconds = 74.60 MB/sec
/dev/xvdc:
Timing buffered disk reads: 122 MB in 3.16 seconds = 38.57 MB/sec
/dev/xvdd:
Timing buffered disk reads: 218 MB in 3.05 seconds = 71.51 MB/sec
For comparison, here's what I get on a Linode that does absolutely nothing all day:
Code:
rtucker@sapling:~$ for i in xvda xvdb; do sudo hdparm -t /dev/$i; done
/dev/xvda:
Timing buffered disk reads: 464 MB in 3.01 seconds = 154.38 MB/sec
/dev/xvdb:
Timing buffered disk reads: 256 MB in 1.66 seconds = 154.59 MB/sec
tl;dr: just because it's quiet now doesn't mean it doesn't thrash the disk when you aren't looking.
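One way to catch it in the act is to log I/O wait over time and see if the spikes line up with a cron job or backup window. A rough sketch using vmstat (the "wa" column is the percentage of time the CPU sat waiting on disk; the field position is from current procps output, so double-check against the header on your box):

```shell
# vmstat's first sample is the average since boot, so take two samples
# and keep the second, which reflects activity right now. Field 16 is
# the "wa" (I/O wait) column in current procps releases.
vmstat 1 2 | tail -n 1 | awk '{print "iowait%:", $16}'
```

Drop that in a once-a-minute cron job with a timestamp prepended and you'll have a poor man's I/O wait graph to compare against the dashboard.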
_________________
Code:
/* TODO: need to add signature to posts */