Best-case, you'll get four cores' worth of CPU. Worst-case, you'll get a fraction of one CPU. Most Linodes use relatively little CPU, so the average case is pretty close to the best case -- I'm not sure anything close to the worst case has ever been seen in the real world. Disk and network I/O tend to be the more common bottlenecks.
Try running "pbzip2" or another parallelizing compression tool on a large test file while watching "htop"... outside of disk I/O, compression is a very CPU-intensive task, so you can probably get darned close to 400% most of the time.
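A quick way to try it (just a sketch -- assumes pbzip2 is installed, e.g. "apt-get install pbzip2", and that /tmp is writable):

```shell
# Generate a ~100 MB test file, then compress it with pbzip2 using 4
# processors. Watch htop in another terminal; the pbzip2 workers should
# push total CPU usage toward 400%.
if command -v pbzip2 >/dev/null 2>&1; then
    dd if=/dev/zero of=/tmp/testfile bs=1M count=100 2>/dev/null
    time pbzip2 -f -k -p4 /tmp/testfile   # -k keeps the input, -p4 uses 4 CPUs
    ls -lh /tmp/testfile.bz2
fi
```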
How are you testing your pages/second capacity? 20 pages/second sounds really low for serving static files. Here's what I get for a small static HTML page using ab from the server itself:
Code:
rtucker@framboise:~$ ab -n 10000 -c 100 http://hoopycat.com/
This is ApacheBench, Version 2.0.40-dev <$Revision: 1.146 $> apache-2.0
Server Software: lighttpd/1.4.19
Document Length: 2260 bytes
Time taken for tests: 1.417060 seconds
Requests per second: 7056.86 [#/sec] (mean)
Time per request: 14.171 [ms] (mean)
Time per request: 0.142 [ms] (mean, across all concurrent requests)
Transfer rate: 17387.41 [Kbytes/sec] received
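(Sanity check: ab's requests-per-second figure is just the request count divided by total test time, which matches the numbers above:)

```shell
# 10000 requests in 1.417060 seconds, as reported by ab above
awk 'BEGIN { printf "%.2f\n", 10000 / 1.417060 }'   # prints 7056.86
```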
Running it against my PHP-based blog (note the drop in -n and -c; it degrades considerably for -c larger than 10, which I'm OK with):
Code:
rtucker@framboise:~$ ab -n 1000 -c 10 http://blog.hoopycat.com/
Document Length: 88334 bytes
Time taken for tests: 3.907850 seconds
Requests per second: 255.90 [#/sec] (mean)
Time per request: 39.079 [ms] (mean)
Time per request: 3.908 [ms] (mean, across all concurrent requests)
Transfer rate: 22156.17 [Kbytes/sec] received
That's on a Linode 360 running Ubuntu 8.04, lighttpd, PHP via FastCGI (over TCP), xcache, and b2evolution 3.3.3.
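For reference, a lighttpd FastCGI-over-TCP setup like that looks roughly like this (a hypothetical sketch, not my exact config -- the port is illustrative):

```
server.modules += ( "mod_fastcgi" )
fastcgi.server = ( ".php" =>
    (( "host" => "127.0.0.1",            # PHP FastCGI spawned separately, listening on TCP
       "port" => 1026,                   # illustrative port
       "broken-scriptfilename" => "enable"
    ))
)
```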
Of course, running ab from my house is a lot worse and causes my NAT router to glow red.