Guspaz wrote:
If you're using fastcgi, you may want to look into lighttpd or nginx
unfortunately, drupal is still a kludge (mod_rewrite? more?) on both lighttpd and nginx ... _not_ using apache adds too many 'gotchas' (for now ...)
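for the record, the main 'kludge' on nginx is hand-maintaining drupal's clean-URL rewrites that .htaccess/mod_rewrite gives you for free on apache. roughly the sort of rules you end up with (paths, ports & fastcgi backend here are assumptions, not a tested config):
Code:

```
# /etc/nginx/sites-available/drupal -- minimal clean-URL sketch
server {
    listen 80;
    server_name my.site.com;
    root /var/www/drupal;

    location / {
        # stand-in for drupal's .htaccess mod_rewrite:
        # try the file, then the dir, then fall back to index.php
        try_files $uri $uri/ /index.php?q=$uri&$args;
    }

    location ~ \.php$ {
        # hand PHP off to a fastcgi backend (php-cgi/php-fpm assumed on :9000)
        fastcgi_pass 127.0.0.1:9000;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        include fastcgi_params;
    }
}
```

private files, imagecache paths etc. each need their own rules on top of this -- hence the 'gotchas'.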
JshWright wrote:
Far from encouraging, I'd call that very discouraging.
encouraging only in the sense i'd misunderstood linux caching :-/
JshWright wrote:
You're flushing out a lot of cached disk reads, which is only going to hurt performance later on.
You're only using 87 MB of RAM, might as well let the system use the rest of it to do something useful.
you're absolutely right here. i stopped using that "sync; echo 3 > /proc/sys/vm/drop_caches".
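(for anyone following along -- instead of flushing, you can just watch what the kernel is actually doing with that ram; a quick sketch:)
Code:

```shell
# see how much of "used" ram is really just reclaimable page cache
free -m
# the "-/+ buffers/cache" row is what apps actually hold;
# "Cached" pages get dropped automatically when memory is needed
grep -E '^(MemTotal|Buffers|Cached):' /proc/meminfo
```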
i stepped back and took a look at the _whole_ system, rather than just the (supposed) apc performance.
realized that Apache's 'fat', that mod_*cache & mod_ssl are sloooow, and learned that drupal's use of caching is underperforming ...
that said, i switched:
Code:
drupal 6.14 -> pressflow-6
installed drupal CacheRouter module, config'd for APC
installed Pound as front-end/proxy, installed SSL certs there
got rid of Apache mod_ssl, mod_cache, mod_file_cache & mod_disk_cache
installed Varnish as a caching reverse proxy
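fwiw, the varnish side of that stack is only a few lines of vcl -- a rough sketch of a setup like mine (ports, backend & cookie rule are assumptions, not a drop-in config):
Code:

```
# /etc/varnish/default.vcl -- varnish sits between pound and apache,
# fetching cache misses from apache on a local high port
backend apache {
    .host = "127.0.0.1";
    .port = "8080";   # apache moved off :80/:443
}

sub vcl_recv {
    # keep drupal/pressflow session cookies for logged-in users,
    # strip cookies otherwise so anonymous pages are cacheable
    if (req.http.Cookie !~ "SESS") {
        unset req.http.Cookie;
    }
}
```

pound then just terminates SSL on :443 and forwards plain http to varnish.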
with that config, i'm _starting_ to do some local hammering with httperf,
Code:
httperf --hog --http-version=1.1 \
--server=my.site.com --uri=/info.php \
--port=443 --ssl --ssl-no-reuse --ssl-ciphers=AES256-SHA \
--send-buffer=4096 --recv-buffer=16384 \
--num-calls=10 --num-conns=5000 --timeout=5 --rate=10
shows a decent CPU load,
Code:
top - 18:35:55 up 21:58, 3 users, load average: 2.29, 8.01, 13.52
Tasks: 102 total, 2 running, 100 sleeping, 0 stopped, 0 zombie
Cpu(s): 0.4%us, 0.3%sy, 0.0%ni, 97.8%id, 1.5%wa, 0.0%hi, 0.0%si, 0.0%st
Mem: 559976k total, 330100k used, 229876k free, 3028k buffers
Swap: 1048568k total, 274004k used, 774564k free, 56232k cached
mem's ok -- much better, relatively, than before,
Code:
free -m
total used free shared buffers cached
Mem: 546 322 224 0 2 55
-/+ buffers/cache: 263 282
Swap: 1023 267 756
and, httperf returns
Code:
Maximum connect burst length: 1
Total: connections 5000 requests 49970 replies 49966 test-duration 499.964 s
Connection rate: 10.0 conn/s (100.0 ms/conn, <=109 concurrent connections)
Connection time [ms]: min 50.6 avg 485.9 max 14368.1 median 67.5 stddev 1537.5
Connection time [ms]: connect 102.5
Connection length [replies/conn]: 9.995
Request rate: 99.9 req/s (10.0 ms/req)
Request size [B]: 89.0
Reply rate [replies/s]: min 34.2 avg 99.9 max 223.8 stddev 18.9 (99 samples)
Reply time [ms]: response 28.7 transfer 9.4
Reply size [B]: header 266.0 content 75975.0 footer 0.0 (total 76241.0)
Reply status: 1xx=0 2xx=49966 3xx=0 4xx=0 5xx=0
CPU time [s]: user 156.91 system 308.77 (user 31.4% system 61.8% total 93.1%)
Net I/O: 7449.6 KB/s (61.0*10^6 bps)
Errors: total 4 client-timo 4 socket-timo 0 connrefused 0 connreset 0
Errors: fd-unavail 0 addrunavail 0 ftab-full 0 other 0
this is @ access of a verbose php page (<?php phpinfo (); ?>), not reusing SSL session IDs, and hence renegotiating ...
if i switch to a 'lightweight' html page as the --uri target, i'm seeing request rates ~ 400 req/s.
with the 'old' apache config i'd had in place, i couldn't even get 10 req/s consistently ...
note that, atm, this is testing from the linode itself ... so performance is tainted by the load of the testing executable itself.
now, to figure out how the rates i _am_ seeing compare to 'norms' ... for linodes (somebody 'in here' has to have checked at some point ...), and drupal in general.
oh, and, also did some mysql tweaking:
put mysql's tmp dir on tmpfs (@ /etc/fstab)
Code:
tmpfs /tmp/mysqltmp tmpfs rw,gid=105,uid=105,size=128M,nosuid,nodev,noexec,nr_inodes=10k,mode=0700 0 0
monkeyed with (and continue to ...) cache, query, thread & buffer sizes
increased thread_concurrency from 2 -> 8 (i.e., 2x # of CPUs)
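for reference, the my.cnf side of those tweaks looks roughly like this -- values are what i'm currently experimenting with on this box, not recommendations:
Code:

```
# /etc/mysql/my.cnf (fragment)
[mysqld]
tmpdir              = /tmp/mysqltmp   # the tmpfs mount from /etc/fstab above
query_cache_type    = 1
query_cache_size    = 32M
thread_cache_size   = 16
thread_concurrency  = 8               # 2x number of cpus
key_buffer_size     = 32M
table_cache         = 256
```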
all together, a flash-heavy, dynamic php page that had taken ~ 15 secs to load is coming up in ~1-2 secs now ...
still more room to improve, i suspect ....