Linode Community Forums
PostPosted: Fri Jan 07, 2011 12:08 pm
Senior Newbie

Joined: Tue May 04, 2010 4:38 pm
Posts: 15
Website: http://telarapedia.com
Does anyone have any ideas about what I should look at?

http://telarapedia.com is the site, and there is a server-status page off the main domain if you are curious.

I've installed iotop to see if it could help but get this error:
Quote:

Could not run iotop as some of the requirements are not met:
- Python >= 2.5 for AF_NETLINK support: Found
- Linux >= 2.6.20 with I/O accounting support: Not found


I am on a 512MB VPS with an extra 90MB purchased short-term as well. And here is what free shows:
Code:
             total       used       free     shared    buffers     cached
Mem:        616672     595208      21464          0      11332     182236
-/+ buffers/cache:     401640     215032
Swap:       262136        136     262000


Edit: Here is some sql tuner script output (reloaded mysql about 10 min before, though):
Code:
perl mysqltuner.pl 

 >>  MySQLTuner 1.0.1 - Major Hayden <major@mhtx.net>
 >>  Bug reports, feature requests, and downloads at http://mysqltuner.com/
 >>  Run with '--help' for additional options and output filtering
Please enter your MySQL administrative login: root
Please enter your MySQL administrative password:

-------- General Statistics --------------------------------------------------
[--] Skipped version check for MySQLTuner script
[OK] Currently running supported MySQL version 5.0.75-0ubuntu10.5-log
[OK] Operating on 32-bit architecture with less than 2GB RAM

-------- Storage Engine Statistics -------------------------------------------
[--] Status: +Archive -BDB -Federated +InnoDB -ISAM -NDBCluster
[--] Data in MyISAM tables: 15M (Tables: 216)
[--] Data in InnoDB tables: 64M (Tables: 167)
[--] Data in MEMORY tables: 0B (Tables: 3)
[!!] Total fragmented tables: 22

-------- Performance Metrics -------------------------------------------------
[--] Up for: 3m 53s (11K q [47.798 qps], 644 conn, TX: 12M, RX: 1M)
[--] Reads / Writes: 100% / 0%
[--] Total buffers: 66.0M global + 2.6M per thread (100 max threads)
[OK] Maximum possible memory usage: 328.5M (54% of installed RAM)
[OK] Slow queries: 0% (0/11K)
[OK] Highest usage of available connections: 4% (4/100)
[OK] Key buffer size / total MyISAM indexes: 16.0M/5.4M
[OK] Key buffer hit rate: 97.3% (620 cached / 17 reads)
[OK] Query cache efficiency: 35.0% (3K cached / 8K selects)
[OK] Query cache prunes per day: 0
[OK] Sorts requiring temporary tables: 0% (0 temp sorts / 2 sorts)
[OK] Temporary tables created on disk: 4% (21 on disk / 501 total)
[OK] Thread cache hit rate: 99% (4 created / 644 connections)
[!!] Table cache hit rate: 2% (64 open / 3K opened)
[OK] Open file limit used: 1% (15/1K)
[OK] Table locks acquired immediately: 100% (6K immediate / 6K locks)
[!!] InnoDB data size / buffer pool: 64.9M/8.0M

-------- Recommendations -----------------------------------------------------
General recommendations:
    Run OPTIMIZE TABLE to defragment tables for better performance
    MySQL started within last 24 hours - recommendations may be inaccurate
    Enable the slow query log to troubleshoot bad queries
    Increase table_cache gradually to avoid file descriptor limits
Variables to adjust:
    table_cache (> 64)
    innodb_buffer_pool_size (>= 64M)

The site has basically become totally unusable. :(


PostPosted: Fri Jan 07, 2011 7:35 pm
Senior Newbie

Joined: Wed Dec 29, 2010 5:39 pm
Posts: 12
MediaWiki is pretty heavy on the RAM and CPU.

you can, however, enable some sort of cache - see MediaWiki's official manual.

you can choose to deploy the file cache (easier and faster, though it doesn't work for logged-in users) and/or memcached (for the busiest sites), but in any case remember to install APC: if you're on Ubuntu or Debian, it's enough to do "apt-get install php-apc" and it works automagically.
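a rough sketch of what that might look like in LocalSettings.php - the cache directory path is a placeholder, and the memcached lines show the alternative for busier setups:

```php
<?php
// LocalSettings.php sketch (1.16-era settings; the cache path is a placeholder)

// file cache: serve pre-generated HTML to anonymous visitors
$wgUseFileCache       = true;
$wgFileCacheDirectory = "/var/cache/mediawiki";   // keep this outside the web root

// object cache via APC (after "apt-get install php-apc")
$wgMainCacheType = CACHE_ACCEL;

// or memcached instead, for the busiest sites:
// $wgMainCacheType    = CACHE_MEMCACHED;
// $wgMemCachedServers = array( "127.0.0.1:11211" );
```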

you can also try giving nginx+php-fpm a shot - or at least try php-fpm with Apache's worker MPM.

for the mysql part, you should raise innodb_buffer_pool_size to at least the size of your InnoDB tables (which is 64.9MB, so set it to 70M or something). increase table_cache too, as suggested by mysqltuner.
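as a sketch, the corresponding my.cnf fragment, with values taken from the mysqltuner output above:

```ini
# /etc/mysql/my.cnf fragment - values per the mysqltuner output above
[mysqld]
innodb_buffer_pool_size = 70M   # covers the 64.9M of InnoDB data
table_cache             = 128   # raise gradually from the default 64
```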

to decrease the size of your tables, run cron scripts that optimize them at least once a week (or even every day, late at night), plus MediaWiki's maintenance script compressOld.php (you can find docs on MediaWiki's wiki :D).
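something like these crontab entries would cover it - the paths and credentials are placeholders, mysqlcheck -o is one way to run OPTIMIZE TABLE across all databases, and the compressOld.php location varies by MediaWiki version:

```
# weekly table optimization + old-revision compression (placeholder paths/credentials)
30 4 * * 0  mysqlcheck -o --all-databases -u root -pSECRET
45 4 * * 0  php /path/to/mediawiki/maintenance/storage/compressOld.php
```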

also, install htop to monitor the situation of your server.

that should do the trick ;)

_________________
nope


PostPosted: Sun Jan 09, 2011 9:06 am
Senior Member

Joined: Sun Aug 02, 2009 1:32 pm
Posts: 222
Website: https://www.barkerjr.net
Location: Connecticut, USA
I would take a look at your extensions. It's possible one or more of them is slow. http://telarapedia.com/wiki/Special:Version


PostPosted: Fri Jan 21, 2011 5:29 pm
Senior Newbie

Joined: Tue May 04, 2010 4:38 pm
Posts: 15
Website: http://telarapedia.com
Thanks for the help and sorry for the belated reply!

I already had APC installed, and the problem seemed to go away when I upgraded to a larger VPS. However, now I have another issue: when people hit 'save page', it is sometimes instantaneous and sometimes takes up to 30 seconds (which is obviously insane). I'd imagine this would be MySQL-related?

I also have NGINX installed (I use it for some sites anyway), but unfortunately I can't find any good recommendations on tuning it for a busy MediaWiki site... :(


PostPosted: Sat Jan 22, 2011 8:09 pm
Junior Member

Joined: Thu Jan 07, 2010 8:12 pm
Posts: 21
MediaWiki.org's performance tuning page and the associated guides (Aaron Schulz's and Illmari Karonen's are the most useful) have a whole bunch of things you can do.

BarkerJr is probably on to something. In particular, I think the Semantic MediaWiki extension tends to require a lot of things to be regenerated when edits are saved. Set up the job queue so that jobs get run by a separate process, rather than on every edit.

You should also take a look at whether your server's various components are getting the resources they need. As mejicat suggested, you should adjust MySQL's settings as mysqltuner suggests. You should also check whether APC has enough memory. There's an APC visualizer extension for MediaWiki that makes it easy to check.

If nothing else works, you can enable profiling for slow page loads to see where the bottleneck is. I don't think that'll be necessary, though.


PostPosted: Sat Jan 22, 2011 8:44 pm
Senior Newbie

Joined: Wed Dec 29, 2010 5:39 pm
Posts: 12
yeah, by default MediaWiki runs a "job" each time a page is requested, which is not really the best thing ever - especially if you have a lot of heavily used templates. you should set a cron job for "php /path/to/mediawiki/maintenance/runJobs.php" every six hours or so, so users loading pages don't have to wait while jobs execute. I personally set $wgJobRunRate to 0.01 (one job every 100 requests, reasonable enough) and run runJobs.php once every night to clear what's left in the queue.
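for reference, the setup described above might look like this (the install path is a placeholder; # comments work in both PHP and crontab):

```
# LocalSettings.php: run roughly one queued job per 100 requests
$wgJobRunRate = 0.01;

# crontab: drain whatever is left in the queue nightly
0 3 * * *  php /path/to/mediawiki/maintenance/runJobs.php
```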

as for nginx, there's really not much to set up except worker_processes, which should be 4 in nginx.conf (you have 4 CPUs). since nginx handles just the static stuff, i'd look into your php configuration instead - for example, set php.ini's memory_limit to 64M or even 32M instead of the default 128M. if you're using php-fpm, check that the maximum number of spawned children is reasonable (i'd say 20 or so).
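as a sketch, the knobs mentioned above, one fragment per file (the php-fpm directive name depends on your php-fpm version; exact comment syntax differs between the files):

```
# nginx.conf - one worker per core (this box has 4 CPUs)
worker_processes  4;

# php.ini - trim per-process memory from the 128M default
memory_limit = 64M

# php-fpm pool config - cap spawned children so PHP can't exhaust RAM
pm.max_children = 20
```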

if you haven't done so already, you should give mediawiki's file cache a try. it really helps with the busiest sites since not-logged-in users are served static, pre-generated html files. if you can't (or don't want to) use file cache, memcached is a good option and it works for registered users too (I personally use both).



PostPosted: Sun Jan 23, 2011 5:41 pm
Senior Newbie

Joined: Tue May 04, 2010 4:38 pm
Posts: 15
Website: http://telarapedia.com
Thanks, guys.

I've changed the job queue setup and it's still slow most of the time on save. Sometimes it takes so long that things time out, but often it's around 20 seconds. Also, I've done most of the tweaks recommended, but honestly the wiki is lightning fast for everything except saving. :(

Would memcache help with that? I'd need to go up from my current 1088MB VPS for that, obviously, and the wiki is fast in all other conditions so it seems a bit overkill?

Edit: It's so bad on saves now that people get 504 timeout errors from NGINX quite often.


PostPosted: Sun Jan 23, 2011 8:52 pm
Junior Member

Joined: Thu Jan 07, 2010 8:12 pm
Posts: 21
Cio wrote:
Would memcache help with that? I'd need to go up from my current 1088MB VPS for that

No, you wouldn't.

Cio wrote:
Also, I've done most of the tweaks recommended

Really? You tried using php-fpm? You checked that APC wasn't getting full? You bumped up the MySQL settings as recommended? (You enabled profiling? :))


PostPosted: Mon Jan 24, 2011 1:06 am
Senior Member

Joined: Sat Jun 05, 2004 12:49 am
Posts: 333
Cio wrote:
Thanks, guys.

I've changed the job queue thing and it's still slow most of the time on save. Sometimes so long that things time out, but often it's like 20 seconds. Also, I've done most of the tweaks recommended, but honestly the wiki is lightning fast for everything except saving. :(

Would memcache help with that? I'd need to go up from my current 1088MB VPS for that, obviously, and the wiki is fast in all other conditions so it seems a bit overkill?

Edit: It's so bad on saves now that people get 504 timeout errors from NGINX quite often.


Then you need to enable profiling and figure out where all the time is being spent.


PostPosted: Mon Jan 24, 2011 4:48 am
Senior Newbie

Joined: Tue May 04, 2010 4:38 pm
Posts: 15
Website: http://telarapedia.com
Thanks, all. I'll try profiling tonight and see if I can wade through the cryptic page on setting it up. :)

Yes, APC is totally full. I can bump that up (it has two 30MB segments at the moment). I have already made the MySQL changes, though.

I'm using php5-cgi - I guess I should look at compiling and using php-fpm? (I'm on Ubuntu 9.04, and there is no dedicated package.)


PostPosted: Mon Jan 24, 2011 9:03 am
Senior Member

Joined: Sat Aug 30, 2008 1:55 pm
Posts: 1739
Location: Rochester, New York
As a heads up, your Ubuntu version is no longer supported as of October of last year. This means that you haven't received any security updates in about three months.

I would strongly advise you to either upgrade to 9.10 then to 10.04 LTS, or build a new instance on 10.04 LTS, before doing any substantial work on your system.

_________________
Code:
/* TODO: need to add signature to posts */


PostPosted: Mon Jan 24, 2011 2:03 pm
Senior Newbie

Joined: Tue May 04, 2010 4:38 pm
Posts: 15
Website: http://telarapedia.com
hoopycat wrote:
As a heads up, your Ubuntu version is no longer supported as of October of last year. This means that you haven't received any security updates in about three months.

I would strongly advise you to either upgrade to 9.10 then to 10.04 LTS, or build a new instance on 10.04 LTS, before doing any substantial work on your system.


Yes, we are doing what you described. :) We're going to get another linode on 10.04 LTS, set up php-fpm and all the other goodies properly from the start, then move the wiki over to the new linode and turn off the old one.

Has anyone used the file cache, too? I had it enabled but never set up the directories. Just to confirm: every time someone changes a page, MediaWiki will rebuild the file cache for that file? I have a feeling it hates NGINX, as it isn't used even when I enable it and specify the directory (and give the webserver rights to it and chmod it 0777).


PostPosted: Mon Jan 24, 2011 5:29 pm
Junior Member

Joined: Thu Jan 07, 2010 8:12 pm
Posts: 21
Cio wrote:
Just to confirm, every time someone changes a page mediawiki will rebuild the file cache for that file?

An edit to a page will invalidate the cache file for that page, but I don't think it will actually be recreated until a logged-out user views it. (So it's not going to make edit saves take longer, if you were afraid of that.)

Cio wrote:
I have a feeling it hates NGINX as it isn't used even when I enable it and specify the directory (and give the webserver rights to it and 0777 it).

Are you visiting it while logged out (it won't generate cache files until someone does)? What happens when you run rebuildFileCache.php?


PostPosted: Wed Jan 26, 2011 2:10 pm
Senior Newbie

Joined: Wed Dec 29, 2010 5:39 pm
Posts: 12
yeah, it doesn't rebuild the cache right away - it waits for the next visitor to view the page.

also, in 1.16 (i think - it didn't do that before) it invalidates the cache if a template used by the page is edited, and stuff like that. so it's really the best option for CPU, and it guarantees your pages are almost always up to date - the only exception i can think of is if you edit the skin template, but then it's enough to truncate the cache table and/or delete all the files in the cache directory - which you should put outside of the public, world-viewable directory, by the way.



PostPosted: Wed Jan 26, 2011 4:37 pm
Senior Newbie

Joined: Tue May 04, 2010 4:38 pm
Posts: 15
Website: http://telarapedia.com
mejicat wrote:
yeah, it doesn't rebuild the cache right away - it waits for the next visitor to view the page.

also, in 1.16 (i think - it didn't do that before) it invalidates the cache if a template used by the page is edited, and stuff like that. so it's really the best option for CPU, and it guarantees your pages are almost always up to date - the only exception i can think of is if you edit the skin template, but then it's enough to truncate the cache table and/or delete all the files in the cache directory - which you should put outside of the public, world-viewable directory, by the way.

Yeah, but I get like 20k uniques a day, most of whom are not logged in, so it must be a different problem.

I'll try the rebuildFileCache.php command, but my VPS is getting smashed at the moment with like 200% CPU usage, so I want to wait until it isn't going to burn up first. :D


Powered by phpBB® Forum Software © phpBB Group