Linode Forum
PostPosted: Mon Dec 01, 2014 6:30 pm 
Senior Newbie

Joined: Tue Jun 07, 2011 5:25 pm
Posts: 14
If I am not mistaken, you have not described the web server setup. If this is a dedicated DB VPS, then the web server load would not be relevant.

If this were my situation, I would reduce max_connections to 150 or maybe even 100 and see what happens. That will reduce the memory pressure from MySQL. It is possible that, due to the constrained memory, it is actually the web server (assuming it is on the same VPS) that is causing the swapping.

What I find interesting is that both mysqltuner and mysqlreport show a value of 52 for threads/connections. This suggests to me that upstream you have a limit of 52 requests being passed to MySQL at any point in time.
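For anyone following along, a quick way to compare the configured connection ceiling with what the server has actually used (assuming you can reach it with the mysql client) is:

```sql
-- Configured ceiling:
SHOW VARIABLES LIKE 'max_connections';

-- High-water mark since the server started; if this stays well
-- below max_connections, lowering the ceiling frees memory safely:
SHOW GLOBAL STATUS LIKE 'Max_used_connections';

-- Connections open right now:
SHOW GLOBAL STATUS LIKE 'Threads_connected';
```

If Max_used_connections has never approached the configured limit, the per-connection buffers reserved for those unused slots are memory you can claw back.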


PostPosted: Mon Dec 01, 2014 8:17 pm 
Senior Member

Joined: Sun Mar 07, 2010 7:47 pm
Posts: 1970
Website: http://www.rwky.net
Location: Earth
A load average of 1 isn't a problem; you can have a load average of 30 and still have a responsive system. High load just means processes are waiting for something. Ideally, though, you should keep the load below the number of cores for a MySQL server, so in the case of a 2048 that'd be 2.

So by problems I mean when it starts swapping.
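A minimal way to spot that swapping on a standard Linux box, with no extra tools, is to read the kernel's own counters (a sketch; which thresholds matter is up to you):

```shell
# SwapTotal/SwapFree come straight from the kernel, so this works
# even where vmstat or free aren't installed:
grep -E '^Swap(Total|Free)' /proc/meminfo

# For live monitoring, the si/so columns of `vmstat 5` show pages
# swapped in/out per interval; sustained non-zero values under load
# are the "problems" referred to above.
```

A box that has merely *touched* swap once isn't in trouble; sustained swap-in/swap-out traffic while MySQL is serving queries is.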

If you've removed the database, restart mysql (or reboot) and it'll clear the buffer pool.

_________________
Paid support
How to ask for help
1. Give details of your problem
2. Post any errors
3. Post relevant logs.
4. Don't hide details (e.g. your domain); it just makes things harder
5. Be polite or you'll be eaten by a grue


PostPosted: Tue Dec 02, 2014 8:06 am 
Senior Member

Joined: Tue Oct 20, 2009 9:02 am
Posts: 56
Hi both,

Since removing the extra database from the MySQL instance, mysqltuner said I had 1.9 GB of data and that ideally innodb_buffer_pool_size should be set to or above that number (which in itself is really weird if you ask me, since that in effect would make it an in-memory DB). Long story short, I resized my Linode to a 4096 and set innodb_buffer_pool_size to 2.4 GB, i.e. 500 MB above what mysqltuner said the data size was.
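For reference, the relevant part of my.cnf would look something like this (the figures here are specific to my data size; treat them as an example, not a recommendation):

```ini
[mysqld]
# Buffer pool sized ~500 MB above the reported 1.9 GB data size.
innodb_buffer_pool_size = 2400M
# Note: the buffer pool is not MySQL's whole footprint; per-connection
# buffers, the key buffer, etc. come on top of it.
max_connections = 150
```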

The result so far: MySQL ate 2.1 GB for the buffer pool almost instantaneously after reboot and is slowly (my traffic hours haven't started yet, as the Americas are just now waking up) eating more and more memory. That said, I don't know if it will stop or what will happen.

Free, right now, gives this:

             total       used       free     shared    buffers     cached
Mem:          4002       3450        552          0         69        458
-/+ buffers/cache:       2922       1080
Swap:          255          0        255

ps -aux gives this:

USER      PID %CPU %MEM     VSZ     RSS TTY  STAT START  TIME COMMAND
mysql    5083 21.5 70.7 3714472 2898088 ?    Sl   06:59 78:18 /usr/sbin/mysql

So, 70% of the memory (and counting) is now eaten by MySQL.

As for the webserver: I'm running the webserver on another Linode altogether, so this is a dedicated MySQL server. I know I'm sounding like a broken record here, but this setup (with the same data!) worked beautifully on an old-style Linode 1024. The irony...

Anyway, I'm considering moving back to MyISAM if this move to 4096 isn't enough.

About setting the max open connections, I don't know why that would make a difference. The max isn't the same as the actual, is it? As far as I can see, the actuals have never been higher than 100 or thereabouts, so quite in the vicinity of what you suggested.

In any case, I'm very happy that you're still trying to help out. It's much appreciated.


PostPosted: Wed Dec 03, 2014 2:37 am 
Senior Member

Joined: Tue Oct 20, 2009 9:02 am
Posts: 56
Just for completeness, in case someone else is looking at this thread in the future with the same kind of problems.

The server didn't experience any problems this time, i.e. the 4096 seems to do the trick, at least after 24 hours. Time will tell if it holds.
What I ended up with was an innodb_buffer_pool_size slightly bigger than the size of the data in the whole database, leaving 1.6 GB for the rest of MySQL to operate in, on 4 virtual CPU cores.
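For anyone wanting to size the buffer pool the same way without mysqltuner, the data-plus-index size can be estimated directly (a sketch; information_schema figures are approximate for InnoDB):

```sql
-- Approximate InnoDB data + index size, in GB:
SELECT ROUND(SUM(data_length + index_length) / 1024 / 1024 / 1024, 2) AS gb
FROM information_schema.tables
WHERE engine = 'InnoDB';
```

The usual rule of thumb is to fit the working set, not necessarily the whole database, but with 1.9 GB of data on a 4 GB box, covering all of it is feasible.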

Traffic to the server has been peaking at 100 queries/sec for a couple of hours today, with a 24-hour average of 53 queries/second.
I'm not sure how well it would hold up under higher traffic than that, and your mileage will of course vary with the kind of queries you run, index usage, etc.
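The queries/second figure above can be approximated from the server's own counters (this gives the average since startup, not a rolling window):

```sql
-- Questions / Uptime gives average queries per second since startup:
SHOW GLOBAL STATUS WHERE Variable_name IN ('Questions', 'Uptime');
```

Divide the first number by the second; sampling twice a few minutes apart and differencing gives a current rate instead of a lifetime average.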

One thing that is quite amazing in all of this is how little _certain_ information is available on how to configure MySQL properly. Sure, there's the MySQL documentation, but honestly, who can really say they can make heads or tails of it? Also, it feels like it was written 10 years ago. I get the feeling that MySQL is on the skids. It may very well be time to look elsewhere for storage options.


PostPosted: Wed Dec 03, 2014 10:22 am 
Senior Member

Joined: Thu Feb 20, 2014 5:06 pm
Posts: 58
adergaard wrote:
One thing that is quite amazing in all of this is how little _certain_ information is available on how to configure MySQL properly. Sure, there's the MySQL documentation, but honestly, who can really say they can make heads or tails of it?


I highly recommend the book "High Performance MySQL". It fills in a lot of the holes in the documentation when it comes to performance tuning.



Powered by phpBB® Forum Software © phpBB Group