Linode Community Forums
PostPosted: Sun Aug 14, 2011 8:01 pm 
Senior Member

Joined: Wed May 13, 2009 1:32 pm
Posts: 737
Location: Italy
As title.

I have a Linode 512 under light load; it's used to serve some data to cell phones, nothing heavy.
Sometimes, though, the load can increase considerably.

In normal conditions, this is the free -m output:

free -m
                    total   used   free   shared   buffers   cached
Mem:                  424    335     89        0        33      166
-/+ buffers/cache:            134    290
Swap:                 255     10    245


The VPS is running CentOS 6 with the latest paravirt kernel from Linode.

Services running:
- LAMP + phpMyAdmin
- Postfix, Dovecot, Squirrelmail
- Cacti for server monitoring (SNMP)
- fail2ban

How should I set these parameters?

<IfModule prefork.c>
StartServers 3
MinSpareServers 3
MaxSpareServers 6
ServerLimit 30
MaxClients 30
MaxRequestsPerChild 2000
</IfModule>


<IfModule worker.c>
StartServers 3
MaxClients 150
MinSpareThreads 25
MaxSpareThreads 75
ThreadsPerChild 25
MaxRequestsPerChild 0
</IfModule>


Last edited by sblantipodi on Sun Aug 14, 2011 8:25 pm, edited 1 time in total.

PostPosted: Sun Aug 14, 2011 8:08 pm
Senior Member

Joined: Fri May 02, 2008 8:44 pm
Posts: 1121
Oh, not again! :twisted:

Can't tell exactly, because we don't know what causes the load to "increase considerably". It could be RAM, it could be a rogue script that eats up CPU cycles, or it could be a disk-heavy operation.

But MaxClients 100 definitely seems too high for a server with PHP on it.

Try this:

ServerLimit 15
MaxClients 15

That should leave you with plenty of RAM for other things. If you still get slowdowns after that, RAM might not be the culprit.
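As a rough sanity check, you can estimate a safe MaxClients by dividing the RAM you're willing to give Apache by the resident size of one mod_php child. A sketch (both figures below are assumptions for a Linode 512; measure your own children with `ps -o rss= -C httpd`):

```python
# Rough MaxClients estimate for Apache prefork + mod_php.
# Both numbers are assumptions -- substitute your own measurements.
apache_budget_mb = 250  # RAM you can spare for Apache (leave room for MySQL etc.)
per_child_mb = 15       # typical RSS of one mod_php child (assumed)

max_clients = apache_budget_mb // per_child_mb
print(max_clients)  # -> 16, the same ballpark as ServerLimit 15
```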

The "worker.c" part doesn't apply, since you're using the prefork MPM.


PostPosted: Sun Aug 14, 2011 8:28 pm
Senior Member

Joined: Wed May 13, 2009 1:32 pm
Posts: 737
Location: Italy
I've edited the first post with my actual settings; they are:

<IfModule prefork.c>
StartServers 3
MinSpareServers 3
MaxSpareServers 6
ServerLimit 30
MaxClients 30
MaxRequestsPerChild 2000
</IfModule>

The considerable load increase is produced by customers logging in to ask for an update when one is ready.

What about MaxRequestsPerChild ?


PostPosted: Sun Aug 14, 2011 9:03 pm
Senior Member

Joined: Fri May 02, 2008 8:44 pm
Posts: 1121
sblantipodi wrote:
MaxRequestsPerChild ?


MaxRequestsPerChild usually doesn't matter unless there's a memory leak in PHP itself. PHP used to have annoying memory leaks in the past. But nowadays, you can usually set MaxRequestsPerChild to any reasonably high value (1000+) or even disable it (0) without any ill effect.

sblantipodi wrote:
The load increase considerably is done by heavy load produced by customers that log in to ask for an update when the update is ready.


Can you describe the "update" in some detail? How is it generated? PHP script accessing a database? Does it involve any image processing? What kind of symptoms does it cause when you have high load? Does the server become slower or completely inaccessible?

You said you had Cacti running on your server. What does the RAM, CPU, disk I/O, and load average look like when the server load increases? You can upload images to sites like imgur and post links here. A list of processes (the "top" command) would also help.


PostPosted: Mon Aug 15, 2011 8:43 am
Senior Member

Joined: Wed May 13, 2009 1:32 pm
Posts: 737
Location: Italy
hybinet wrote:
sblantipodi wrote:
MaxRequestsPerChild ?


MaxRequestsPerChild usually doesn't matter unless there's a memory leak in PHP itself. PHP used to have annoying memory leaks in the past. But nowadays, you can usually set MaxRequestsPerChild to any reasonably high value (1000+) or even disable it (0) without any ill effect.

sblantipodi wrote:
The load increase considerably is done by heavy load produced by customers that log in to ask for an update when the update is ready.


Can you describe the "update" in some detail? How is it generated? PHP script accessing a database? Does it involve any image processing? What kind of symptoms does it cause when you have high load? Does the server become slower or completely inaccessible?

You said you had Cacti running on your server. What does the RAM, CPU, disk I/O, and load average look like when the server load increases? You can upload images to sites like imgur and post links here. A list of processes (the "top" command) would also help.


We sell mobile software... when an updated version of the software is released, our Linode sends thousands of emails informing customers about the news.

The server completes the mail-sending task in about 15 minutes.
During this heavy workload the server remains accessible without problems, with only a little slowdown.

Generally, once customers receive the email, they contact our server to download the updated version (just a 1 MB download), and then a PHP/Java script runs to check the license.

If I set MaxClients to 20, when I restart Apache after an upgrade, Apache informs me that it received more than 20 requests, but the server remains stable.
Honestly, I don't know what a server crash is since I've been on Linode; I've never experienced one. Knock on wood, as the English say.

I can tell that the requests are lightweight, since I've handled 30 clients at the same time without any server issues, just a little slowdown but nothing to worry about.
This Linode rocks.

I'm trying to understand why MaxClients should be set as low as you suggest on such a powerful toy as a Linode 512.
I know some people run more resource-intensive scripts, but I think it's better to work on the script instead of lowering that parameter too much.

The point is that I can't understand why Linux users generally suggest lowering that parameter so much.
Some years ago people worked magic with 128 MB of RAM or less; why can't we handle more than 15 clients with 512 MB of RAM?

Obviously every case is different and I can't speak for everyone.


PostPosted: Mon Aug 15, 2011 10:11 am
Senior Member

Joined: Fri Jan 09, 2009 5:32 pm
Posts: 634
What are your keepalive settings? If the timeout is too high, it can let a client hold a connection longer than it should. Some here recommend turning keepalive off completely; I tend more toward setting the timeout to 1 or 2 seconds. The default is something like 15 seconds, which is too high.
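For reference, the relevant httpd.conf lines would look something like this (a sketch; the directives are standard Apache, and the 2-second figure is just the suggestion above):

```apache
KeepAlive On
KeepAliveTimeout 2        # default is 15; that holds a prefork slot far too long
MaxKeepAliveRequests 100
```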


PostPosted: Mon Aug 15, 2011 10:15 am
Senior Member

Joined: Wed May 13, 2009 1:32 pm
Posts: 737
Location: Italy
glg wrote:
What are your keepalive settings? If the timeout is too high, it can let a client hold a connection longer than it should. Some here recommend turning keepalive off completely; I tend more toward setting the timeout to 1 or 2 seconds. The default is something like 15 seconds, which is too high.


I have 20 seconds. I need it so that all mobile phones can work well with my server; unfortunately, not every country has a good mobile network, America included.


PostPosted: Mon Aug 15, 2011 11:03 am
Senior Member

Joined: Fri Jan 09, 2009 5:32 pm
Posts: 634
sblantipodi wrote:
glg wrote:
What are your keepalive settings? If the timeout is too high, it can let a client hold a connection longer than it should. Some here recommend turning keepalive off completely; I tend more toward setting the timeout to 1 or 2 seconds. The default is something like 15 seconds, which is too high.


I have 20 seconds. I need it so that all mobile phones can work well with my server; unfortunately, not every country has a good mobile network, America included.


Keepalive is not a connection timeout, it's a timeout for how long a client can send additional requests on the same connection. You'll be better off turning that down or even off.


PostPosted: Mon Aug 15, 2011 11:20 am
Senior Member

Joined: Wed May 13, 2009 1:32 pm
Posts: 737
Location: Italy
glg wrote:
Keepalive is not a connection timeout, it's a timeout for how long a client can send additional requests on the same connection. You'll be better off turning that down or even off.


Thanks for the suggestion. May I ask the reason behind it?
I'd like to understand the thinking behind this tip.

Thanks.


PostPosted: Mon Aug 15, 2011 12:10 pm
Senior Member

Joined: Sat Aug 30, 2008 1:55 pm
Posts: 1739
Location: Rochester, New York
sblantipodi wrote:
I am trying to understand why max clients should be set as low as you suggest with such a powerful toy like a linode 512.
I know that someone is using more "resource intensive" script but I think that its better to work on the script instead of lowering that parameter too much.


The way Apache and PHP are typically deployed together is rather unusual. Instead of having a separate set of PHP interpreters to handle requests that need it, the PHP interpreter is embedded into the web server itself (as mod_php). This makes installation quite a bit easier, but there are two very big downsides.

First, PHP does not handle multithreading very well. This means that Apache needs to have a separate process for each request, instead of just being able to instantiate a thread. This is heavy, and means that the number of simultaneous requests must be set lower than you would with other setups.

Secondly, because the nature of the request is not known until after it is accepted, every process must be prepared for anything. This means, at a minimum, a PHP interpreter, along with any libraries that get loaded over its lifetime. This makes things quite heavy, especially when frameworks or heavy applications are involved. If you have, say, Drupal and WordPress, you get twice the whammy, since it doesn't unload everything between requests.

The "stereotypical" Apache+PHP problem is running out of memory because the default MaxClients is 150. Traffic gets heavier than usual for a moment, the server starts swapping, requests take longer to process, and Apache reacts to this by spawning more processes. MaxClients is a safety valve, and setting it very low will immediately stop the bleeding. You can increase it, of course, as your situation allows.
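The arithmetic behind that safety valve is worth writing down. A sketch, assuming a 15 MB resident size per mod_php child (your own figure may differ):

```python
# Worst-case Apache footprint = MaxClients * per-child RSS.
per_child_mb = 15   # assumed RSS of one mod_php child
ram_mb = 512        # total RAM on a Linode 512

for max_clients in (15, 30, 150):
    worst_case_mb = max_clients * per_child_mb
    verdict = "fits" if worst_case_mb < ram_mb else "swaps"
    print(f"MaxClients {max_clients:>3}: up to {worst_case_mb:>4} MB ({verdict})")
```

With the default MaxClients of 150, the worst case is 2250 MB on a 512 MB box, which is exactly the swap spiral described above.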

Quote:
The point is that I cannot understand why generally linux users suggest to lower that parameter too much.
Some years ago people do magicians with 128MB of RAM or less, why can't we manage more than 15 clients with 512MB of ram?


The applications we run have become larger over time, and since the "default" is to integrate PHP into Apache, this has had a direct effect on the amount of RAM required per simultaneous connection. We also have more objects on each page load -- I just counted 24 on one of the sites $EMPLOYER has, ranging from jQuery to video thumbnails to stylesheets to ads. So, we have more RAM, but we've found new, innovative ways to use it.

Now, a really good question for the history department: why did we go to mod_php in the first place? In The Beginning, when computers were physically large, relatively rare, and slow, we did dynamic content by configuring the web server to spawn a process and run a script. At the end of the request, the script would terminate and, ta-da, everything it printed would be returned to the user. This was fine from the web server's standpoint, but... well, it's slow, even on today's equipment. I timed it, and it took 6.8 seconds to handle a relatively simple view of the above-mentioned site on my workstation. Sure, it only took 0.9 seconds the second time (hooray for caching), but it only takes 350 ms to do this same request against the production web server, and at least 42 ms of that is network delay.

So, the trend was to stuff interpreters into the web server. This was a pretty clever idea, since it doesn't involve any operational changes: there's no additional daemons to run, and the web server can still do what it always did, except instead of spawning /usr/bin/php when it sees a .php file, it can just pass it off to its built-in PHP interpreter. Downside is that it now has a built-in PHP interpreter, which it has to carry around like a millstone when handling any request, no matter how trivial.

Today, of course, the way to handle boatloads of traffic is to take a little bit from both approaches. With something like FastCGI, the web server does not have a built-in PHP interpreter; instead, when it encounters a .php file, it proxies the request to another server, which does have a built-in PHP interpreter. In your situation, you wouldn't have a bulky PHP interpreter sitting around idle while someone's smartphone downloads a 1 MB file over SlothWireless's ⅓G network, or while a browser keeps an idle network connection open in case the user requests another page (this is what a keepalive is, basically).

Somewhat like zombo.com, you can do anything with 512 MB of RAM, anything at all. The only limit is the resources required per request.

_________________
Code:
/* TODO: need to add signature to posts */


PostPosted: Mon Aug 15, 2011 12:37 pm
Senior Member

Joined: Wed May 13, 2009 1:32 pm
Posts: 737
Location: Italy
It makes sense, but I need that idle time, because setting up a connection on a cell phone takes longer than transferring 200 KB over GPRS.
Opening a new connection on every request isn't good in this case.


PostPosted: Mon Aug 15, 2011 2:03 pm
Senior Member

Joined: Sat Aug 30, 2008 1:55 pm
Posts: 1739
Location: Rochester, New York
If you stay with Apache+mpm-prefork+mod_php for handling all HTTP requests, you will need to balance the performance benefits of persistent connections vs. the ability to handle more requests per second. There's no right answer.



PostPosted: Mon Aug 15, 2011 2:32 pm
Senior Member

Joined: Wed May 13, 2009 1:32 pm
Posts: 737
Location: Italy
OK, I think I'm good with MaxClients at 30.
I've never had problems this way; I'm just looking for better tuning.

Things will probably improve further when I boot with the latest 3.0 kernel, since I'm using 2.6.39.1, which has some memory problems on 64-bit.


PostPosted: Mon Aug 15, 2011 3:08 pm
Senior Member

Joined: Sat Aug 30, 2008 1:55 pm
Posts: 1739
Location: Rochester, New York
With the extra RAM, you might be able to bump it up to 31 or maybe even 32. :-)



PostPosted: Mon Aug 15, 2011 7:01 pm
Senior Member

Joined: Fri May 02, 2008 8:44 pm
Posts: 1121
sblantipodi wrote:
glg wrote:
Keepalive is not a connection timeout, it's a timeout for how long a client can send additional requests on the same connection. You'll be better off turning that down or even off.

Thanks for the suggestion. May I ask the reason behind it?
I'd like to understand the thinking behind this tip.

Lowering or disabling KeepAlive in Apache is very important when you also have a low MaxClients setting. Here's why:

When you have a high MaxClients setting, your server tries to process a lot of clients at the same time. This causes a load spike, because too many things are happening at the same time. As a result, all of the clients experience a serious slowdown. Imagine a chaotic market where everyone tries to buy the same thing at the same time. The stampede would crush the seller, and only a few people would get what they wanted. Not good!

On the other hand, when you have a low MaxClients setting, your server tries to process a few clients at a time, and tells other clients to wait in line like good ol' Japanese gents until it's their turn. There is no load spike on the server, so each client gets served very quickly, and the line also moves very quickly. In fact, you are able to serve even more clients per unit time this way, because the whole process is so orderly and the server is humming along at a more fuel-efficient RPM.

But the success of the "please wait in line" approach depends on how quickly you can serve each client. What happens when a customer at a grocery store holds up the line by fumbling with five different credit cards all of which went over the limit? The entire line behind him must wait longer. This is exactly what happens with KeepAlive. A client who opens a persistent connection holds up the line for 20-30 seconds just in case he might need to send another request. Now, if everyone does this, the system becomes extremely inefficient. Therefore, when you have a low MaxClients setting, you must also have a low KeepAlive setting.
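Back-of-the-envelope numbers make the cost concrete. A sketch, assuming 200 ms to actually generate a response (a made-up but plausible figure):

```python
# Requests per second that one prefork slot can deliver.
service_s = 0.2     # assumed time to generate one response
keepalive_s = 20.0  # idle hold if the client never sends another request

busy_only = 1 / service_s                   # slot always working
worst_idle = 1 / (service_s + keepalive_s)  # one request, then 20 s of idle
print(busy_only, round(worst_idle, 2))      # -> 5.0 0.05
```

In the worst case a single idle keepalive cuts a slot's throughput by roughly a factor of 100, which is why a low MaxClients combined with a long KeepAliveTimeout is such a bad mix.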

But don't despair, there's still hope.

If your application really needs long-lasting connections, consider putting nginx in front of Apache as a reverse proxy. nginx is a lightweight web server that was specifically designed to handle tens of thousands of connections using only a tiny amount of server resources. Give nginx a generous KeepAlive setting, let it handle all the client connections, and disable KeepAlive on the Apache side. The connection between nginx and Apache is local, so the lack of KeepAlive doesn't matter there. In fact, this is exactly how many of your "magicians" manage to pump out an insane amount of hits on very small servers.
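A minimal sketch of that layout (the port number and proxy lines below are assumptions; adapt them to your setup): nginx faces the slow clients with a generous keepalive and proxies to Apache, which listens only on localhost with KeepAlive off.

```nginx
# nginx.conf fragment -- nginx owns the client connections
http {
    keepalive_timeout 30s;                 # cheap to hold; nginx connections are light
    server {
        listen 80;
        location / {
            proxy_pass http://127.0.0.1:8080;  # Apache bound to localhost (assumed port)
            proxy_set_header Host $host;
        }
    }
}
```

On the Apache side: `Listen 127.0.0.1:8080` and `KeepAlive Off`; the loopback hop is fast enough that persistent connections buy nothing there.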

