sblantipodi wrote:
I am trying to understand why MaxClients should be set as low as you suggest on such a powerful toy as a Linode 512.
I know that some people run more "resource-intensive" scripts, but I think it's better to work on the script instead of lowering that parameter so much.
The way Apache and PHP are typically deployed together is rather unusual. Instead of having a separate set of PHP interpreters to handle requests that need it, the PHP interpreter is embedded into the web server itself (as mod_php). This makes installation quite a bit easier, but there are two very big downsides.
First, PHP does not handle multithreading very well. This means that Apache needs to have a separate process for each request, instead of just being able to instantiate a thread. This is heavy, and means that the number of simultaneous requests must be set lower than you would with other setups.
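To make that concrete, here is a sketch of what conservative prefork MPM limits might look like on a small VPS. The specific numbers are illustrative assumptions, not tuned values; measure your own per-process memory use first.

```apache
# Sketch: prefork MPM limits for a small VPS (values are illustrative)
<IfModule mpm_prefork_module>
    StartServers          2
    MinSpareServers       2
    MaxSpareServers       4
    MaxClients           15   # each client = one full Apache+PHP process
    MaxRequestsPerChild 500   # recycle processes to limit memory creep
</IfModule>
```

`MaxRequestsPerChild` is worth setting on a memory-constrained box: it periodically kills and respawns worker processes, which caps how much a long-lived interpreter can accumulate.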
Secondly, because the nature of a request is not known until after it is accepted, every process must be prepared for anything. That means, at a minimum, a PHP interpreter, along with any libraries that get loaded over its lifetime. This makes things quite heavy, especially when frameworks or heavy applications are involved. If you have, say, Drupal and WordPress, you get twice the whammy, since the interpreter doesn't unload everything between requests.
The "stereotypical" Apache+PHP problem is running out of memory because the default MaxClients is 150. Traffic gets heavier than usual for a moment, the server starts swapping, requests take longer to process, and Apache reacts to this by spawning more processes. MaxClients is a safety valve, and setting it very low will immediately stop the bleeding. You can increase it, of course, as your situation allows.
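A back-of-the-envelope way to pick a starting MaxClients is to divide the RAM you can spare by the resident size of one Apache+PHP process. The numbers below are assumptions for illustration, not measurements from your box; check yours with `top` or `ps`.

```python
# Rough MaxClients sizing sketch -- all figures are assumed, measure your own.
total_ram_mb = 512       # a Linode 512
reserved_mb = 150        # headroom for MySQL, the OS, caches, etc. (assumption)
per_process_mb = 25      # typical Apache+mod_php resident size (assumption)

max_clients = (total_ram_mb - reserved_mb) // per_process_mb
print(max_clients)       # lands right around the low value suggested above
```

Run the same arithmetic with your own measured per-process size, and you'll usually find the "surprisingly low" recommendation falls straight out of it.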
Quote:
The point is that I cannot understand why Linux users generally suggest lowering that parameter so much.
Some years ago people worked magic with 128 MB of RAM or less, so why can't we manage more than 15 clients with 512 MB of RAM?
The applications we run have become larger over time, and since the "default" is to integrate PHP into Apache, this has had a direct effect on the amount of RAM required per simultaneous connection. We also have more objects on each page load -- I just counted 24 on one of the sites $EMPLOYER has, ranging from jQuery to video thumbnails to stylesheets to ads. So, we have more RAM, but we've found new, innovative ways to use it.
Now, a really good question for the history department: why did we go to mod_php in the first place? In The Beginning, when computers were physically large, relatively rare, and slow, we did dynamic content by configuring the web server to spawn a process and run a script. At the end of the request, the script would terminate and, ta-da, everything it printed would be returned to the user. This was fine from the web server's standpoint, but... well, it's slow, even on today's equipment. I timed it, and it took 6.8 seconds to handle a relatively simple view of the above-mentioned site on my workstation. Sure, it only took 0.9 seconds the second time (hooray for caching), but it only takes 350 ms to do this same request against the production web server, and at least 42 ms of that is network delay.
So, the trend was to stuff interpreters into the web server. This was a pretty clever idea, since it doesn't involve any operational changes: there are no additional daemons to run, and the web server can still do what it always did, except instead of spawning /usr/bin/php when it sees a .php file, it can just pass it off to its built-in PHP interpreter. The downside is that it now has a built-in PHP interpreter, which it must carry around like a millstone when handling any request, no matter how trivial.
Today, of course, the way to handle boatloads of traffic is to take a little bit from both approaches. With something like FastCGI, the web server does not have a built-in PHP interpreter; instead, when it encounters a .php file, it proxies the request to
another server, which
does have a built-in PHP interpreter. In your situation, you wouldn't have a bulky PHP interpreter sitting around idle while someone's smartphone downloads a 1 MB file over SlothWireless's ⅓G network, or while a browser keeps an idle network connection open in case the user requests another page (this is what a keepalive is, basically).
Somewhat like
zombo.com, you can do anything with 512 MB of RAM, anything at all. The only limit is the resources required per request.
_________________
Code:
/* TODO: need to add signature to posts */