eranb22 wrote:
But what about the RAM issue?
ok, with dynamic content, apache is faster, but it eats a lot of RAM.
I still think it depends. Is apache's footprint somewhat larger than nginx's for example? Sure. Does that automatically have to be a problem or preclude its use? No.
Now, if you find yourself with an apache config using the mpm_prefork module with MaxClients set to 150, can that quickly thrash a Linode 360 under load? Absolutely. But if apache is giving you useful functionality, the answer may just be to adjust the configuration rather than automatically discard apache as an option.
For example, on my nginx+apache Linode, here's a current snapshot of memory usage:
Code:
~$ ps -C apache2 -C nginx -o %mem,vsz,sz,rss,args
%MEM VSZ SZ RSS COMMAND
0.1 10472 2618 416 /usr/sbin/apache2 -k start
0.1 10244 2561 416 /usr/sbin/apache2 -k start
0.3 68020 17005 1352 /usr/sbin/apache2 -k start
0.3 68020 17005 1284 /usr/sbin/apache2 -k start
0.1 4544 1136 684 nginx: master process /usr/local/sbin/nginx
0.3 4820 1205 1336 nginx: worker process
0.2 4688 1172 1072 nginx: worker process
Apache's larger virtual size is in large part due to a much larger collection of shared libraries (32 vs. 9 for nginx), which are themselves shared between the apache processes, and much of which I will never even invoke given my configuration. Its current resident footprint isn't that much worse in my case.
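(If you want to compare the library counts on your own binaries, something like this works - the paths here are from my own Debian-style install, so adjust them to wherever your binaries live:)
Code:

```shell
# Count the shared libraries each server binary links against.
ldd /usr/sbin/apache2 | wc -l
ldd /usr/local/sbin/nginx | wc -l
```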
Now, this is just a snapshot and there are tons of variables, so I make no claims that this is representative of anything beyond my own system. But it's certainly an existing example of a modest apache configuration.
Here's the key though - this is a fairly stock nginx configuration (2 workers), and it should see modest growth under load. But if I were using a more default apache configuration, with MaxClients 150 for example, it might end up with 10-12 processes (mpm_worker) or even 100+ (mpm_prefork) under load. So it's not the base overhead but the result of multiplying that overhead by the configuration.
Those default configurations are fine for dedicated servers with GBs of memory, but nowhere near appropriate for resource constrained VPSes. But you may not notice during initial testing when there just aren't enough simultaneous requests to push past the default initial process count.
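As a rough sketch of what "adjusting the configuration" means, a prefork section scaled down for a small VPS might look something like this. The numbers are illustrative assumptions, not recommendations - size them against your own measured per-process RSS (e.g., if a mod_php child runs ~20 MB resident, 150 children is ~3 GB worst case, while 15 children caps you around 300 MB):
Code:

```
# Debian-style apache2.conf fragment - directives for the prefork MPM;
# these specific values are illustrative, tune to your own workload
<IfModule mpm_prefork_module>
    StartServers          2
    MinSpareServers       2
    MaxSpareServers       4
    MaxClients           15
    MaxRequestsPerChild 500
</IfModule>
```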
In my case, I know that the ratio of dynamic content I'm serving through apache is trivial compared to static content, so my apache configuration is quite small, and the working set shouldn't get much above this - it certainly won't burn more than an extra process or two at most.
Now, I don't currently have something like mod_php loaded in my apache, so anyone else's apache footprint might be different, and likely somewhat larger. But then again, serving php via nginx is going to be larger too given the need for the external fcgi php process.
This isn't to say that nginx doesn't have some clear advantages over apache in constrained environments. It's my preferred server on all of my Linodes. But when comparing apples to apples, I think apache gets a bit of a bum rap due more to its typical default configuration than its inherent overhead, and I'm certainly willing to use it if I want a feature it has.
-- David