Linode Community Forums
 Post subject:
PostPosted: Mon Aug 15, 2011 7:10 pm
Senior Member

Joined: Wed May 13, 2009 1:32 pm
Posts: 737
Location: Italy
The problem is that if I lower the keepalive, nobody will be well served by Apache, since most mobile phones need more than 2 or 3 seconds to process a single request or to make a second one.

I think that this suggestion doesn't work in this case.


 Post subject:
PostPosted: Mon Aug 15, 2011 9:38 pm
Senior Member

Joined: Sat Aug 30, 2008 1:55 pm
Posts: 1739
Location: Rochester, New York
KeepaliveTimeout just limits the length of time a connection may be held open after a request is finished. It has no effect on a client making a single request. The limit to how long an active request may take is different, and is usually on the order of minutes.

_________________
Code:
/* TODO: need to add signature to posts */
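A minimal httpd.conf sketch makes the two directives concrete (the values here are illustrative, not recommendations from this thread):

```apache
# Idle time allowed AFTER a response is finished, waiting for the
# client's next request on the same connection:
KeepAlive On
KeepAliveTimeout 5

# Time allowed for network I/O DURING a single request/response:
Timeout 300
```

So a slow single request is governed by Timeout, not by KeepAliveTimeout.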


 Post subject:
PostPosted: Mon Aug 15, 2011 10:26 pm
Senior Member

Joined: Thu May 21, 2009 3:19 am
Posts: 336
This might be useful:
http://httpd.apache.org/docs/2.0/mod/co ... ivetimeout

If someone is done making the request, you want Apache to free up those resources as quickly as possible.


 Post subject:
PostPosted: Tue Aug 16, 2011 7:01 am
Senior Member

Joined: Wed May 13, 2009 1:32 pm
Posts: 737
Location: Italy
I noticed that if I set the keepalive parameter to 15/20, a cell phone can make a second connection in an instant, without needing to create a second socket.

If I set this parameter to zero, I need to open another socket to make a second connection.

How can you explain this?


 Post subject:
PostPosted: Tue Aug 16, 2011 7:12 am
Senior Member

Joined: Thu May 21, 2009 3:19 am
Posts: 336
sblantipodi wrote:
I noticed that if I set the keepalive parameter to 15/20, a cell phone can make a second connection in an instant, without needing to create a second socket.

If I set this parameter to zero, I need to open another socket to make a second connection.

How can you explain this?


Do you mean connection instead of socket?

Quote:
The number of seconds Apache will wait for a subsequent request before closing the connection. Once a request has been received, the timeout value specified by the Timeout directive applies.


If you mean connection, yes, that's how keepalive works: it keeps that connection persistent. However, once that phone is done, your server still has to wait 15-20 seconds before it frees up the connection and the resources that phone has been using. You either need to reduce your KeepAliveTimeout and/or MaxClients, switch to a different web server (nginx or lighttpd) or a different Apache/PHP configuration, or get a bigger server.

Quote:
Setting KeepAliveTimeout to a high value may cause performance problems in heavily loaded servers. The higher the timeout, the more server processes will be kept occupied waiting on connections with idle clients.


Aren't you the guy who's saying you run 64-bit on a 512, because "it's cool" and you "don't need the RAM"?

It really does sound like you do need the RAM. :)
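The RAM point is easy to put numbers on. A back-of-envelope sketch in Python; the 30 MB per-process figure is an assumption for illustration, not a measurement from this thread:

```python
# Rough RAM budget for prefork: every worker holds a full mod_php
# interpreter, whether it's serving a request or just idling out its
# KeepAliveTimeout waiting on a client.
max_clients = 30       # MaxClients: workers Apache may spawn
mb_per_worker = 30     # assumed RSS of one Apache+mod_php process

peak_ram_mb = max_clients * mb_per_worker
print(peak_ram_mb)     # 900 -- comfortably past a 512 MB Linode
```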


 Post subject:
PostPosted: Tue Aug 16, 2011 4:17 pm
Senior Member

Joined: Wed May 13, 2009 1:18 am
Posts: 681
waldo wrote:
If you mean connection, yes, that's how keepalive works, it keeps that connection persistent. However, once that phone is done, your server, then has to wait 15-20 seconds to free up the connection and resources that phone has been using before freeing them up. (...)

This shouldn't be the case if the phone is really done, since it will close the connection and immediately free it up. What the server-side timeout guards against is clients that connect, keep the session persistent, but then never actually make any further requests in that time frame. It also protects against persistent sessions that do not close properly (client turned off, packets lost, etc.).

But for clients that are actually going to make more than one request, permitting them to re-use the existing connection is much more efficient, so I wouldn't set the timeout so low that it blocks that.

So it's not like every connection will require the keepalive timeout before it can be reused; it just sets an upper limit. Of course it's still a trade-off, since some fraction of connections may incur that "wasted" time, so you need to balance it against your resources. Or, as already suggested elsewhere, put a lower-overhead front-end daemon like nginx in front to help manage the uncertainty there.

-- David
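The re-use David describes is observable from the client side. A self-contained Python sketch (stdlib only; the throwaway local server is just scaffolding for the demo, not anything from the thread) shows two requests riding the same TCP socket:

```python
import threading
from http.client import HTTPConnection
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    protocol_version = "HTTP/1.1"  # HTTP/1.1 => keep-alive by default
    def do_GET(self):
        body = b"ok"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)
    def log_message(self, *args):  # keep the demo quiet
        pass

server = HTTPServer(("127.0.0.1", 0), Handler)  # port 0 = pick a free port
threading.Thread(target=server.serve_forever, daemon=True).start()

conn = HTTPConnection("127.0.0.1", server.server_address[1])
conn.request("GET", "/")
assert conn.getresponse().read() == b"ok"
first_socket = conn.sock            # TCP socket used for request #1

conn.request("GET", "/")            # request #2, no new connect()
assert conn.getresponse().read() == b"ok"
assert conn.sock is first_socket    # same socket: connection was reused

conn.close()
server.shutdown()
```

With a zero keepalive timeout on the server side, the second request would instead arrive on a fresh connection, paying the TCP handshake again.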


 Post subject:
PostPosted: Tue Aug 16, 2011 4:19 pm
Senior Member

Joined: Wed May 13, 2009 1:32 pm
Posts: 737
Location: Italy
I'm good for now; I'll see in the future.
Thanks for the answers :)


 Post subject:
PostPosted: Tue Aug 16, 2011 11:10 pm
Senior Member

Joined: Fri Jan 09, 2009 5:32 pm
Posts: 634
hybinet wrote:
What happens when a customer at a grocery store holds up the line by fumbling with five different credit cards all of which went over the limit?


Your market analogy is excellent! Well put.


 Post subject:
PostPosted: Wed Aug 17, 2011 1:58 am
Senior Member

Joined: Fri May 02, 2008 8:44 pm
Posts: 1121
glg wrote:
Your market analogy is excellent! Well put.


Thanks, I think I've been hanging out in r/ELI5 a little too much lately.


 Post subject:
PostPosted: Wed Aug 17, 2011 5:06 am
Senior Member

Joined: Wed May 13, 2009 1:32 pm
Posts: 737
Location: Italy
glg wrote:
hybinet wrote:
What happens when a customer at a grocery store holds up the line by fumbling with five different credit cards all of which went over the limit?


Your market analogy is excellent! Well put.


I can't see the problem.
If a customer has 5 credit cards over the limit, he ties up one "client", but there are 29 other cash desks.


 Post subject:
PostPosted: Wed Aug 17, 2011 11:50 am
Senior Member

Joined: Fri Jan 09, 2009 5:32 pm
Posts: 634
sblantipodi wrote:
glg wrote:
hybinet wrote:
What happens when a customer at a grocery store holds up the line by fumbling with five different credit cards all of which went over the limit?


Your market analogy is excellent! Well put.


I can't see the problem.
If a customer has 5 credit cards over the limit, he ties up one "client", but there are 29 other cash desks.


His example was only one line. Yes, you have 30 lines (MaxClients), but also potentially hundreds waiting in line. If you have 20-25 of those lines fumbling with their credit cards or worse "can I write a check?" (Keepalive timeout too high), then suddenly only a handful of your lines are moving.


 Post subject:
PostPosted: Wed Aug 17, 2011 12:27 pm
Senior Member

Joined: Wed May 13, 2009 1:32 pm
Posts: 737
Location: Italy
glg wrote:
sblantipodi wrote:
glg wrote:
hybinet wrote:
What happens when a customer at a grocery store holds up the line by fumbling with five different credit cards all of which went over the limit?


Your market analogy is excellent! Well put.


I can't see the problem.
If a customer has 5 credit cards over the limit, he ties up one "client", but there are 29 other cash desks.


His example was only one line. Yes, you have 30 lines (MaxClients), but also potentially hundreds waiting in line. If you have 20-25 of those lines fumbling with their credit cards or worse "can I write a check?" (Keepalive timeout too high), then suddenly only a handful of your lines are moving.


be real... :)


 Post subject:
PostPosted: Wed Aug 17, 2011 2:28 pm
Senior Member

Joined: Fri May 02, 2008 8:44 pm
Posts: 1121
sblantipodi wrote:
glg wrote:
His example was only one line. Yes, you have 30 lines (MaxClients), but also potentially hundreds waiting in line. If you have 20-25 of those lines fumbling with their credit cards or worse "can I write a check?" (Keepalive timeout too high), then suddenly only a handful of your lines are moving.

be real... :)


Every analogy breaks down at some point...

In real life, it's unlikely that every customer in every cashier will try to use an expired credit card or offer to write a check. But in computing, if you allow something to happen, it will happen sooner or later. Especially if the reason you're allowing it in the first place is to accommodate clients who actually need it badly.

How long does it take for a mobile client to open a connection, make the first request, receive the first response, process it, make the second request, receive the second response, and finally close the connection? Let's be generous and say 20 seconds. If so, each and every client is holding up a line for 20 seconds, regardless of how long it actually takes for the server to process their requests. Every single client walks up to the cashier, puts down a bunch of stuff, realizes that it forgot the milk, and tells the cashier to wait while they get milk! Unfortunately, Apache with mpm_prefork isn't smart enough to let another client through while the first client is getting milk. That's what nginx is for.

If your setup works fine with MaxClients 30, it's only because there are never more than 30 clients trying to connect in any 20-second interval. If you sell enough apps to get 31 clients in a 20-second interval, the 31st client will have to wait 20 seconds before it can even make the first request, because all 30 lines are being held up by milk-forgetters. Sooner or later, you'll end up with a client that needs to wait 40 seconds. But not many clients will wait 40 seconds. They'll just time out, making it look like your site is down.
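The arithmetic in that scenario fits in a few lines of Python (MaxClients 30 and the 20-second hold time are the thread's assumed numbers, not measurements):

```python
# With prefork, each client occupies one worker for its entire visit,
# so the sustained admission rate is simply workers / hold time.
max_clients = 30          # MaxClients from the example
hold_seconds = 20         # assumed time one mobile client holds a worker

clients_per_second = max_clients / hold_seconds
print(clients_per_second)             # 1.5

# Client #31 arriving in a saturated window queues for a free worker:
worst_case_extra_wait = hold_seconds  # up to 20 s before its FIRST request
print(worst_case_extra_wait)          # 20
```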

This doesn't need to be fixed right away, but it's worth remembering if you expect more clients in the future. Switching to nginx often gives you an incredible speed boost, simply because nginx manages client connections much more efficiently than Apache's old-fashioned mpm_prefork. nginx is very smart. If a client so much as fumbles with one credit card, nginx will process a couple of other clients in the meantime.

KeepAlive is safe to use with nginx, but not with Apache.

Edit: remember the milk reference.


Last edited by hybinet on Wed Aug 17, 2011 4:11 pm, edited 1 time in total.

 Post subject:
PostPosted: Wed Aug 17, 2011 2:37 pm
Senior Member

Joined: Wed May 13, 2009 1:32 pm
Posts: 737
Location: Italy
hybinet wrote:
Every analogy breaks down at some point...

In real life, it's unlikely that every customer in every cashier will try to use an expired credit card or offer to write a check. But in computing, if you allow something to happen, it will happen sooner or later. Especially if the reason you're allowing it in the first place is to accommodate clients who actually need it badly.

How long does it take for a mobile client to open a connection, make the first request, receive the first response, process it, make the second request, receive the second response, and finally close the connection? Let's be generous and say 20 seconds. If so, each and every client is holding up a line for 20 seconds, regardless of how long it actually takes for the server to process their requests. (Unfortunately, Apache with mpm_prefork isn't smart enough to let another client through while a line is being held up. That's what nginx is for.)

If your setup works fine with MaxClients 30, it's only because there are never more than 30 clients trying to connect in any 20-second interval. If you sell enough apps to get 31 clients in a 20-second interval, the 31st client will have to wait 20 seconds before it can even make the first request, because all 30 lines are being held up. Sooner or later, you'll end up with a client that needs to wait 40 seconds. But not many clients will wait 40 seconds. They'll just time out, making it look like your site is down.

This doesn't need to be fixed right away, but it's worth remembering if you expect more clients in the future. Switching to nginx often gives you an incredible speed boost, simply because nginx manages client connections much more efficiently than Apache's old-fashioned mpm_prefork. KeepAlive is safe to use with nginx, but not with Apache.


I really like the answer, thanks for it.
I don't understand why a huge, venerable piece of software like Apache has no support for a smarter "manager" like the small nginx has.

The solution is: "switch to nginx".
The answer is: "I don't have time for nginx until I need it".

But thanks, now I know where to look if I ever need it.

For now I've never seen Apache complain that I go over 30 clients; if Apache ever complains about it, I will switch to nginx.

For now it doesn't make sense to lower the keepalive, because doing so would create problems for many users just because of the thought that one day ONE user might wait 40 seconds.

For now it works, and as the wise saying goes: don't fix it if it isn't broken :D

Thanks to all, guys ;)


 Post subject:
PostPosted: Wed Aug 17, 2011 4:05 pm
Senior Member

Joined: Fri May 02, 2008 8:44 pm
Posts: 1121
sblantipodi wrote:
I don't understand why a huge, venerable piece of software like Apache has no support for a smarter "manager" like the small nginx has.


In fact, recent versions of Apache support several much better process managers, such as mpm_worker and mpm_event. The problem is PHP, because mod_php forces you to use the inefficient and outdated mpm_prefork. PHP was developed in the heyday of mpm_prefork and never got beyond it. This causes Apache to behave like a 10-year-old piece of junk. In fact, Apache without PHP can be as fast as any other modern web server. Lots of people use Apache with Django, Rails, or Tomcat with excellent results.

There are ways to deploy PHP with mpm_worker, but this involves FastCGI (FPM). For historical reasons, Apache has two competing FastCGI modules (mod_fastcgi and mod_fcgid), neither of which gets it quite right, and both of which are a pain in the ass to configure. Newer web servers such as nginx and lighttpd, by contrast, come with much better FastCGI support by default. As a result, people who need FastCGI flock to nginx, and PHP deployments tend to polarize with Apache+mpm_prefork+mod_php on the one side (for low loads) and nginx+FPM on the other (for high loads).
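For reference, the nginx+FPM side of that split is typically only a few lines of configuration. This is a hedged sketch; the document root and FPM socket path are assumptions, not details from this thread:

```nginx
server {
    listen 80;
    root /var/www/example;      # assumed document root
    index index.php;

    # nginx hands .php requests to a PHP-FPM pool over a local socket,
    # while nginx itself juggles many cheap keep-alive connections.
    location ~ \.php$ {
        include fastcgi_params; # standard FastCGI variable set
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_pass unix:/var/run/php-fpm.sock;  # assumed FPM socket
    }
}
```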


Last edited by hybinet on Wed Aug 17, 2011 4:11 pm, edited 2 times in total.

Powered by phpBB® Forum Software © phpBB Group