Linode Community Forums
PostPosted: Wed May 18, 2011 4:40 pm
Junior Member

Joined: Wed May 11, 2011 7:13 am
Posts: 32
The PHP memory_limit was set rather high for one of my scripts, but lowering it didn't fix the memory problem.

It's like I can't find the leak in my boat.. :(


PostPosted: Wed May 18, 2011 5:07 pm
Senior Member

Joined: Sun Dec 27, 2009 11:12 pm
Posts: 1038
Location: Colorado, USA
Have you tried rubbing a soapy water solution all over the server and seeing where the bubbles are coming out?


PostPosted: Wed May 18, 2011 5:11 pm
Junior Member

Joined: Wed May 11, 2011 7:13 am
Posts: 32
vonskippy wrote:
Have you tried rubbing a soapy water solution all over the server and seeing where the bubbles are coming out?
I'll open a support request for that.. ;)


PostPosted: Wed May 18, 2011 6:00 pm
Senior Member

Joined: Wed May 13, 2009 1:18 am
Posts: 681
Cheek wrote:
It's like I can't find the leak in my boat.. :(

Well, it's not totally identified, but it's certainly localized.

From what I'm seeing, you can eat up all your memory with barely any simultaneous requests, and that memory is going to Apache. It seems to me that points squarely at the application stack servicing those web requests, which is Drupal, right?

You probably need to follow up with those more expert in Drupal and its modules, in terms of situations in which that much memory may be required with the modules you are using. Either it's working as designed (and it just needs a lot of memory to generate whatever pages your site uses), or something is leaking during processing. It could be something as simple as a poorly written database query that ends up pulling a bunch of data back from the database (keeping it in memory) to generate a page.

Not sure if there's enough local Drupal expertise here (definitely not in my case) or not for that.

BTW, in your shoes I'd just set MaxRequestsPerChild to 1 for now so any resources are reclaimed after a single request, with no possibility for additive growth over a series of requests. As it stands now you're letting a single process potentially get 28x larger than it would if you had that setting at 1.
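
For reference, the corresponding prefork stanza might look something like this (a sketch; the values other than MaxRequestsPerChild are placeholders to adapt to your existing apache2.conf):

```apache
<IfModule mpm_prefork_module>
    StartServers          2
    MinSpareServers       1
    MaxSpareServers       2
    MaxClients            8
    # Each child exits after serving exactly one request, so any
    # per-request memory growth is reclaimed immediately.
    MaxRequestsPerChild   1
</IfModule>
```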

Also, given how much less frequent the issue is under your current settings, growing your Linode (even if only temporarily) might in fact give you a decent amount of breathing room, plus some additional data: either you can still fill that memory, or you'll finally see the true peak usage, which, if you're lucky, is only slightly higher than your current Linode can offer.

-- David


PostPosted: Wed May 18, 2011 7:55 pm
Junior Member

Joined: Wed May 11, 2011 7:13 am
Posts: 32
db3l wrote:
BTW, in your shoes I'd just set MaxRequestsPerChild to 1 for now so any resources are reclaimed after a single request, with no possibility for additive growth over a series of requests. As it stands now you're letting a single process potentially get 28x larger than it would if you had that setting at 1.

Somehow, setting MaxRequestsPerChild to 1 doesn't help. I can fill the memory even with MaxRequestsPerChild at 1.

It seems MaxClients has more effect. I'm now using these settings:
Code:
    StartServers          1
    MinSpareServers       1
    MaxSpareServers       1
    MaxClients            4
    MaxRequestsPerChild   4

(I started with MaxClients 4 + MaxRequestsPerChild 1 and went from there)


Looks like these settings (or lower) are the only ones that can't be made to crash. The time to load images, however, gets pretty awful. Is this the point to upgrade?

Keep in mind I'm using some extreme scenarios to test the server. But as you said, a properly configured server shouldn't run out of memory, right?


PostPosted: Wed May 18, 2011 8:22 pm
Senior Member

Joined: Wed May 13, 2009 1:18 am
Posts: 681
Quote:
Keep in mind I'm using some extreme scenarios to test the server. But as you said, a properly configured server shouldn't run out of memory, right?

Yes. Well, to be fair, unless the minimum requirements of the application stack truly cannot fit in the available resources. But a Linode 512 really ought to be able to handle this, even with a resource-hungry stack like Drupal; at worst with reduced performance, but without crashing.

Wow. The only conclusion I can draw is that a single request needs at least a quarter of your available memory (excluding non-Apache processes). Assuming that's in the 100MB range I guess that's not impossible, but it does seem excessive.

Darn, this isn't getting any easier for you, is it? :-(

In your shoes, I suppose my last scenario would be MaxClients and MaxRequestsPerChild of 1 each. That lets a single request into your host at a time. If you can crash things that way, you know you literally don't have enough resources for your application stack, and barring fixing a problem there, you simply have to grow your Linode. It may be that a Linode 1024 would be fine, or it may be that the same URL will just eat through whatever you give it (if it's a bug). The only way to know would be to test.

The fact that it worked for so long probably implies an existing behavior (potentially a bug) that is now being tickled by differences (or growth) in your data, or maybe some change (a module upgrade?).

I guess the silver lining, if there's any in this scenario, is that if a single request is capable of doing this, then once you figure out which one (or ones) it is, you should be able to narrow your focus a lot.

Is there any way to get information from whatever stress tool you've settled on as to which URLs it requests in what order? If you could find the first one that started failing (timeout or whatever), it might point you at somewhere to look, or something to ask about in a Drupal group.
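
If the tool can't report its own request order, the Apache access log is a rough substitute. A sketch, assuming the default combined log format and a Debian-style log path (both are assumptions):

```shell
#!/bin/sh
# Show the last requests Apache managed to log before the crash.
# The log path and line count are assumptions; adjust for your setup.
LOG=${1:-/var/log/apache2/access.log}
COUNT=${2:-20}
# Combined log format: field 4 is the timestamp, 7 the URL, 10 the size.
# The final entries are the URLs in flight when memory ran out.
tail -n "$COUNT" "$LOG" | awk '{ print $4, $7, $10 }'
```

The last few lines of output point at the URLs being serviced when memory ran out, which is where I'd start asking Drupal questions.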

I think it would also be worth your time to clone your current Linode onto a larger size (like a 1024 or larger), and then run the same test against that. If it survives, at least you know you have the option of spending your way out of this short term pending any other analysis. If not, at least you know that as well.

-- David

PS: For images, I'd add something like nginx for static content on the front end, proxying dynamic requests back to Apache, independent of system size. The latency is because even simple static requests have to wait for a free Apache worker, and each worker carries the full interpreter stack. Offloading static content to nginx would minimize the resources necessary to deliver it.
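
A minimal sketch of that split, assuming Apache is moved to port 8080 and a Debian-style docroot (both assumptions; the paths are placeholders):

```nginx
server {
    listen 80;

    # nginx serves static files itself; no Apache worker (with its full
    # PHP interpreter) is tied up for images, CSS or JS.
    location ~* \.(jpe?g|png|gif|ico|css|js)$ {
        root /var/www/drupal;   # placeholder docroot
        expires 7d;
    }

    # Everything else (Drupal's dynamic pages) is proxied back to Apache,
    # assumed to now be listening on 127.0.0.1:8080.
    location / {
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```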


PostPosted: Wed May 18, 2011 8:41 pm
Junior Member

Joined: Wed May 11, 2011 7:13 am
Posts: 32
I'm already running a Linode 1024. I think it should be more than enough for Drupal..

Actually, in most cases it is, and the memory is stable at only 125MB. But in some cases the memory just builds up -- maybe when Google comes around..

When I stress test, I request a couple of thousand pages, one after another. I then open the Piwik page, because it somehow is good at eating memory as well. And the memory slowly runs out -- taking 15 minutes on low settings, for example.

What happens is, it slowly builds, flips back, builds, flips back, etc. Like 100 > 150 > 250 > 350 > 250 > 350 > 450 > 500 > 425 > until it hits 950MB. Maybe this is natural. Maybe it's a bug. Can PHP scripts leak memory?
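
For what it's worth, one way to watch that sawtooth from the shell is to sample the combined resident set of the Apache children every few seconds. A sketch, assuming the processes are named apache2 (on RHEL-style systems it's httpd):

```shell
#!/bin/sh
# Sample total Apache resident memory (in MB) every 5 seconds.
# The process name "apache2" is an assumption; adjust if yours differs.
while true; do
    total=$(ps -C apache2 -o rss= | awk '{ s += $1 } END { print int(s / 1024) }')
    echo "$(date +%T)  ${total:-0} MB"
    sleep 5
done
```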


PostPosted: Thu May 19, 2011 3:41 pm
Senior Member

Joined: Wed May 13, 2009 1:18 am
Posts: 681
Cheek wrote:
I'm already running a Linode 1024. I think it should be more than enough for Drupal..

Oops. Yeah, I think that should be more than enough for adequate performance too.

Quote:
What happens is, it slowly builds, flips back, builds, flips back, etc. Like 100 > 150 > 250 > 350 > 250 > 350 > 450 > 500 > 425 > until it hits 950MB. Maybe this is natural. Maybe it's a bug. Can PHP scripts leak memory?

That's where this isn't really making as much sense to me, at least not at the level of settings you've reached. With MaxRequestsPerChild of 1, a process is created and destroyed for each request, so even if the PHP interpreter is growing, it'll get destroyed at the end of the request. Instantaneous peak usage should be MaxClients times the largest size needed to handle a single request, but you shouldn't see baseline memory usage growing over longer periods of time.

Unless there's some other long-lived process (database server, etc.) that is slowly growing and taking away resources. But that doesn't jibe with earlier posts saying you're seeing the memory all going to Apache processes. Hmm, as things grow, are you seeing more than the expected MaxClients Apache processes? Sometimes under load, tearing down the old processes has enough overhead that it takes time, so you can end up with more than you expect. Though normally I'd only expect to see that once you were already swapping, not before, so it's not something I'd expect to account for the early stages of growth.

A "leak" in the context of a web application stack typically refers to cases where a single request uses up some resource that stays allocated to the interpreter (which is part of the Apache process) after the request is done. And yes, most any code could have such a situation, since the worker process/interpreter provides a global context that persists across requests. That can accumulate over repeated requests if the same Apache process remains alive. But letting the process die will release all those resources, so even if the code is leaking this way it shouldn't matter.

I'm pretty much running out of specific suggestions at this point. Maybe if you took a few process snapshots along the way (e.g., when you start your test, when you're using about half your memory, and then just before it keels over) there might be something that jumps out when comparing them.
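
Capturing those snapshots could be as simple as this (the output path is arbitrary; `--sort` assumes a procps-style `ps` as found on Linux):

```shell
#!/bin/sh
# Write a timestamped snapshot of the 25 largest processes by resident
# memory. Run once at test start, once around half memory, and once just
# before the server keels over, then compare the files.
OUT=/tmp/snapshot-$(date +%H%M%S).txt
ps aux --sort=-rss | head -n 25 > "$OUT"
echo "wrote $OUT"
```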

It's likely that once identified the issue is going to be obvious in hindsight and we'll see where the troubleshooting missed a key bit of information or should have tried some other step, but for now, I don't really know...

-- David


PostPosted: Thu May 19, 2011 3:44 pm
Senior Member

Joined: Sat Mar 12, 2011 3:43 am
Posts: 76
Location: Russia
Cheek wrote:
Can PHP scripts leak memory?

Yes, if the script is executed in a loop (e.g. from cron).


PostPosted: Sat May 21, 2011 9:16 am
Junior Member

Joined: Wed May 11, 2011 7:13 am
Posts: 32
db3l wrote:
It's likely that once identified the issue is going to be obvious in hindsight and we'll see where the troubleshooting missed a key bit of information or should have tried some other step, but for now, I don't really know...

-- David

I'm getting a bit desperate here. The server crashed again tonight when I was asleep. I had lowered MaxClients to 6 and MaxRequestsPerChild to 15 a few days ago.

Of course I knew this could crash the server, technically -- but not in a 'real life scenario'. These settings are pretty low and are affecting the performance of my site.

I'm beginning to think the memory problem is some sort of 'bug' that only happens once in a while, because whenever I check the memory consumption during the day, it's almost always lower than 150MB.

Here's what I was thinking: wouldn't it be possible to have Apache restart when memory usage goes over 900MB?
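
As a stopgap it could; a cron-driven sketch along these lines (the threshold, the awk arithmetic over /proc/meminfo, and the init-script path are all assumptions, and this treats the symptom rather than the cause):

```shell
#!/bin/sh
# Restart Apache if system memory usage crosses a threshold.
# Intended to run from cron every minute or two. The threshold and the
# restart command are assumptions; adjust both for your system.
THRESHOLD_MB=900
# Used memory excluding buffers/cache, from /proc/meminfo (values in kB).
used_mb=$(awk '/MemTotal/ {t=$2} /MemFree/ {f=$2} /^Buffers/ {b=$2} /^Cached/ {c=$2}
               END { print int((t - f - b - c) / 1024) }' /proc/meminfo)
if [ "$used_mb" -gt "$THRESHOLD_MB" ]; then
    logger "memory watchdog: ${used_mb}MB used, restarting apache"
    /etc/init.d/apache2 restart
fi
```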

And one last thing I noticed: apache.conf has some lines about awstats. Is it possible awstats is somehow connected? If I comment out the line 'Include /etc/apache2/awstats.conf' and restart Apache, will it be disabled?


PostPosted: Sat May 21, 2011 9:56 am
Senior Member

Joined: Thu May 21, 2009 3:19 am
Posts: 336
How many of these resource-hog stats-analyzing packages are you running? awstats and Piwik are not all that friendly to run. First thing I would try: cut those out of the picture altogether and see if you still get crashes.

Then look at what Drupal modules you have installed if you're still getting crashes.


PostPosted: Sat May 21, 2011 12:22 pm
Junior Member

Joined: Wed May 11, 2011 7:13 am
Posts: 32
waldo wrote:
How many of these resource-hog stats-analyzing packages are you running? awstats and Piwik are not all that friendly to run.
I also run Urchin and Webalizer..

No, just kidding. :p

You make an interesting point though. As I noticed before, the Piwik dashboard was able to quickly consume my memory. But there's a JavaScript snippet on every single page that calls Piwik to register the visitor. If that script has the same problem as the one on the dashboard, it would explain the crashing when requesting a lot of pages.

I did some preliminary testing, and the results are promising. I'll do some more testing later and report the results..


PostPosted: Tue May 24, 2011 4:10 pm
Senior Member

Joined: Tue May 26, 2009 3:29 pm
Posts: 1691
Location: Montreal, QC
If all else fails, you could try a different web server, or try running PHP using FastCGI; that will tell you definitively whether PHP is the culprit, since PHP will be running in its own processes.


PostPosted: Tue May 24, 2011 5:41 pm
Junior Member

Joined: Wed May 11, 2011 7:13 am
Posts: 32
Guspaz wrote:
If all else fails, you could try a different web server, or try running PHP using FastCGI; that will tell you definitively whether PHP is the culprit, since PHP will be running in its own processes.


Thanks for the tip!

I've moved Piwik over to another server and the Linode has been up since Saturday night without problems. I was able to crawl the whole site with 4 parallel threads without it crashing (with cache cleared).

These are the Apache settings I'm currently using:

Code:
<IfModule mpm_prefork_module>
    StartServers          3
    MinSpareServers       2
    MaxSpareServers       3
    MaxClients           12
    MaxRequestsPerChild  12
</IfModule>


Whether Piwik was the root problem, or just the one that exposed it, I don't know. But we'll see somewhere down the line..


PostPosted: Wed May 25, 2011 11:07 am
Senior Member

Joined: Tue May 26, 2009 3:29 pm
Posts: 1691
Location: Montreal, QC
The idea behind trying FastCGI would be that, with the PHP processes spun off separately, you could reproduce the problem and see whether PHP or Apache was causing it. That would narrow down what's at fault.

