Cheek wrote:
Ok, so I guess the problem I described above is how apache is supposed to work.. It doesn't care about your memory?
Depends on what you mean by "doesn't care". If you mean that it will obey its settings for when to create new worker processes, and for how long to let the same process keep servicing requests, regardless of how much memory is being used, then yes. But it's not like it's malicious or anything :-)
Quote:
I've now settled on even lower settings. It seems to make some images load pretty slow, but at least the server doesn't seem to be crashing (yet).
That's your most important goal at this point. And from the sequence of posts it would seem like you're working your way toward at least a safe set of settings, even if not an ideal one for performance.
Once you've stopped the bleeding, so to speak, then you can start thinking about performance tweaks while staying within your available resources. With a lower MaxClients, each request may be delayed slightly until a worker process is free, which could be what you're seeing with the images.
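To make that concrete, here's a conservative prefork sketch for a low-memory box. The numbers are illustrative assumptions, not tuned values for your server; the usual rule of thumb is MaxClients ≈ (RAM you can spare for Apache) / (typical per-child memory with Drupal loaded):

```
# Illustrative low-memory prefork settings (file location varies by distro,
# e.g. httpd.conf or apache2.conf) -- numbers are example assumptions only
<IfModule mpm_prefork_module>
    StartServers          2
    MinSpareServers       2
    MaxSpareServers       5
    MaxClients           10   # hard cap on simultaneous worker processes
    MaxRequestsPerChild  30   # recycle each child after 30 requests
</IfModule>
```

With a cap like that, requests beyond the tenth simply queue until a worker frees up, which keeps memory bounded at the cost of the occasional delay you're seeing.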
The issue is that the same Apache worker pool serves both the full scripted pages (running Drupal) and simple static files. This is the point where it becomes reasonable to consider steps like putting nginx in front as your front-end server and proxying the Drupal portions of the site back to Apache.
That way, static content can be delivered quickly (and with very low memory overhead) from nginx, while everything else still goes through the full scripted/Apache route.
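A minimal sketch of that setup might look like the following. The port, paths, and file-extension list are assumptions for illustration, not something from your setup:

```
# nginx front end: serve static assets directly, proxy everything else
# to Apache/Drupal listening on 127.0.0.1:8080 (port is an assumption)
server {
    listen 80;
    server_name example.com;
    root /var/www/drupal;   # illustrative docroot

    # Static files handled by nginx with minimal memory per connection
    location ~* \.(css|js|png|jpe?g|gif|ico)$ {
        expires 7d;
    }

    # Dynamic (Drupal) requests go back to Apache
    location / {
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```

Apache would then listen only on the loopback port, so your small worker pool is spent exclusively on the expensive Drupal requests.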
-- David
PS: On the Piwik thing, I'm afraid I'm out of my depth, as I have no experience with it. I find it hard to believe you're seeing unbounded process growth if you keep MaxRequestsPerChild low (and maybe 30 is even too high), but since I don't know what Piwik does or how it works, I'll have to leave that aspect to someone else.