superfastcars wrote:
Okay, I'm talking about
this from the php.net site, which says that in Linux environments, MPM is not "thread safe". I ran "httpd -l" and got this:
Compiled in modules:
core.c
prefork.c
http_core.c
mod_so.c
So it sounds like I'm using PHP5 as a CGI, and if I want the Apache module it will require installing from source.. which I'd prefer, but I'm using CentOS because my job uses it.. and CentOS is pro-rpm! However! My point is.. since I use CGI, I shouldn't need to worry about MPM, moot point.. but the other question about setting up an nginx proxy being a good idea for a high number of requests??
They are specifically referring to
threaded MPMs, which mpm-prefork is not. If you're using mpm-prefork, you will be OK, because it won't run into PHP's thread-safety shortcomings.
I don't know enough about CentOS to give specific recommendations, but I generally avoid premature optimization. Outside of a few specific cases (like adjusting MaxClients in apache's configuration), it's difficult to know what problems you're going to have before you have them. Best to dive in and get started.
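To give a concrete idea of the MaxClients case: a prefork tuning block in httpd.conf might look something like this. The numbers here are purely illustrative; size MaxClients to roughly (RAM you can spare) divided by (your per-process PHP footprint), or Apache will happily swap your box to death.

```apache
# Illustrative prefork tuning -- adjust for your own RAM and workload.
<IfModule prefork.c>
    StartServers          5
    MinSpareServers       5
    MaxSpareServers      10
    MaxClients           64
    MaxRequestsPerChild 4000
</IfModule>
```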
Quote:
I write my own applications, and I could easily write my own API with fallback for inserting/retrieving data from two locations.. however, I want the replication to be both seamless and transparent.. I do realize that there is a likelihood of a mismatch.. I guess my real question here is: what is the best way? I've never done a failback before..

I purchased 2 years of hosting with Linode after doing 2 weeks of research on which VPS to use. And I was a bit horrified when, after one month, my VPS was down for 3-4 hours. So.. this is why I'm now very interested in failback technology. But I'm not spending anything until I can verify failover works.
Everything I know about MySQL replication I learned from
MySQL High Availability... it is a pretty good read on the techniques and tools of replication in MySQL.
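The moving parts are simpler than they sound. As a rough sketch, pointing a replica at a primary looks like the below (MySQL 5.x syntax; the host, credentials, and binlog coordinates are placeholders you'd take from SHOW MASTER STATUS on the primary):

```sql
-- Run on the replica; values shown are examples only.
CHANGE MASTER TO
  MASTER_HOST='203.0.113.10',
  MASTER_USER='repl',
  MASTER_PASSWORD='secret',
  MASTER_LOG_FILE='mysql-bin.000001',
  MASTER_LOG_POS=4;
START SLAVE;
-- Then check Slave_IO_Running / Slave_SQL_Running:
SHOW SLAVE STATUS\G
```

The book covers the parts that actually hurt: what to do when the coordinates drift, and how to fail back without losing writes.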
My other secret tool is
Chef. MySQL will keep your data synchronized between locations (you DO store all of your data in your database, right?), and Chef will keep your configuration and operational state synchronized among your servers. SVN or git are excellent for keeping your code straight, too.
(There are things other than Chef, too, like Puppet... we just use Chef because we use Chef. Anything is better than nothing.)
The other problem is how to direct traffic to the right location. Multiple A records in the DNS will do the trick, but they won't automagically be withdrawn when a site goes down. You can either wing something yourself (the
Linode API can help with this), or go with something like
DNS Made Easy.
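If you do wing it yourself, the core of it is just a health check feeding a DNS update. Here's a minimal sketch of the decision logic only; the hosts are examples, select_active_records is a hypothetical helper of my own naming, and the actual record update would go through the Linode (or DNS Made Easy) API:

```python
# Hypothetical sketch: pick which A records to publish, given
# per-host health-check results ({ip: is_up}).

def select_active_records(health):
    """Return the IPs of healthy hosts. If every host looks down,
    keep all records published -- an empty zone is worse than a
    possibly-stale one."""
    healthy = [ip for ip, ok in health.items() if ok]
    return healthy if healthy else list(health.keys())

if __name__ == "__main__":
    checks = {"203.0.113.10": True, "198.51.100.20": False}
    print(select_active_records(checks))  # -> ['203.0.113.10']
```

The "keep everything if everything is down" fallback matters: if your monitoring box loses its own connectivity, you don't want it to withdraw every record and take you offline harder than the outage itself.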
Quote:
So you're saying.. it can work, but it might not work well depending on each application? That makes a lot of sense, and I can probably look to find jail-friendly apps in the meantime to see if it's even worth it really. I want to use my router as a failover, but I don't want to leave it completely open, that's why I ask about jailed environments.
A chroot jail only limits the damage after a security breach, and arguably not by much. Not my first choice of ways to spend my security time.

Quote:
I got a VPS specifically to learn more about Linux and web hosting technologies. I personally think that Citadel is designed for newbies, Exim/Postfix are more accepted/updated/mainstream, and Qmail is crap. I want to use Qmail because Plesk 9 uses it.. and where I work we use Plesk.
Best of luck to thee!
I personally use Exim for most servers (the ones that just need to send mail out), Postfix for more robust needs, and Google Apps (or sometimes just Gmail) for receiving mail. This seems like a decent combo.
_________________
Code:
/* TODO: need to add signature to posts */