My favorite approach so far is to use a configuration management system -- chef, puppet, ansible, whatever -- to define what each of your servers does. New ones can be deployed in a hurry. I tend to use fabric and libcloud to handle server deployment, so deploying the cluster on another provider involves changing very little. As a bonus, your servers are now described by source code. This is sort of an evolved "install script" concept, but one that lasts for the entire lifecycle of a server.
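For reference, the create-a-node path through a libcloud-style compute driver looks roughly like this. It's a sketch against any object exposing libcloud's `list_sizes`/`list_images`/`create_node` methods, so the driver, credentials, and node name are placeholders, not a working deployment script:

```python
# Sketch of provider-agnostic node creation. `driver` is anything with the
# libcloud compute-driver shape (list_sizes, list_images, create_node);
# with real libcloud you'd obtain one via
#   libcloud.compute.providers.get_driver(Provider.EC2)(key, secret)
# and swapping providers is mostly a matter of changing that one line.
def deploy_node(driver, name):
    # Picking the first size/image keeps the sketch short; a real script
    # would select these explicitly (or pull them from configuration).
    size = driver.list_sizes()[0]
    image = driver.list_images()[0]
    return driver.create_node(name=name, size=size, image=image)
```

Once create_node returns, you'd hand the new node's address to fabric to run your config-management bootstrap.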
Deploy new servers to replace old servers once in a while. It might be interesting to try never rebooting your servers: instead, instantiate a new one, then destroy the old one. Your cluster is a multi-cellular organism.
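That replace-instead-of-reboot cycle can be sketched the same way: bring up the successor, check that it's alive, and only then destroy the old node. The driver is again duck-typed to libcloud's shape (`create_node`/`destroy_node`), and the health check is a placeholder you'd swap for something real (an HTTP probe, a fabric run, etc.):

```python
def replace_node(driver, old_node, new_name, healthy):
    """Create a successor for old_node, and only destroy old_node once
    the new node passes the caller-supplied health check."""
    size = driver.list_sizes()[0]
    image = driver.list_images()[0]
    new_node = driver.create_node(name=new_name, size=size, image=image)
    if not healthy(new_node):
        # Roll back: keep the old node, tear down the failed newcomer.
        driver.destroy_node(new_node)
        raise RuntimeError("new node failed health check; old node kept")
    driver.destroy_node(old_node)
    return new_node
```

The ordering is the whole point: the old cell isn't killed until its replacement is demonstrably serving.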
I've had good luck with DNS Made Easy for general customer-facing domains, and Amazon Route 53 via libcloud for the server FQDN domain.
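Re-pointing a server FQDN after a replacement is one create-or-update call against the DNS provider. libcloud's DNS API has the same driver flavor (`list_records`, `create_record`, `update_record`); the sketch below is duck-typed against that shape, with records as plain dicts for brevity (real libcloud records are objects and the type is a `RecordType` enum), and the zone/record names are placeholders:

```python
def upsert_a_record(driver, zone, name, ip):
    """Point `name` in `zone` at `ip`, creating the A record if it
    doesn't exist yet. Mirrors libcloud's list_records / create_record /
    update_record calls."""
    for record in driver.list_records(zone):
        if record["name"] == name and record["type"] == "A":
            return driver.update_record(record, data=ip)
    return driver.create_record(name=name, zone=zone, type="A", data=ip)
```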
Your data -- files, databases, etc -- still needs to be handled somehow, but Amazon S3 is quite workable both for serving general static content and for storing (encrypted, presumably) database dumps.
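The database-dump half of that is a small pipeline: dump, compress, encrypt, upload. Here's a sketch that does the compress step with stdlib gzip and leaves encryption and the actual S3 PUT as caller-supplied callables (in practice something like GPG and an S3 client's put call); the timestamped key scheme is my own placeholder:

```python
import gzip
from datetime import datetime, timezone

def backup_dump(dump_bytes, db_name, encrypt, upload):
    """Compress a raw database dump, run it through the caller-supplied
    encrypt function, and hand it to the caller-supplied upload function
    under a timestamped key. encrypt/upload stand in for e.g. GPG and
    an S3 PUT."""
    stamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")
    key = "dumps/{0}-{1}.sql.gz.enc".format(db_name, stamp)
    blob = encrypt(gzip.compress(dump_bytes))
    upload(key, blob)
    return key
```

Compressing before encrypting matters here: encrypted output is effectively incompressible, so the other order wastes both bandwidth and storage.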
A common pattern here is diversity: relying on one provider for everything (even Linode!) is just plain silly. If your domain registrar, DNS host, mail provider, and VPS provider are the same company, you're going to have a bad time.
_________________
Code:
/* TODO: need to add signature to posts */