In view of the recent issues at Atlanta...
A little while ago, we were inconvenienced by a series of DDoS attacks on Hurricane Electric (Fremont). I'm sure that The Planet (Dallas) has also had its problems.
The simple fact is that you will never get 100% uptime, anywhere. Buildings burn down (or, in the case of HE, fall into the San Andreas Fault) and other disasters can occur.
No matter how many redundant anythings you have in a data centre, you will get outages. If we, as users, have mission-critical sites/applications running, it falls to us to provide contingencies on top of those provided by Chris and Team, data centre staff, etc.
My approach is to have a redundant Linode, but NOT in the same data centre. I don't have an automatic failover system, but I dump my HE databases every night and transfer them over to The Planet. Likewise, any uploaded files get rsync'd over. These dumps/files are also copied down to the server in my office as part of the process. Hey, if we lost the USA, I could run the whole lot off my laptop, although I wouldn't like to say what sort of shape the InterWeb would be in!
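For anyone wanting to copy the idea, the nightly routine can be sketched as a small cron script. This is only an illustration, not my exact setup: the hostnames, paths, and the use of mysqldump are assumptions you'd replace with your own.

```shell
#!/bin/sh
# Illustrative nightly off-site backup, run from cron on the primary (HE) box.
# secondary.example.com and all paths below are placeholders.
set -eu

STAMP=$(date +%F)
DUMP_DIR=/var/backups/mysql

# 1. Dump all databases on the primary (assumes MySQL; adjust for your DB)
mkdir -p "$DUMP_DIR"
mysqldump --all-databases | gzip > "$DUMP_DIR/all-$STAMP.sql.gz"

# 2. Push dumps and uploaded files to the secondary in the other data centre
rsync -az "$DUMP_DIR/" secondary.example.com:/var/backups/mysql/
rsync -az /var/www/uploads/ secondary.example.com:/var/www/uploads/

# 3. The office server then pulls the same files as a third copy,
#    e.g. from its own cron job:
# rsync -az secondary.example.com:/var/backups/mysql/ /srv/offsite-backups/
```

The point is that each copy is made by a different machine, so no single compromised or dead host takes all three copies with it.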
If I get an outage that looks like it's going to persist, I load up the databases from the dumps at The Planet, change DNS (I have a short TTL set) and about half an hour later I'm running on the secondary.
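The "half an hour" comes from the TTL: resolvers cache your records for at most that long, so a short TTL bounds how stale the old address can be after you repoint it. A zone snippet along these lines (illustrative values, not my real zone) shows the idea:

```
; Illustrative BIND-style zone fragment -- names and IPs are placeholders
$TTL 1800                        ; 30 minutes: caches expire soon after a change
www   IN  A   203.0.113.10       ; primary (HE); swap to the secondary's IP on failover
```

The trade-off is more DNS queries in normal operation; for a small-business site that's usually negligible.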
There are more elegant ways in which this can be done - multiple replicated databases, round-robin DNS, etc., but these are not something that I or my clients (all small businesses) can afford.
What we should not do is to turn round and blame Linode when we ourselves have failed to identify and make contingencies for a single point of failure in a critical system.