Azathoth wrote:
In which case, back to my original question. If fair sharing is enforced, how's anyone a bad neighbor? If you're using all of the CPU, and I don't need it, why should I complain. If I start needing it, I can rely on fair sharing to get what "belongs" to me.
I think it's a question of equality vs. expectations. Yes, if multiple guests are all going full bore with CPU, they'll all at least get an equal share of the available cores they share[*]. So in that sense, the CPU is being shared equally.
However, in practice, especially at the lower Linode sizes, there are lots of guests on a host, and it's pretty rare for all of them to use maximum CPU simultaneously for long periods. Performance expectations end up being set by what actually happens rather than by the guarantee. The two are definitely different (e.g., on a Linode 512 you're only really guaranteed about 20% of a single core, assuming 20 guests per 4 cores[*]), but part of what makes Linode work so well is that you virtually always have much larger burst capacity available just due to the statistics of sharing.
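The guaranteed-share arithmetic can be sketched in a few lines. Note the numbers here (4 cores, 20 guests) are just the illustrative assumptions from the paragraph above, not actual Linode provisioning figures:

```python
# Back-of-the-envelope guaranteed CPU share under fair sharing.
# Host/guest counts are illustrative assumptions, not Linode specifics.

def guaranteed_share(host_cores: int, guests: int) -> float:
    """Fraction of a single core each guest is guaranteed if every
    guest demands CPU simultaneously and sharing is fair."""
    return host_cores / guests

# e.g., 20 small guests packed onto 4 cores:
share = guaranteed_share(host_cores=4, guests=20)
print(f"{share:.0%} of one core")  # 20% of one core
```

The gap between that 20% floor and the multi-core burst ceiling is exactly the "statistics of sharing" benefit: it only exists while most guests are idle most of the time.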
The concept of a "bad neighbor" is soft, so I don't think it can be precisely defined. But I do think there's an expectation that most Linodes can burst quite high CPU-wise, so if a small number of guests on a host are burning all available CPU (even if it's shared equally), the overall experience suffers. Taken to the extreme, if everyone tended to use 100% of the CPU they could get (even if shared), I suspect Linodes would be a whole lot less attractive.
In other words, over the long haul we all actually benefit when no one guest (or small number of guests) tries to take as much as possible, even if the result is fairly distributed. The exception, of course, is someone who really would use maximum CPU for a long stretch; that workload runs slower over the long term by not taking as much. But that's where the concept of a good or bad neighbor comes in. The upthread reference to the tragedy of the commons is reasonable, I think, if imperfect, since CPU isn't a resource that can be depleted except over short time periods.
The nice part of all this is that it usually "just works": do what you need and it all more or less evens out. But when you start talking about stuff like SETI (which will take whatever CPU it can get for absolutely as long as possible), that begins to break the statistical sharing that, in reality, we all benefit from.
-- David
[*] I believe that on a typical 8-core host, guests are each capped at a maximum burst of 4 cores, so it's possible that two guests pinned to disjoint subsets of 4 cores wouldn't interfere at all.
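The disjoint-subset idea in the footnote boils down to a set-overlap check. A minimal sketch, with hypothetical core assignments (on Linux a host might express these as cpusets or CPU affinity masks):

```python
# Sketch: two guests can contend for CPU only if their core sets overlap.
# Core assignments below are hypothetical, not actual host configuration.

def can_interfere(guest_a: set, guest_b: set) -> bool:
    """True if the guests share at least one physical core."""
    return bool(guest_a & guest_b)

# 8-core host, each guest burstable across 4 cores:
a = {0, 1, 2, 3}
b = {4, 5, 6, 7}   # disjoint from a: no CPU contention at all
c = {2, 3, 4, 5}   # overlaps both a and b

print(can_interfere(a, b))  # False
print(can_interfere(a, c))  # True
```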