I agree with some other comments to the effect that "cloud" is a bit too nebulous, but within the overall hosting/provider/VPS/cloud/whatever space, I think I mentally divide things along two primary axes:
- Granularity of computing resource (cpu, disk, network, etc...)
- Transparency of resource assignment (knowing where the resource is)
Both of these are a spectrum. Resources can be allocated anywhere from tiny fractions that have no direct correspondence to physical devices up through aggregation of multiple physical devices. And those resources can either clearly be associated with physical devices or any mapping to such physical devices can be completely obscured.
Now there are other features often touted in the "cloud", like self-healing or redundancy or transparent migration or whatever, but in general I find that the availability and impact of such features have a pretty good correlation to where on the spectrum of resource allocation the provider falls. If I'm using a service whose resource assignment is completely opaque, they're much more likely to be able to offer me a transparent migration/healing service since they can just move that assignment at any time without me noticing. Likewise with reallocating to a new level of resource.
If plotted in 2D, a physical PC I have on my desk might be considered to be at the low end of each metric (say the origin of the graph). That's also where a dedicated co-lo provider PC would be. It has absolute transparency of resource assignment - I can touch the hardware (or have the co-lo provider do so with their pair of hands). It has very coarse granularity of resources - exactly matching the physical hardware. Changing either requires an intensive effort on my part and moving real-world devices around.
I think the "cloud" (at least the ideal of its proponents) lies towards the other end of the spectrum - say way out there in quadrant 1 on the graph. That would be a provider who lets me allocate any resources (independently) at a very fine granularity, but who tells me nothing about where those resources are coming from or how they map to physical hardware. In that case making changes should be extremely low overhead, take effect instantly, and I won't even have to know if my resources get moved around. I don't think this provider exists yet, nor even can exist without real R&D and technology improvements (nor do I always think this is the best model, given real-world constraints), but it's probably the ideal of the "cloud". Of course, very little of what is sold as the "cloud" actually comes anywhere close to that ideal, though pure storage providers are probably closer than compute-server providers. But it's a cool place to be heading.
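As a toy sketch of that graph (the scores and provider placements here are just my illustrative guesses, not measurements of any real offering):

```python
# Toy model of the two axes: each offering gets a 0.0-1.0 score for
# resource granularity (how finely resources can be allocated) and for
# opacity of resource assignment (how hidden the mapping to physical
# hardware is). Placements are illustrative guesses only.
from dataclasses import dataclass


@dataclass
class Offering:
    name: str
    granularity: float  # 0 = whole physical machines, 1 = tiny fractions
    opacity: float      # 0 = you can touch the hardware, 1 = fully hidden


offerings = [
    Offering("desktop PC / dedicated co-lo", 0.0, 0.0),
    Offering("typical fixed-plan VPS", 0.4, 0.3),
    Offering("idealized 'cloud'", 1.0, 1.0),
]


def cloudiness(o: Offering) -> float:
    """Distance from the origin: a rough 'how cloud-like' score."""
    return (o.granularity ** 2 + o.opacity ** 2) ** 0.5


for o in sorted(offerings, key=cloudiness):
    print(f"{o.name}: {cloudiness(o):.2f}")
```

Nothing deep here - it just makes the point that "cloud-like" is a position on a plane (distance from the origin, if you like), not a yes/no label.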
To the original question of comparing a VPS and "cloud server", I think you have to identify how far apart the two choices are on my hypothetical graph. I think some current "cloud" offers barely differ from other current "VPS" offers, while other comparisons vary more significantly in one or both of the metrics. I don't think one answer fits all.
Specifically for Linode, I'd consider it close to the origin of my graph, so to me it's not really "cloud"-like. The same would hold true for most VPS systems (less so for those using a SAN structure). But I still currently prefer Linode's model for most uses. While there are only a few specific resource levels (and different resource types are coupled together), there's a lot of transparency as to how those resources map to real-world hardware.
The problem for me is that the more cloud-like you get in today's world, the more moving parts there are and the more things can go wrong, with less transparency into why. When it works it can be great and have very interesting features, but when it breaks down, it's tricky. As an administrator, simplicity does have its advantages too. Not to mention that right at the moment, true cloud-like behavior (by my two metrics) costs more to deliver, and thus costs more to purchase. So economics needs to be weighed in for real-life choices.
For example, SAN usage becomes important to help separate storage and CPU resources for finer-grained allocation, but that leads to new failure modes, coupled with less transparency as to what's going on, which makes it harder for me as a user to deal with outages. Technology also isn't that good yet at melding multiple discrete pieces of hardware (say different PCs) together in a distributed fashion, so you'll still hit boundaries of CPU resource when you hit physical hardware "chunks".
Live migration is another thing that could be a killer feature (ask for more CPU than the current physical host has, and just migrate to a bigger host without any downtime), but I don't think it's mainstream yet. So while it interrupts more, I find myself more confident knowing with Linode just where I am migrating from and to, and the steps the process takes.
The "far upper right" cloud part of my graph will obviously continue to improve, but whether or not it meets all the goals of cloud proponents is still unclear. I think longer term we may need to change how hardware works. For example, some providers let you allocate things like CPU in small chunks, but what if the underlying hardware used to run the system itself was simply a humongous CPU farm (lots of little CPU modules in a big box), so you weren't stuck with an n-core box as a step function? If a "cloud PC" was really itself a combination of very granular CPU, memory and disk subsystems, we might get better failure modes while still having a highly granular service definition. You might say we can do that today by just using individual PCs as the subsystems, but then we need much better algorithms and techniques to tie those PCs together as a single compute fabric.
-- David