Linode Community Forums
PostPosted: Fri Jul 23, 2010 5:52 pm 
Senior Member

Joined: Sat Mar 28, 2009 4:23 pm
Posts: 415
Website: http://jedsmith.org/
Location: Out of his depth and job-hopping without a clue about network security fundamentals
kvutza wrote:
3) Developers oriented services: SourceForge, Launchpad, ...

Now that's interesting. Your definition of cloud includes SourceForge?

_________________
Disclaimer: I am no longer employed by Linode; opinions are my own alone.


PostPosted: Fri Jul 23, 2010 6:00 pm 
Senior Member

Joined: Wed May 13, 2009 1:18 am
Posts: 681
I agree with some other comments to the effect that "cloud" is a bit too nebulous, but within the overall hosting/provider/vps/cloud/whatever space, I think I mentally divide things along two primary axes:
  1. Granularity of computing resource (cpu, disk, network, etc...)
  2. Transparency of resource assignment (knowing where the resource is)
Each of these is a spectrum. Resources can be allocated anywhere from tiny fractions that have no direct correspondence to physical devices up through aggregations of multiple physical devices. And those resources can either be clearly associated with physical devices, or any mapping to such physical devices can be completely obscured.

Now there are other features often touted in the "cloud", like self-healing or redundancy or transparent migration or whatever, but in general I find that the availability and impact of such features have a pretty good correlation to where on the spectrum of resource allocation the provider falls. If I'm using a service whose resource assignment is completely opaque, they're much more likely to be able to offer me a transparent migration/healing service since they can just move that assignment at any time without me noticing. Likewise with reallocating to a new level of resource.

If plotted in 2D, a physical PC I have on my desk might be considered to be at the low end of each metric (say the origin of the graph). That's also where a dedicated co-lo provider PC would be. It has absolute transparency of resource assignment - I can touch the hardware (or have the co-lo provider do so with their pair of hands). It has very coarse-grained resource resolution - exactly matching the hardware. Changing either requires an intensive effort on my part and moving real-world devices around.

I think the "cloud" (at least the ideal of its proponents) lies towards the other end of the spectrum - say way out there in quadrant 1 on the graph. That would be a provider who lets me allocate any resources (independently) at a very fine granularity, but who tells me nothing about where those resources are coming from or how they map to physical hardware. In that case making changes should be extremely low overhead, can take effect instantly, and I won't even have to know if my resources get moved around. I don't think this provider exists yet, nor even can without real R&D and technology improvements (nor do I always think this is the best model, given real-world constraints), but it's probably the ideal of the "cloud". Of course, very little of what is sold as the "cloud" actually comes anywhere close to that ideal, though pure storage providers are probably closer than compute-server providers. But it's a cool place to be heading.

To the original question of comparing a VPS and a "cloud server", I think you have to identify how far apart the two choices are on my hypothetical graph. I think some current "cloud" offerings barely differ from other current "VPS" offerings, while other comparisons vary more significantly in one or both of the metrics. I don't think one answer fits all.
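The two-axis comparison above can be sketched as a tiny toy model. All of the placements and the scoring formula here are my own illustrative guesses, not anything David proposed numerically:

```python
# Toy sketch of the two-axis model: every offering gets a position on
# (granularity of allocation, opacity of resource assignment), both 0..1.
# 0 = whole physical machines you can touch; 1 = arbitrarily fine slices
# whose physical location is completely hidden. Placements are guesses.
offerings = {
    "desktop PC":        (0.0, 0.0),  # the origin: coarse and fully transparent
    "co-lo dedicated":   (0.0, 0.1),
    "typical VPS":       (0.3, 0.3),  # fixed plans, known host machine
    "SAN-backed VPS":    (0.4, 0.5),
    "idealized 'cloud'": (0.9, 0.9),  # fine-grained and fully opaque
}

def cloudiness(granularity, opacity):
    """Crude score: distance from the origin, scaled into 0..1."""
    return ((granularity ** 2 + opacity ** 2) ** 0.5) / (2 ** 0.5)

# Rank the offerings from least to most "cloud"-like under this model.
for name, (g, o) in sorted(offerings.items(), key=lambda kv: cloudiness(*kv[1])):
    print(f"{name:20s} granularity={g:.1f} opacity={o:.1f} score={cloudiness(g, o):.2f}")
```

Under this sketch a desktop PC scores 0 and the idealized "cloud" scores 0.9, which matches the post's point that a comparison only makes sense once you know how far apart two offerings sit on the graph.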

Specifically for Linode, I'd consider it close to the origin of my graph, so to me it's not really "cloud"-like. The same would hold true for most VPS systems (less so for those using a SAN structure). But I still currently prefer Linode's model for most uses. While there are only a few specific resource levels (and different resource types are coupled together), there's a lot of transparency as to how those resources map to real-world hardware.

The problem for me is that the more cloud-like you get in today's world, the more moving parts there are and the more things can go wrong, with less transparency. When it works it can be great and have very interesting features, but when it breaks down, it's tricky. As an administrator, simplicity does have its advantages too. Not to mention that right at the moment, true cloud-like behavior (by my two metrics) costs more to deliver, and thus costs more to purchase. So economics needs to be weighed in for real-life choices.

For example, SAN usage becomes important to help separate storage and CPU resources for finer-grained allocation, but that leads to new failure modes, coupled with less transparency as to what's going on, which makes it harder for me as a user to deal with outages. Technology also isn't that good yet at melding multiple discrete pieces of hardware (say different PCs) together in a distributed fashion, so you'll still hit boundaries of CPU resource at physical hardware "chunks".

Live migration is another thing that could be a killer feature (ask for more CPU than the current physical host has, and just migrate to a bigger host without any downtime), but I don't think it's mainstream yet. So while it interrupts more, I find myself more confident knowing with Linode just where I am migrating from and to, and the steps the process takes.

The "far upper right" cloud part of my graph will obviously continue to improve, but whether or not it meets all the goals of cloud proponents is unclear quite yet. I think longer term we may need to change how hardware works. For example, some providers let you allocate things like CPU in small chunks, but what if the underlying hardware used to run the system was simply a humongous CPU farm (lots of little CPU modules in a big box), so you weren't stuck with an n-core box as a step function? If a "cloud PC" was really itself a combination of very granular CPU, memory, and disk subsystems, we might get better failure modes while still having a highly granular service definition. You might say we can do that today by just using individual PCs as the subsystems, but then we need much better algorithms and techniques to tie those PCs together as a single compute fabric.

-- David


PostPosted: Fri Jul 23, 2010 9:05 pm 
Senior Newbie

Joined: Thu Apr 21, 2005 9:32 am
Posts: 11
Website: http://www.tangloid.net/
jed wrote:
kvutza wrote:
3) Developers oriented services: SourceForge, Launchpad, ...

Now that's interesting. Your definition of cloud includes SourceForge?


Well, they are at least a "potential cloud". I do not care what the shell servers are or whether they vary, and similarly for the svn, download, etc. servers.


PostPosted: Sat Jul 24, 2010 7:25 am 
Senior Newbie

Joined: Sat Jun 02, 2007 1:11 pm
Posts: 8
My current billing statement at a well-known cloud provider shows a balance of 133 USD for 9 days of usage and 160 GB of traffic.

I believe I could handle 5 times the same load at half the price with a Linode 2048.
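The arithmetic behind that claim is easy to check. The cloud bill ($133 for 9 days) comes from the post; the Linode 2048 monthly price here is an assumption (roughly $80/month was the ballpark in mid-2010), so treat this as a back-of-the-envelope sketch:

```python
# Back-of-the-envelope cost comparison. The cloud figures are from the
# post; the Linode price is an assumed ballpark, not a quoted rate.
cloud_bill = 133.0           # USD for 9 days, incl. 160 GB of traffic
days_used = 9
linode_2048_monthly = 80.0   # assumed flat monthly price, ca. 2010

# Extrapolate the metered cloud usage to a 30-day month.
cloud_monthly_rate = cloud_bill / days_used * 30

print(f"cloud, extrapolated to a month: ${cloud_monthly_rate:.0f}")
print(f"Linode 2048 flat rate (assumed): ${linode_2048_monthly:.0f}")
print(f"ratio: {cloud_monthly_rate / linode_2048_monthly:.1f}x")
```

At those numbers the metered cloud usage works out to roughly $443/month-equivalent, over 5x the assumed flat Linode rate, which is consistent with the poster's "5 times the load at half the price" intuition. Of course, the whole point of the metered model is that you only pay for the 9 days, so the comparison cuts both ways.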

So why did I not host this event on a Linode VPS?
Maybe it is not technically correct, but I felt my VPS would affect all the customers on the same host negatively during peak times, so I decided to host on the cloud. With a cloud server I don't have this feeling: I rent the share for a few days, use it to its maximum power, and terminate it.

So in my opinion there is no technical difference between a Linode VPS and a cloud-type server.
But there is a psychological difference in the eyes of customers.


PostPosted: Sat Jul 24, 2010 8:24 am 
Senior Member

Joined: Sat Aug 30, 2008 1:55 pm
Posts: 1739
Location: Rochester, New York
If it's one of the well-known "cloud providers" I'm thinking of, they're no different than Linode, being a shared hardware+Xen situation. In other words, you fell for the marketing. :-)

Linode does a pretty good job of managing resource contention behind the scenes. As long as there isn't wanton disregard for the system as a whole (e.g. swap-thrashing, Spaceheating@Home), my rule of thumb is that any productive use of available resources is fair game. Even if it's just for a few days.

Perhaps it's the community aspect. You see other Linode customers as humans, but not so much the other cloud provider customers. But the psychology of business decisions is neither here nor there. :-)


PostPosted: Mon Jul 26, 2010 1:16 pm 
Senior Newbie

Joined: Thu Apr 21, 2005 9:32 am
Posts: 11
Website: http://www.tangloid.net/
A few more notes on cloudy features. It may be that some of them are already here, but I do not know about that.

1) The ability to load homegrown images (via Eucalyptus), to snapshot the current system, and to have zero to N instances currently running off that snapshot.

2) Provide instance types that:
a) may not use too much CPU or HDD (the current state)
b) may use any amount of CPU, just without thrashing the HDD
c) get a private cloud, with any amount of CPU usage and HDD thrashing

3) Join cloud alliances, advertise your shiny LinodeCloud, make profit.


A bit off-topic:
Personally, I would like to have some solid anti-HDD-thrashing monitoring. Sometimes I feel like somebody is using twice as much RAM (for a default LAMP setup) or something like that.
And a local anti-virus/anti-spam service would be great too. Mail checking does a fair amount of HDD loading as well, and thus leans toward HDD thrashing.


PostPosted: Mon Jul 26, 2010 2:22 pm 
Senior Newbie

Joined: Sat Jun 02, 2007 1:11 pm
Posts: 8
When can we apply for a cloud hosting account at linodecloud.com?


PostPosted: Mon Jul 26, 2010 3:43 pm 
Senior Member

Joined: Fri Dec 07, 2007 1:37 am
Posts: 385
Location: NC, USA
bor wrote:
When can we apply for a cloud hosting account at linodecloud.com?

Where they will charge twice as much for the same service?


PostPosted: Tue Jul 27, 2010 8:51 am 
Junior Member

Joined: Mon Feb 22, 2010 9:40 pm
Posts: 37
Stever wrote:
bor wrote:
When can we apply for a cloud hosting account at linodecloud.com?

Where they will charge twice as much for the same service?

Sign me up!


PostPosted: Tue Jul 27, 2010 2:14 pm 
Senior Newbie

Joined: Sat Jul 17, 2010 1:32 pm
Posts: 7
You must, I believe, sign up for cloud hosting with a data glove. It's the future of cyberspace.


PostPosted: Thu Jul 29, 2010 3:25 am 
Junior Member

Joined: Wed Apr 28, 2010 10:33 pm
Posts: 41
I think Rackspace are responsible for muddying the term cloud a bit. They have both VPS and high-availability shared hosting services, and use the word 'Cloud' to describe both.

I think the real difference between VPS and cloud is that VPS is a more specific term than cloud.

Just like 'chisel' is a more specific term than 'pointy thing'.

...

To add more to the discussion:

A VPS does things that high-availability/clustered shared hosting (aka "cloud sites") cannot do, in that it provides a fully virtualised server. You cannot cluster a virtualised machine over multiple physical machines; you can only cluster a higher-level environment, where you have less control over the hardware, installation of software, choice of operating system, etc., and what you have is essentially glorified shared hosting. I don't need or want Linode to move in that direction.

As for whether Linode could get away with calling its existing Xen VPS service a 'Cloud' service, that's fine with me. I don't care what it's called as long as it's a virtual server, and it seems other people are using the term 'cloud' with similar services.


PostPosted: Fri Jul 30, 2010 7:12 pm 
Senior Member

Joined: Sat Dec 13, 2003 12:39 pm
Posts: 98
Ok I see another angle here.

To regular consumers, the cloud is the thing that sits out there and does stuff for them over the network and is always on. When I have a thin client app that stores part of its data in my server(s), so that my users can log in to other devices with this thin client and access their stuff, they describe this situation as their data living in the cloud, and they describe my app as being cloud-based.

They don't care if the cloud-enabled app is storing it on a VPS, a dedicated server, or trained hamsters with abacuses. They don't know what scalability is, but they know their friends can sign up too and also have their stuff in the cloud. They know it sits out there somewhere, somewhere above and nebulous, and it's there when they need it from multiple devices/clients.

This term was used before for telco stuff. When you make a phone call, it goes through the cloud, they say, but only industry people would have called it that decades ago.

With telephony, there is an elaborate international switching system with a complex mix of hardware, facilities, companies, contracts, etc., that enable people to make phone calls from any device to any other device nearly instantly, through that cloud. I don't care what's in the cloud, I just care that I can almost always get to it instantly from a very diverse set of locations/devices.

Applying this concept to the internet is very natural. In diagrams the internet itself is often depicted as a cloud. Instead of the old telco cloud that just supports phone calls through the cloud, we have specialized services in the cloud that compute and store and possibly do other things.

So in my mind this isn't a technology stack question. The only time I see this term applied to technology stacks is when hosting companies want to leverage the cloud hype to sell something, and that's why nobody can agree on the meaning there. Pretty much anything could enable a cloud-based service, it's just that some technology works better than others to enable a particular use.

But from a user perspective, the concept of a cloud is actually very consistent, in my mind.

In the utility computing case, the cloud-based service is one that supplies on-demand computing resources, and the users are developers. This doesn't mean the developers need to themselves offer cloud-based services. They might or they might not. It just means that the users, who are developers in this case, can dial up the service, get computing resources, use them, and turn them off, like making a phone call.

A VPS lends itself to this use because it's quick and cheap to provision a new one. But keeping average setup/teardown times low will always require some stock of unused VPS nodes. Regular servers could work just as well, with some sort of fast custom setup/teardown process; it's just that using them might result in higher costs that might not be justified.

But again the hardware stack shouldn't matter.

There are several services that have mobile phones wired up in racks so they can be interacted with remotely. You pay by the minute to use them to test your mobile app. I would call this a cloud-based service. Whenever I want to test a mobile app, I dial it up, tell it which type of phone I want, and run my app on it; when I'm done, I disconnect and am no longer billed. Of course they have to stock a lot of phones, track which types are in high demand, etc. But in general, this is in my mind phone-testing in the cloud.

A VPS is totally analogous to a mobile phone in the above test rack. The VPS itself has nothing to do with being a cloud, it is just a resource/service that some people can choose to provision for and offer using cloud-based practices.
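The dial-up / use / disconnect pattern described above, where the resource could equally be a test phone or a VPS, can be sketched as a tiny lifecycle model. The class name, the per-minute rate, and the billing rule are all made up for illustration; real providers meter in their own ways:

```python
# Minimal sketch of a metered, on-demand resource: billed only while
# checked out, regardless of what kind of resource it is. All names
# and rates here are hypothetical.
class MeteredResource:
    """A resource (test phone, VPS, ...) billed per minute of use."""

    def __init__(self, rate_per_minute):
        self.rate = rate_per_minute
        self.minutes = 0
        self.active = False

    def provision(self):
        # "Dial up" the service: fast setup is what makes the model work.
        self.active = True

    def use(self, minutes):
        if not self.active:
            raise RuntimeError("resource not provisioned")
        self.minutes += minutes

    def terminate(self):
        # "Disconnect": billing stops and the final bill is settled.
        self.active = False
        return self.minutes * self.rate

# Same lifecycle whether the resource is a rack-mounted phone or a VPS.
phone = MeteredResource(rate_per_minute=0.10)
phone.provision()
phone.use(45)               # run the mobile-app test for 45 minutes
bill = phone.terminate()
print(f"billed: ${bill:.2f}")
```

The point of the analogy survives in the code: nothing in the lifecycle depends on what the resource physically is, which is why, as the post says, the VPS itself has nothing to do with "being a cloud".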

In my mind anyway..

