Linode Community Forums
Author Message
PostPosted: Tue May 15, 2012 8:04 am 
Offline
Newbie

Joined: Tue May 15, 2012 7:51 am
Posts: 4
Hi, I've had my Linode 512 for a year, but I'm still a newbie as far as setup and optimisation go.

I'm launching an iPhone game that will be turn-based over the internet (think Draw Something but far far less game data) and I'm using my Linode to deal with all the server side stuff. My set up is Ubuntu 10.04 with Apache and PHP.

All the server side scripts the game uses are PHP and the database is external on Amazon DynamoDB.

Could my Linode 512 cope with 1 request per second? How about 10, or 30? The PHP scripts will be around 1 kB or less, and the amount of data going in and out will be just bytes.

Obviously if the game is successful and takes off then I'm prepared to migrate to a higher-spec Linode, but what can I expect to get out of my current setup? Any optimisation tips? Any advice would be very helpful. Thanks.


PostPosted: Tue May 15, 2012 8:55 am 
Offline
Senior Member

Joined: Fri Feb 17, 2012 8:20 pm
Posts: 365
AFAIK it really depends on what runs on it. Have a look at ApacheBench; you can use it to benchmark by sending a lot of requests, so you can see how they impact the load :)
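For example, a typical ApacheBench run might look like this (the URL and concurrency level are placeholders; point it at a staging copy of one of your PHP scripts, not at production):

```shell
# Send 1000 requests total, 10 concurrently, to a hypothetical endpoint.
ab -n 1000 -c 10 http://www.example.com/game/move.php

# In another terminal on the server, watch the load it generates.
uptime
```

ab reports requests per second and latency percentiles, which gives a rough ceiling for your current setup.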


PostPosted: Tue May 15, 2012 10:08 am 
Offline
Senior Member

Joined: Sun Mar 07, 2010 7:47 pm
Posts: 1970
Website: http://www.rwky.net
Location: Earth
Quote:
database is external on Amazon DynamoDB.

That will be your bottleneck: communicating from the Linode to Amazon over the network will be the slowest part. Consider adding a caching layer on the Linode itself; it will allow you to serve many more requests.

_________________
Paid support
How to ask for help
1. Give details of your problem
2. Post any errors
3. Post relevant logs.
4. Don't hide details, e.g. your domain; it just makes things harder
5. Be polite or you'll be eaten by a grue


PostPosted: Tue May 15, 2012 10:12 am 
Offline
Newbie

Joined: Tue May 15, 2012 7:51 am
Posts: 4
Thanks guys. I'm just setting up nginx and FastCGI, as I believe this can deal with more requests than Apache.

What do you mean by a cache layer? If every request will be dynamic, how will this help, and can you point me in the right direction?


PostPosted: Tue May 15, 2012 10:18 am 
Offline
Senior Member
User avatar

Joined: Tue May 26, 2009 3:29 pm
Posts: 1691
Location: Montreal, QC
A 512 can handle between zero and one million hits per second. Where you fall within that range is entirely dependent on your architecture.

There will be a lot of latency making queries to a remote database over the internet (using Amazon DynamoDB from a Linode will be slow). Because of that, caching as much data locally as possible will help.


PostPosted: Tue May 15, 2012 11:50 am 
Offline
Senior Member

Joined: Sun Mar 07, 2010 7:47 pm
Posts: 1970
Website: http://www.rwky.net
Location: Earth
figgy wrote:
What do you mean by cache layer? If every request will be dynamic how will this help and can you point me in the right direction?


Without knowing your application details I can't make specific comments, but if there's a lot of read-only or rarely modified data your app pulls, caching that locally will help a lot.



PostPosted: Tue May 15, 2012 11:59 am 
Offline
Newbie

Joined: Tue May 15, 2012 7:51 am
Posts: 4
I see what you mean. Other than the local PHP scripts, nothing will be static. Every request will get dynamic game and user data.


PostPosted: Tue May 15, 2012 12:19 pm 
Offline
Senior Member
User avatar

Joined: Tue May 26, 2009 3:29 pm
Posts: 1691
Location: Montreal, QC
Right, but if every request by a specific user is pulling in their user details, and the user details aren't changing, then you can cache those locally using something like memcached or APC. It's dynamic data coming in, but not all of that dynamic data will have changed since the last time you requested it from the database.

For example, if every request done by user A results in you doing the equivalent of "select * from user where username='user A'", but the user table isn't changing unless you explicitly do an update, then you should be caching the response in PHP such that you do something like this (not real code or caching mechanism, just illustrative):

Code:
if ( empty($userCache[$username]) )
{
  $userCache[$username] = GetUserFromDB($username);
}

$userDetails = $userCache[$username];


You would then also update the local user cache any time you updated the user's details. Personally, I'm not an expert on this stuff: I normally work on projects small enough that the database is local to the same server, and I rely on the database and filesystem caches. But if your database server, instead of being on the same machine, is running on some remote platform in a different city, then this sort of caching is something you need to think about.

There are a variety of tools that let you store persistent data in memory that lasts between requests. APC is popular because it acts as a PHP accelerator on top of giving you this caching functionality.

EDIT: I should note that APC's memory caching is local: it's only a good idea if you have a single web server. If you have multiple web servers distributing the load, then memcached (which is a distributed cache) is required to keep data consistent between servers.
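To make that concrete, here's a minimal, self-contained sketch of the cache-aside pattern described above. This is not anyone's real code: GetUserFromDB() is a hypothetical stand-in for the DynamoDB query, and a plain array stands in for APC or memcached.

```php
<?php
// Plain array standing in for a real cache backend (APC, memcached, ...).
$userCache = array();
$dbCalls   = 0;

// Hypothetical stand-in for the real remote database query.
function GetUserFromDB($username)
{
    global $dbCalls;
    $dbCalls++;  // count round trips to the remote database
    return array('username' => $username, 'score' => 0);
}

// Cache-aside read: only hit the database on a cache miss.
function GetUser($username)
{
    global $userCache;
    if (empty($userCache[$username])) {
        $userCache[$username] = GetUserFromDB($username);
    }
    return $userCache[$username];
}

// Write path: update the database, then keep the cache consistent.
function UpdateUser($username, $details)
{
    global $userCache;
    // ... write $details to the remote database here ...
    $userCache[$username] = $details;
}

GetUser('user A');
GetUser('user A');
GetUser('user A');
echo $dbCalls, "\n";  // prints 1: two of the three reads were cache hits
```

With APC you'd swap the array for apc_fetch()/apc_store(); with memcached, Memcached::get()/Memcached::set(), ideally with a TTL so stale entries eventually expire.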


PostPosted: Tue May 15, 2012 1:02 pm 
Offline
Newbie

Joined: Tue May 15, 2012 7:51 am
Posts: 4
Thanks Guspaz for the thorough explanation. I understand now and will definitely look into whether such caching would benefit my set up.


PostPosted: Tue May 15, 2012 2:52 pm 
Offline
Senior Member

Joined: Mon Dec 07, 2009 6:46 am
Posts: 331
Guspaz wrote:
A 512 can handle between zero and one million hits per second.


Not quite. Assuming the shortest possible request header:

GET / HTTP/1.1\n
Host: www.example.com\n
Accept: text/html\n\n

which amounts to 60 bytes, and let's say the response is

HTTP/1.0 200 OK\n
Date: Fri, 31 Dec 1999 23:59:59 GMT\n
Content-Type: text/html\n
Content-Length: 0\n\n

which amounts to 104 bytes. Let's say, then, that the minimum request-response cycle is roughly 160 bytes.

At 50 Mbps, which is 6.25 MB/s, dividing by 160 bytes gives ca. 40k requests per second of dead traffic without content. And this does not include TCP ACKs and error resends (assuming full keepalive after the first request).
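A quick sanity check of that arithmetic, under the same assumptions:

```php
<?php
// Back-of-the-envelope: how many empty request/response cycles fit in the cap?
$linkBps     = 50 * 1000 * 1000;    // 50 Mbps outbound cap, in bits per second
$bytesPerSec = $linkBps / 8;        // 6,250,000 bytes/s, i.e. 6.25 MB/s
$cycleBytes  = 160;                 // 60-byte request + ~100-byte empty response
echo floor($bytesPerSec / $cycleBytes), "\n";  // prints 39062, i.e. ~40k/s
```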


*knocks over the sarcasm sign and runs for his life*


PostPosted: Tue May 15, 2012 3:31 pm 
Offline
Senior Member
User avatar

Joined: Tue May 26, 2009 3:29 pm
Posts: 1691
Location: Montreal, QC
If you want to get pedantic, Linode's limit is only on outbound, and the Date header isn't required, leaving us with 67 bytes, so doing one million responses per second requires 511 Mbps, and if you're sustaining that constantly, Linode can raise your limit to cover it (they raise the cap if the customer needs it) :P


PostPosted: Tue May 15, 2012 4:01 pm 
Offline
Senior Member

Joined: Mon Dec 07, 2009 6:46 am
Posts: 331
Guspaz wrote:
Linode can raise your limit to (they raise the limit if the customer needs it) :P


Yes, but I don't think they'll raise your limit to 500+ Mbps on a 512 node. And that's for the theoretical dead traffic. Any meaningful content payload would require much more. :wink:


PostPosted: Tue May 15, 2012 4:55 pm 
Offline
Senior Member
User avatar

Joined: Tue May 26, 2009 3:29 pm
Posts: 1691
Location: Montreal, QC
Azathoth wrote:
Guspaz wrote:
Linode can raise your limit to (they raise the limit if the customer needs it) :P


Yes, but I don't think they'll raise your limit to 500+ Mbps on a 512 node. And that's for the theoretical dead traffic. Any meaningful content payload would require much more. :wink:


Well, we're talking about roughly 160,000 gigabytes per month... If you throw $192,000 a year at Linode, I think they'll raise your limit to 500 Mbps for you ;)


PostPosted: Tue May 15, 2012 5:17 pm 
Offline
Senior Member

Joined: Mon Dec 07, 2009 6:46 am
Posts: 331
Guspaz wrote:
Well, we're talking about roughly 160,000 gigabytes per month... If you throw $192,000 a year at Linode, I think they'll raise your limit to 500 Mbps for you ;)


No, the question was how many hits per second a 512 node can sustain. You just turned that back around into 1M and then readjusted Linode's plans to accommodate a node at $192k/yr.

But even in that scenario, I still doubt Linode would do that for a 512 node. They'd sooner suggest you switch to a bigger node, assuming the hosts even have 10 Gbps NICs, because we're talking half of a 1 Gbit NIC just for 1M hits of dead traffic. Which would probably require switching readjustments.

Also, I doubt the free inbound would still apply at that level of traffic. And let's not forget there's a bandwidth cap for regular-sized nodes; bandwidth does not scale up beyond a 2GB node, and I guess there's a reason for that. :wink:

