Linode Community Forums
PostPosted: Thu Dec 06, 2007 12:19 pm
Senior Newbie

Joined: Fri Nov 30, 2007 11:07 pm
Posts: 8
Hi,

Just got my linode set up recently with Debian 4, Apache, MySQL, and Ruby on Rails with Mongrel, etc., and I'm very happy with everything.

I am thinking about the future already, and wondering whether I should consider getting another linode and cloning my existing one before I get too far down the line with hosting websites for people, or whether I should just beef up the linode I have if and when that is required.

One thing I am considering: I want to run a Subversion server on my linode (I already am) and store all the code for my projects there. I could set up a separate linode for this, use it as a test server, and set up Capistrano to deploy to my 'production' server when everything is ready; or I could just do it all on the one linode and expand as required.

I quite like the notion of separation, but I feel the test server with Subversion would get very little use and may not be worth it financially. On the other hand, it may be a safer way to work, with fewer potential problems.

Does anyone have an opinion on this? I know it all sounds a bit airy-fairy, but I'm just wondering if anyone has been through this scenario of late.

Thanks,
Paul


PostPosted: Thu Dec 06, 2007 8:18 pm
Senior Member

Joined: Sun Nov 30, 2003 2:28 pm
Posts: 245
If by "hosting websites for people" you mean actual paying customers, I think you should have a development/test server on a separate linode. OTOH, if you mean "friends who sometimes buy me a beer", then it's probably fine to have one big server.

_________________
The irony is that Bill Gates claims to be making a stable operating system and Linus Torvalds claims to be trying to take over the world.
-- seen on the net


PostPosted: Thu Dec 06, 2007 8:43 pm
Senior Member

Joined: Thu Jun 21, 2007 7:13 pm
Posts: 100
Website: http://neo101.org
I have two Linode 300s: one for production and one for development. If you can afford $20 extra per month for a development environment, I highly recommend it. That way you can clone your production environment and, with a few clicks, copy the whole image to your development linode. Once that is finished, you can try whatever experiments you want on the development server without risking any mistakes. Linode.com recently added Dashboard functionality for linking two accounts, so you can transfer images as described above very easily.

These days I never do anything on my production server, no matter how trivial it seems; I try all my changes on my development server first. I develop a lot faster, since nothing I do risks breaking anything. They don't charge bandwidth quota for transferring an image from one linode account to the other, but it takes a little longer to transfer an image from one host to another than it takes to make a local copy, so I do host-to-host transfers only when I mess something up and my latest backup isn't very recent.

Five years ago I worked at a company that did all its development live. We were four programmers, and downtime on our web service could be 30-60 minutes per day because we always managed to break /something/.

And if you have, let's say, a 600 Linode for $40/month, you still only need a 300 Linode for $20/month for your development environment, because you would be the only user on the development version of your site.

I made a Perl script that looks at the output of "ifconfig" to see what IP DHCP gave it. Once the script knows what IP it has, it can run "ifup eth0:3", for example, since it knows it has been booted from the development account (I have two IPs for each Linode). It can also swap in a different cron file with "rss2email" disabled; it was quite annoying to get two emails every time one of my RSS feeds got updated, one from the production environment and one from the development.
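The idea boils down to something like the sketch below (harmone's actual script is in Perl; this is a shell rendition, and the IP addresses, interface alias, and crontab path are all made up for illustration):

```shell
#!/bin/sh
# Rough sketch of the boot-time environment check described above.
# The addresses are illustrative (TEST-NET range), not real ones.
PROD_IP="203.0.113.10"
DEV_IP="203.0.113.20"

# Map an IP address to the environment it belongs to.
env_for_ip() {
  case "$1" in
    "$PROD_IP") echo production ;;
    "$DEV_IP")  echo development ;;
    *)          echo unknown ;;
  esac
}

# On the real box you would detect the address DHCP handed out, e.g.:
#   MY_IP=$(ifconfig eth0 | awk '/inet addr/ {sub("addr:","",$2); print $2; exit}')
MY_IP="$DEV_IP"   # stand-in so the sketch runs anywhere

if [ "$(env_for_ip "$MY_IP")" = "development" ]; then
  echo "dev box detected"
  # ifup eth0:3              # bring up the extra alias
  # crontab /etc/cron.dev    # install the crontab with rss2email disabled
fi
```

The real actions (ifup, crontab) are left commented so the sketch is safe to run anywhere.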

The documentation says that one /must/ edit the cron file using the CLI command "crontab -e" and /not/ edit the actual crontab file by hand. I tried ignoring that instruction to see if it would work anyway, and it seems it does. So ignore the crontab instruction at your own risk.

So I would run all services on one linode account and use the other for only development/testing stuff.

Oh, and don't forget to shut down your production server before copying it to your other linode account. I tried ignoring that warning too, just to see what would happen, and got two rows of corrupted MySQL data in one of my MediaWiki tables. So from that day on I always shut my production server down before making the copy. I always make a local copy first and then transfer it to my development account, so I can boot the production server back up as fast as possible.

Oh, and don't allocate all your available disk space to one image if you don't need that much space; a smaller image gets copied and transferred faster. You can always shut down your production server and resize the image if you need more space later. That is, if you use ext2 or ext3, in which case the Dashboard can do it for you; if you have a different type of partition, I don't think the Dashboard can do it, and you'd have to do it yourself. But I may be wrong, since I only use ext3 partitions and haven't tried to ignore that particular warning yet.


Post subject: 2 it is then
PostPosted: Thu Dec 06, 2007 10:49 pm
Senior Newbie

Joined: Fri Nov 30, 2007 11:07 pm
Posts: 8
Thanks for your replies. I think I will clone my existing server to another linode soon, once I have all the software I want configured, so I can have a play server to make mistakes on that also acts as a Subversion host.

On the prod box, however, would you go with one box and keep upgrading disk space, RAM, bandwidth, etc. until you can go no further and then get a new linode, or go with multiple smaller linodes?

i.e. three or four 300 or 450 linodes, or just keep going until you get everything running on a 1200.

Do you feel there is any real difference or advantages to be gained from one or the other?

Thanks again for your replies,
Paul


Post subject: Beer money or real money
PostPosted: Thu Dec 06, 2007 10:53 pm
Senior Newbie

Joined: Fri Nov 30, 2007 11:07 pm
Posts: 8
Forgot to answer the first point; I intend on hosting for customers, as well as some of my own stuff.

The money may end up as beer tokens in the end but I hope to make more than just a beer or two :)

Cheers


PostPosted: Fri Dec 07, 2007 12:01 am
Senior Member

Joined: Sun Nov 30, 2003 2:28 pm
Posts: 245
My inclination would be one big linode: less to manage, and also more efficient, assuming that the incremental resource requirements for the second, third, etc. users are less than for the first. (Does that make sense? What I mean is that presumably the total resource requirements of two users running on one Apache/MySQL/whatever system are less than those of two separate installs.)

OTOH, if your users are distributed over multiple linodes, the amount of damage that any one of them can do is more limited.



PostPosted: Fri Dec 07, 2007 11:05 am
Senior Newbie

Joined: Fri Nov 30, 2007 11:07 pm
Posts: 8
Well, I think I will stick with the one-production-server option for the time being, but I am a little concerned that too many clients on the one box leaves me open to a pretty big failure.

Should one of my linodes go down, or should there be any problems with my physical host, all my clients suffer. Splitting clients across two machines, with fewer clients on each, seems a bit safer to me, and means I could have each client's config waiting to be enabled on another machine (a hot standby, effectively) should there be any problems.

I agree that two installs for two users is worse than two users on one install, but does this scale? :) I would have thought Apache/MySQL would get to a point where, regardless of resources, performance would be worse on one machine than on two separate ones.

Luckily I am not even close to having to worry about that, but I wanted to think about it now to prepare myself, and to see if anyone had any concrete scenarios where they had to move to another box.

Should I decide to go with the multiple box scenario (hoping I get enough business to care) I will write back and let people know why I made that decision.

All the best.


PostPosted: Fri Dec 07, 2007 9:20 pm
Senior Member

Joined: Fri Feb 13, 2004 11:30 am
Posts: 140
Location: England, UK
harmone wrote:
The documentation says that one /must/ edit the cron file using the cli command "crontab -e" and /not/ edit the actual crontab file by hand. I tried to ignore that instruction to see if it would work anyway. And it seems it does. So ignore this crontab instruction at your own risk.


"crontab -e" isn't the only switch that crontab(1) supports. In fact, if you run "crontab -u username /path/to/file", it will replace that user's crontab with that file. If you want to use environment variables, try "crontab -u username - <<HERE" with a heredoc instead, inside which you can use environment variables. Either way, this lets you programmatically change your crontab without any worries about what might happen. Hope this helps. :)
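As a concrete illustration of the file/heredoc approach (the schedule, script path, and variable below are made up for the example):

```shell
#!/bin/sh
# Build a crontab file from a heredoc so shell variables get expanded,
# then (on a real system) install it with crontab(1). The backup script
# path and schedule are purely illustrative.
BACKUP_HOUR=3

cat > /tmp/new.cron <<HERE
# m h dom mon dow  command
30 $BACKUP_HOUR * * * /usr/local/bin/nightly-backup.sh
HERE

# Install it (commented out so this stays a dry run):
#   crontab /tmp/new.cron                # for the current user
#   crontab -u username /tmp/new.cron    # for another user, as root
cat /tmp/new.cron
```

Note that $BACKUP_HOUR is expanded when the heredoc is written, which is exactly what makes this useful for scripted crontab changes.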


PostPosted: Sat Dec 08, 2007 3:49 pm
Senior Member

Joined: Thu Jun 21, 2007 7:13 pm
Posts: 100
Website: http://neo101.org
Ciaran wrote:
harmone wrote:
The documentation says that one /must/ edit the cron file using the cli command "crontab -e" and /not/ edit the actual crontab file by hand. I tried to ignore that instruction to see if it would work anyway. And it seems it does. So ignore this crontab instruction at your own risk.


"crontab -e" isn't the only switch that crontab(1) supports. In fact, if you run "crontab -u username /path/to/file", it will replace that user's crontab with that file. If you want to use environment variables, try "crontab -u username - <<HERE" with a heredoc instead, inside which you can use environment variables. Either way, this lets you programmatically change your crontab without any worries about what might happen. Hope this helps. :)


Oh cool, I didn't know that. I'll change my script so it behaves canonically, just in case. I also didn't know about heredocs; I have used them in Perl but never knew it was a general concept rather than just a Perl feature. Here is a good article for everyone else who has never heard of heredocs:

http://en.wikipedia.org/wiki/Here_document


PostPosted: Wed Dec 12, 2007 4:54 pm
Senior Member

Joined: Sun Nov 30, 2003 2:28 pm
Posts: 245
macforum wrote:
I agree that two installs for two users is worse than two users on one install, but does this scale? :) I would have thought Apache/MySQL would get to a point where, regardless of resources, performance would be worse on one machine than on two separate ones.


The answer to this is highly dependent on the actual load. For the extreme cases of bandwidth-limited or disk-I/O-limited loads, two machines will obviously be superior. If your load is memory limited, I'd guess that a machine with twice as much usable memory would be superior to two machines. Of course, you can't increase memory arbitrarily, so eventually you'll need multiple machines anyway.

But the split may not be the obvious "half the people on one, half on the other". It's conceivable that splitting between a web server and an RDBMS server would be superior. But the whole thing really depends on what your users are doing.



PostPosted: Thu Dec 13, 2007 12:41 pm
Senior Newbie

Joined: Fri Nov 30, 2007 11:07 pm
Posts: 8
My dev work comes from a company background, meaning I have never really had to own/admin my own servers before, and I have typically been pretty persuasive in getting the servers I need :)

Yes, splitting the DB away from the webserver was always the case for me, and being from a Java/J2EE background I have generally run clustered app servers on separate machines, but this just isn't as easy when you have to buy and administer your own servers :)

I will continue with one big server and see how things go, although another concern is that I want to run Ruby on Rails apps with Mongrel, and the common approach seems to be running a Mongrel cluster per application; it will be interesting to see what kind of resources that takes up!

I just bought another linode and placed it in a separate data center for backup and for testing things out (thanks for the replies about crontab too; I will look to set that up when I start my cloned server). If anyone has any advice on backup strategies, please shout.

It seems pretty related to this thread: I am interested in backups relating to one fat prod server and one test/dev/backup box.

I am looking at RDiff just now for nightly backups; I am certainly not going to shut down my prod server daily to clone the image across.

Any advice would be appreciated.

All the best, Paul


PostPosted: Thu Dec 13, 2007 4:24 pm
Senior Member

Joined: Sun Nov 30, 2003 2:28 pm
Posts: 245
What kind of backup do you need/want? If all you need is a recent copy of what's on the server, then one of the rsync-based tools is probably easiest. If you want something fancier, with multiple backups and differential/incremental support, I've found Bacula (http://www.bacula.org/) to be a good tool. The initial learning curve is high, as is the complexity, but once it's set up it runs reliably. I use it to back up (some of) my linode to my home "server", along with various home computers. It's probably overkill for my purposes, but I was already familiar with it from work.
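For scale, a nightly job using one of the rsync-based tools (rdiff-backup here) can be as small as the sketch below; the source path, destination host, and retention window are invented for illustration:

```shell
#!/bin/sh
# Sketch of a nightly rdiff-backup job. rdiff-backup keeps a full mirror
# of the latest state plus reverse increments, so older versions of any
# file stay recoverable without restoring everything.
SRC="/srv"
DEST="backup@dev.example.com::/backups/prod-srv"
KEEP="4W"   # prune increments older than four weeks

# Real invocations (need rdiff-backup installed on both ends):
#   rdiff-backup "$SRC" "$DEST"
#   rdiff-backup --remove-older-than "$KEEP" "$DEST"
echo "would mirror $SRC to $DEST, keeping $KEEP of increments"
```

The actual calls are commented out so this stays a dry run; cron would fire the script once a night.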

A more interesting question is *what* to back up. I set up a /srv partition that has all my real data: web apps and their data, images, and such. I back up that, plus /etc and /var/backup, which the Debian system updates with current package selections and the like. Also, a script runs before the backup job to do a mysqldump, rather than trying to back up the live DB files. I don't back up the actual system files (/usr and such), and I don't keep any vital data in /home. So I can't do a "bare-metal" restore; I'd need to start with a basic Debian image, restore my package selections and install, then restore /etc, and then /srv.
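The pre-backup dump step can be tiny; a sketch, where the directory and mysqldump options are assumptions (on the real server the target would be /var/backup and credentials would come from a ~/.my.cnf):

```shell
#!/bin/sh
# Dump the databases to a flat file before the backup job runs, so the
# backup never has to touch live MySQL data files.
BACKUP_DIR="/tmp/var-backup"   # stands in for /var/backup in this sketch
mkdir -p "$BACKUP_DIR"

# Real invocation (needs a running MySQL server):
#   mysqldump --all-databases --single-transaction > "$BACKUP_DIR/mysql.sql"
# Stand-in so the sketch is runnable anywhere:
echo "-- dump taken $(date -u)" > "$BACKUP_DIR/mysql.sql"

ls -l "$BACKUP_DIR/mysql.sql"
```

Schedule it from cron a few minutes before the backup job picks up /var/backup.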

Oh, I also track /etc with Mercurial. I don't put everything in, but any time I change a config file, I add it. This isn't really a backup, but it does let me track the config changes I make, and it makes it less painful to recover when I do something stupid.
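The /etc-in-Mercurial habit boils down to a handful of commands. The sketch below uses a throwaway directory and a dummy config file so it's safe to run; on the real box you'd do this in /etc as root, with mercurial installed:

```shell
#!/bin/sh
# Demonstrate tracking config changes under version control, using a
# temp dir and a dummy file; the hg commands for the real thing are
# shown in comments.
ETC="/tmp/etc-demo"
mkdir -p "$ETC"
echo "ServerName example.org" > "$ETC/apache.conf"

# On the real system:
#   cd /etc && hg init                  # one-time setup
#   hg add apache2/apache2.conf        # add each file as you first touch it
#   hg commit -m "set ServerName"      # commit after every change
#   hg diff                            # see what drifted since last commit
echo "demo config written to $ETC/apache.conf"
```

The payoff is "hg diff" after a bad edit: you can see exactly what you changed and revert it.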



PostPosted: Thu Dec 13, 2007 5:21 pm
Senior Newbie

Joined: Fri Nov 30, 2007 11:07 pm
Posts: 8
Hey,

Well, I plan to have multiple domains on my server with different clients, so I would need access to certain backed-up data to restore without restoring the whole server. RDiff seemed quite nice for this, but I'm happy to consider any options, as I have no experience with any of these types of tools.

I was thinking of keeping a reasonably up-to-date clone of my prod server on my backup/dev machine, so I could restore that onto prod and then run RDiff to get the previous night's application data back to a reasonable state. That way I would only need to RDiff app data.

However, if I could just do a base Debian install and then use RDiff (or whatever tool I choose) to copy the previous evening's data back across, such that that data was sufficient to get my user accounts and everything else up and running again, that would be preferable, as I would not have to take clones of prod, which means shutting down the server.

Main thing is: can you back up sufficient data to restore directly from an RDiff/whatever after doing a straight Debian install?

Cheers,
Paul


PostPosted: Thu Dec 13, 2007 8:01 pm
Senior Member

Joined: Sun Nov 30, 2003 2:28 pm
Posts: 245
macforum wrote:
Well; I plan to have multiple domains on my server, with different clients, so I would need to have access to certain backed up data to restore without restoring the whole server.

Oh, certainly, and Bacula can do that. One of the many ways to restore files in Bacula is to a) select a client, b) select "most recent" or "as of _date_", and c) browse the "filesystem" that Bacula then presents, selecting directories and/or files to restore. I don't mean to be too pushy with Bacula; it's definitely a beast. But it is a serious backup solution, with lots of users and testing. I trust it with my data.

macforum wrote:
Main thing is; can you back up sufficient data to restore directly from an RDiff/whatever after doing a straight Debian install?


Sure. The price you pay is that you end up backing up all your applications as well. For ME, I'd rather use the backup space for longer backups of data and let Debian back up the applications. The price I pay is a more complicated restore procedure. That's okay for me, because a longer downtime for a restoration is acceptable. One important point: the base install *must* have the tools needed to restore, or at least be able to install them. It's really important to work through your restore procedure step by step, and think about the problems that might occur at each step.

Another thing to consider: backups to a non-Linode. Linode.com seems to be a successful business, and I've no reason to expect any problems, but... things happen. Consider what happens to your business if linode.com disappears and you scramble to find alternative hosting. If you don't have your clients' data, you're still screwed.



PostPosted: Thu Dec 13, 2007 8:49 pm
Senior Newbie

Joined: Fri Nov 30, 2007 11:07 pm
Posts: 8
Thanks for the info; I'll have a look at bacula.

Good point, too, that the Debian install would obviously require the tools to actually connect and restore from backup :) That'd be a crap one to forget!

That was one reason I considered storing a clone of my image on my backup server, to copy across to prod on catastrophic failure, and then just running the restore for the previous evening's data. My cloned image would have all the required tools installed, removing a step that might turn out to be slow or error-prone, especially under the pressure of clients' sites being down. I may be able to script the install of the backup software, though, and remove any pressure-induced stupidity. We'll see.

I'll try running both scenarios and see what makes most sense.

If you have any links to good tutorials on bacula I would be interested.

Thanks again Steve,
Paul



Powered by phpBB® Forum Software © phpBB Group