Linode Community Forums
PostPosted: Sat Apr 21, 2012 11:55 am 
Junior Member

Joined: Mon Jun 13, 2011 8:11 pm
Posts: 23
Hi,

Does anyone know how to copy a disk image of a whole Linode server to a bucket in Amazon AWS S3?

I have tried following all the articles in the library, logged support tickets, and spoken to a few people at Linode, and no one has been able to tell me how to do it successfully.

The latest I heard from Linode support was that it is not possible to do this since Amazon S3 does not support SSH.

Please note that I do not want to copy the data to my local machine before pushing it to S3. I just want to copy directly from Linode to S3.

Is this possible?

Can you please give me detailed instructions on how I would do it?

Thanks

Lance


PostPosted: Sat Apr 21, 2012 1:55 pm 
Senior Member

Joined: Sat Aug 30, 2008 1:55 pm
Posts: 1739
Location: Rochester, New York
The actual raw bit-for-bit image? I'd probably boot into recovery mode (Finnix), then try to use s3cmd to copy /dev/xvda to S3. If it refuses to do so, I'd probably use boto directly.

For a 20 GB image with the default network profile, this should take no less than an hour.
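For what it's worth, a rough sketch of that first attempt from Finnix rescue mode; the bucket name is a placeholder, and s3cmd has to be configured with your AWS keys first:

```shell
# Sketch only: "my-linode-backup" is a hypothetical bucket name, and
# s3cmd must already be configured (s3cmd --configure) with AWS keys.
BUCKET="s3://my-linode-backup"
IMG="/dev/xvda"   # the Linode's disk as seen from rescue mode

if command -v s3cmd >/dev/null 2>&1; then
    s3cmd mb "$BUCKET"                   # create the bucket
    s3cmd put "$IMG" "$BUCKET/disk.img"  # may refuse a raw block device
else
    echo "s3cmd not installed (apt-get install s3cmd)"
fi
```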

_________________
Code:
/* TODO: need to add signature to posts */


PostPosted: Sat Apr 21, 2012 6:27 pm 
Junior Member

Joined: Mon Jun 13, 2011 8:11 pm
Posts: 23
Hi Hoopycat - thanks for the response. I don't know what you mean by a bit-for-bit image. I basically want an offsite backup of my whole server so that I can easily spin up a new server from it.

How do you use s3cmd to copy from Linode to S3? How would you issue this command? Are you sure this is possible to do in rescue mode?

thanks


PostPosted: Sun Apr 22, 2012 12:31 pm 
Senior Member

Joined: Sat Aug 30, 2008 1:55 pm
Posts: 1739
Location: Rochester, New York
My usual approach is to use blueprint, etckeeper, and/or [url=http://www.opscode.com/]chef[/url] to summarize the differences between a standard OS image and my ideal system, and duplicity to back up data. But this does take some planning and forethought, and so will restores. On the other hand, it makes it much easier to move to other providers and architectures. (See also tarsnap, which I haven't used but which looks nice.)

For sending stuff to S3, I find it's easier to use whatever you normally use to send stuff to S3. For me, it's s3cmd, but there are probably others out there.

In the interest of science, I just deployed a fresh Linode (with a 10 GB disk image -- I'm not made of money here, yo), booted up Rescue mode, and ssh'd to lish. Long story short, I couldn't make it work. My first attempt was to install s3cmd (apt-get update; apt-get install s3cmd; s3cmd --configure) and try to put the file. It returned immediately, having done nothing:

Code:
root@hvc0:~# s3cmd mb s3://awesome-bucket-of-science
Bucket 's3://awesome-bucket-of-science/' created
root@hvc0:~# s3cmd put /dev/xvda s3://awesome-bucket-of-science/disk.img
root@hvc0:~#


So I installed Boto 2.0 from the repository (apt-get install python-boto) and tried to upload that way. It, too, failed, but after doing much more:

Code:
root@hvc0:~# python
Python 2.7.2+ (default, Aug 16 2011, 07:03:08)
[GCC 4.6.1] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> from boto.s3.connection import S3Connection
>>> conn = S3Connection('<aws access key>', '<aws secret key>')
>>> bucket = conn.create_bucket('awesome-bucket-of-science')
>>> from boto.s3.key import Key
>>> k = Key(bucket)
>>> k.key = 'disk.img'
>>> k.set_contents_from_filename('/dev/xvda')
... a long pause here ...
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/lib/python2.7/dist-packages/boto/s3/key.py", line 713, in set_contents_from_filename
    policy, md5, reduced_redundancy)
  File "/usr/lib/python2.7/dist-packages/boto/s3/key.py", line 653, in set_contents_from_file
    self.send_file(fp, headers, cb, num_cb, query_args)
  File "/usr/lib/python2.7/dist-packages/boto/s3/key.py", line 535, in send_file
    query_args=query_args)
  File "/usr/lib/python2.7/dist-packages/boto/s3/connection.py", line 423, in make_request
    override_num_retries=override_num_retries)
  File "/usr/lib/python2.7/dist-packages/boto/connection.py", line 618, in make_request
    return self._mexe(http_request, sender, override_num_retries)
  File "/usr/lib/python2.7/dist-packages/boto/connection.py", line 584, in _mexe
    raise e
socket.error: [Errno 32] Broken pipe


I suspect it is dying when trying to find the mimetype and MD5 hash of /dev/xvda. So, I installed a newer version of Boto which has a set_contents_from_stream method to skip this:

Code:
root@hvc0:~# apt-get install python-pip
root@hvc0:~# pip install boto --upgrade
...
root@hvc0:~# python
>>> import boto
>>> conn = boto.connect_s3('<aws access key>', '<aws secret key>')
>>> bucket = conn.create_bucket('awesome-bucket-of-science')
>>> from boto.s3.key import Key
>>> k = Key(bucket)
>>> k.key = 'disk.img'
>>> fp = open('/dev/xvda', 'rb')
>>> k.set_contents_from_stream(fp)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/local/lib/python2.7/dist-packages/boto/s3/key.py", line 757, in set_contents_from_stream
    % provider.get_provider_name())
boto.exception.BotoClientError: BotoClientError: s3 does not support chunked transfer


So, nope. I think it can certainly be made to work, but I've spent an hour on this and couldn't get it to upload my /dev/xvda, so it's your turn to play around with it for a while! -rt
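One way around both failures, assuming boto 2's multipart-upload API is available: read the device in fixed-size pieces yourself and upload each piece as its own part, so boto never tries to MD5 or chunk-encode the whole block device. The bucket and key names below are placeholders, and I haven't run this against a live bucket:

```python
import io

CHUNK = 64 * 1024 * 1024  # 64 MB parts; S3 requires at least 5 MB per part

def read_parts(fp, chunk=CHUNK):
    """Yield successive fixed-size chunks from a file object."""
    while True:
        data = fp.read(chunk)
        if not data:
            break
        yield data

def upload_device(path, bucket_name, keyname, access_key, secret_key):
    """Multipart-upload a block device to S3 using boto 2.x (untested sketch)."""
    import boto
    conn = boto.connect_s3(access_key, secret_key)
    bucket = conn.create_bucket(bucket_name)
    mp = bucket.initiate_multipart_upload(keyname)
    try:
        with open(path, 'rb') as fp:
            for num, data in enumerate(read_parts(fp), start=1):
                # Wrap each chunk in BytesIO so boto can hash just that part.
                mp.upload_part_from_file(io.BytesIO(data), part_num=num)
        mp.complete_upload()
    except Exception:
        mp.cancel_upload()   # don't leave half-uploaded parts billing you
        raise
```

Something like `upload_device('/dev/xvda', 'awesome-bucket-of-science', 'disk.img', '<aws access key>', '<aws secret key>')` would then replace the single `put` call.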


PostPosted: Mon Apr 23, 2012 5:21 pm 
Junior Member

Joined: Mon Jun 13, 2011 8:11 pm
Posts: 23
Hi again hoopycat,

Thanks a lot for giving it a good shot. I've spent hours trying to get this to work and have not found a way to do it. Trust me, I tried very hard prior to putting a request on here.

Perhaps someone else may have a solution?


PostPosted: Mon Apr 23, 2012 6:56 pm 
Senior Member

Joined: Sun Dec 27, 2009 11:12 pm
Posts: 1038
Location: Colorado, USA
I still don't understand why you want to do this.

Your data should already be getting backed up nightly (or more often, depending on the data).

Starting a fresh VPS from scratch should be scripted out (or, if it's a once-in-a-while process, documented THOROUGHLY).

In the case that you need to spin up a brand new Linode, it will be way faster to start a fresh VPS, run the setup scripts (or do so manually from your config documentation), and restore the data than it will be to prep a new VPS, set up the empty partition, and copy back a boatload (i.e. 20 GB) of image data.

I don't see why a proprietary image of Linode's VPS setup stored offsite is that much of an asset.


PostPosted: Tue Apr 24, 2012 9:57 am 
Senior Member

Joined: Tue May 26, 2009 3:29 pm
Posts: 1691
Location: Montreal, QC
The proper approach from a Linode perspective is probably to store custom data (like a tarball of your web root, or a latest backup of the databases, or whatnot) somewhere you can pull it down from (like S3, or a "master" Linode). Then write a StackScript that gets the right packages and config settings going, and pulls down the tarball containing the necessary custom files; this is very simple to do.

When that's done, spinning up a new Linode is as simple as creating one and selecting the StackScript; wait a few minutes and poof, out pops a fully configured and ready-to-go Linode.

There are, of course, other solutions (the cat often suggests Chef, I believe), but for a relatively simple setup, writing your own StackScript is probably the easiest thing, since it requires no infrastructure (Linode provides it already).
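A StackScript along those lines might look roughly like this; the package list and tarball URL are placeholders for your own setup, and the install steps only run when RUN_INSTALL=yes, so the sketch is safe to read through:

```shell
#!/bin/bash
# Hypothetical StackScript sketch. The packages and the tarball URL are
# placeholders for whatever your own setup actually needs.
TARBALL_URL="https://example-bucket.s3.amazonaws.com/site-backup.tar.gz"
WEB_ROOT="/var/www"
RUN_INSTALL="${RUN_INSTALL:-no}"   # set to "yes" when run as a real StackScript

if [ "$RUN_INSTALL" = "yes" ]; then
    apt-get update
    apt-get -y install apache2 php5 mysql-server
    # Pull the site data back down from S3 and unpack it into the web root.
    wget -O /tmp/site-backup.tar.gz "$TARBALL_URL"
    tar -xzf /tmp/site-backup.tar.gz -C "$WEB_ROOT"
fi
```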


PostPosted: Tue Apr 24, 2012 1:46 pm 
Junior Member

Joined: Mon Jun 13, 2011 8:11 pm
Posts: 23
Guspaz wrote:
The proper approach from a Linode perspective is probably to store custom data (like a tarball of your web root, or a latest backup of the databases, or whatnot) somewhere that you can pull it down (like S3, or a "master" linode), and then write a stack script that gets the right packages and config settings going, then pulls down the tarball containing the necessary custom files; this is very simple to do.

When that's done spinning up a new linode is as simple as just creating a new linode and selecting the stackscript, wait a few minutes and poof, out pops a fully configured and ready-to-go linode.

There are, of course, other solutions (the cat often suggests Chef, I believe), but for a relatively simple setup, writing your own stack script is probably the easiest thing since it requires no infrastructure (since Linode provides it already).


The reason is that I want to shut down my Linode for a while. I'm not using it now, and I'm not sure if I'm going to. But in case I do want it back, I want an easy way to store it and bring it back at some point down the road. I don't want to keep paying $20/mo if I'm not using it.


PostPosted: Tue Apr 24, 2012 1:48 pm 
Junior Member

Joined: Mon Jun 13, 2011 8:11 pm
Posts: 23
Guspaz wrote:
The proper approach from a Linode perspective is probably to store custom data (like a tarball of your web root, or a latest backup of the databases, or whatnot) somewhere that you can pull it down (like S3, or a "master" linode), and then write a stack script that gets the right packages and config settings going, then pulls down the tarball containing the necessary custom files; this is very simple to do.

When that's done spinning up a new linode is as simple as just creating a new linode and selecting the stackscript, wait a few minutes and poof, out pops a fully configured and ready-to-go linode.

There are, of course, other solutions (the cat often suggests Chef, I believe), but for a relatively simple setup, writing your own stack script is probably the easiest thing since it requires no infrastructure (since Linode provides it already).


I guess if I have no idea how to write a script, I'm at a loss. I have no background in sysadmin or programming. Just simple folk.


PostPosted: Thu Apr 26, 2012 2:00 pm 
Junior Member

Joined: Mon Jun 13, 2011 8:11 pm
Posts: 23
Anyone?


PostPosted: Thu Apr 26, 2012 2:59 pm 
Senior Member

Joined: Sun Dec 27, 2009 11:12 pm
Posts: 1038
Location: Colorado, USA
Then do it the "old-fashioned" way: document it.

Write step-by-step documentation (fresh VPS, install Apache, install PHP, install ....).

Document the config setups (and make a copy of them on a thumb drive).

And of course your data should be backed up already (document that process as well).

Then, if you need to do a bare-metal restore, you have the step-by-step process (with examples and copies on your thumb drive) to do so.

The key to this method is not to skip ANY step. What seems blatantly obvious at this point in time will be a muddled, faint memory 6 weeks/months/etc. from now.
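As a concrete companion to that checklist, the config-and-data part can be as small as a single tar command; the paths below are examples only, so adjust them to what your server actually runs:

```shell
# Example paths only: back up package config (/etc) and the web root.
STAMP="$(date +%Y%m%d)"
BACKUP="/tmp/server-backup-$STAMP.tar.gz"
# Unreadable files are skipped rather than aborting the whole archive.
tar -czf "$BACKUP" /etc /var/www 2>/dev/null || true
ls -lh "$BACKUP"
```

Copy the resulting tarball to the thumb drive (or push it offsite) along with the documentation.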


PostPosted: Mon May 14, 2012 9:32 am 
Junior Member

Joined: Mon Jun 13, 2011 8:11 pm
Posts: 23
Anyone else know how to do this easily?


PostPosted: Mon May 14, 2012 9:54 am 
Senior Member

Joined: Sat Feb 25, 2012 4:44 pm
Posts: 71
Website: http://inhomeitsupport.com
Tarsnap is the way to go. It stores its (encrypted, deduplicated) backups on Amazon S3.
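For reference, basic tarsnap usage looks roughly like this; it needs an account at tarsnap.com and a registered machine key first, and the archive name and paths here are just examples:

```shell
ARCHIVE="nightly-$(date +%Y%m%d)"   # example archive name

if command -v tarsnap >/dev/null 2>&1; then
    tarsnap -c -f "$ARCHIVE" /etc /var/www   # create an archive
    tarsnap --list-archives                  # see what's stored
else
    echo "tarsnap not installed; packages at tarsnap.com"
fi
```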


PostPosted: Sun Jun 10, 2012 1:24 pm 
Junior Member

Joined: Mon Jun 13, 2011 8:11 pm
Posts: 23
Has anyone at Linode set up something to do this easily yet?

