Linode Community Forums
PostPosted: Sun Feb 11, 2007 8:24 pm 
Senior Member

Joined: Sat Mar 19, 2005 3:29 pm
Posts: 50
Website: http://ryantate.com
Location: Berkeley, CA
I am having a good experience backing up my key Linode files (including all of /usr/local, /etc, /var/log and /home) using s3sync, a Ruby script that uploads files to Amazon S3, where you pay, I believe, 15 cents per gigabyte per month for storage and 20 cents per gigabyte for transfer bandwidth.

I already had an S3 account for JungleDisk, a Windows app I use to back up my home PC. You can sign up for one at Amazon.

I got s3sync set up using this tutorial:

http://blog.eberly.org/2006/10/09/how-a ... ng-s3sync/

and the README:

http://s3.amazonaws.com/ServEdge_pub/s3sync/README.txt

Note: if you use the verbatim SSL cert provided in the README (and referenced, I believe, in the tutorial), you need to name it "f73e89fd.0", which is not mentioned anywhere but the main s3sync thread on the Amazon developer forums:

http://developer.amazonwebservices.com/ ... 5&tstart=0

I have a shell script that calls s3sync on various directories, which I invoke via cron, weekly for now. For my public web directory I add the --public-read option, which lets me access those files from a URL that looks like:

mybucketname.s3.amazonaws.com

but this can be aliased to, for example, www2.mydomain.com, according to Amazon docs, though I have not tried this.
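The Amazon docs describe this aliasing as virtual hosting: the bucket itself has to be named after the hostname you want to serve from. A hypothetical zone-file entry (assuming a bucket literally named www2.mydomain.com) would look like:

```
; the bucket must be named "www2.mydomain.com" for S3 to route the request
www2.mydomain.com.  IN  CNAME  www2.mydomain.com.s3.amazonaws.com.
```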

The only downside is that s3sync does not support putting items in at the root level; there always has to be a prefix, if only a slash, so you can't make a perfect mirror of your site: www.mydomain.com/foo.mov would become mybucket.s3.amazonaws.com//foo.mov (or www2.mydomain.com//foo.mov). Hopefully this will be fixed in the future.

I still back up to my home PC using rdiff-backup, as detailed elsewhere on this board. But it's nice to have redundant backups, and the option of easily serving large files like video from Amazon's servers rather than my Linode, should I ever have the need.
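A rough sketch of such a cron wrapper (paths, bucket name, and directory list are placeholders, not my actual script; RUN=echo keeps it a dry run until s3sync is set up):

```shell
#!/bin/sh
# Sketch of a weekly cron wrapper around s3sync.rb.
# Placeholders throughout; RUN=echo makes this a dry run.
RUN=echo
S3SYNC=/usr/local/s3sync/s3sync.rb
BUCKET=mybucketname

# Private directories: plain recursive sync into bucket:prefix.
for dir in usr/local etc var/log home; do
    $RUN "$S3SYNC" -r "/$dir" "$BUCKET:$dir"
done

# Public web directory: --public-read makes each uploaded file
# fetchable at http://mybucketname.s3.amazonaws.com/www/<file>
$RUN "$S3SYNC" -r --public-read /var/www "$BUCKET:www"
```

Invoked weekly from cron with something like `0 4 * * 0 /usr/local/bin/s3backup.sh`.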


PostPosted: Wed Mar 12, 2008 9:50 pm 
Senior Member

Joined: Sun Nov 14, 2004 6:37 pm
Posts: 138
Website: http://oldos.org
WLM: jasonlfaulkner@hotmail.com
Yahoo Messenger: jasonfncsu
AOL: jaylfaulkner
Location: NC, USA
I got mine working using s3fs + FUSE.

http://code.google.com/p/s3fs/wiki/FuseOverAmazon

Works like a champ on Ubuntu Hardy!

_________________
Jay Faulkner
http://oldos.org


PostPosted: Thu Mar 13, 2008 12:51 am 
Senior Member

Joined: Sun Nov 14, 2004 6:37 pm
Posts: 138
Website: http://oldos.org
WLM: jasonlfaulkner@hotmail.com
Yahoo Messenger: jasonfncsu
AOL: jaylfaulkner
Location: NC, USA
Okay, the FUSE plugin is slow, so I rewrote myself some backup foo to work with S3.

I used the s3cmd packaged with Ubuntu Hardy. This script assumes you already have s3cmd installed and configured properly.

A couple of TODOs:
* Incremental backup
* Auto-pruning

I'm using this on my Linode with great success. Just note that you need enough free "scratch" space to compress each chunk you're backing up.

I just print everything to stdout, and since I run the script from cron, the output gets emailed to me.

Code:
#!/bin/bash
NOW=`date +%d-%m-%Y`
echo "====Beginning Backup for $NOW===="
echo ""
echo "Creating bucket viagraoldos-$NOW"
s3cmd mb s3://viagraoldos-$NOW

for i in home etc root; do
        echo "==Beginning Backup of /$i=="
        echo ""
        echo "Making a tarball of /$i..."
        tar cfj /tmp/$i.tar.bz2 /$i
        echo "Done!"
        echo ""
        echo "Uploading tarball of /$i to s3..."
        s3cmd put /tmp/$i.tar.bz2 s3://viagraoldos-$NOW/
        echo "Done!"
        echo ""
        echo "Deleting local copy of tarball of /$i..."
        rm -f /tmp/$i.tar.bz2
        echo "Done!"
        echo ""
        echo "==Backup of /$i completed=="
        echo ""
done

for i in backups www; do
        echo "==Beginning Backup of /var/$i=="
        echo ""
        echo "Making a tarball of /var/$i..."
        tar cfj /tmp/var-$i.tar.bz2 /var/$i
        echo "Done!"
        echo ""
        echo "Uploading tarball of /var/$i to s3..."
        s3cmd put /tmp/var-$i.tar.bz2 s3://viagraoldos-$NOW/
        echo "Done!"
        echo ""
        echo "Deleting local copy of tarball of /var/$i..."
        rm -f /tmp/var-$i.tar.bz2
        echo "Done!"
        echo ""
        echo "==Backup of /var/$i completed=="
        echo ""
done

echo "====Backup completed. Objects currently in s3 listed below.===="
s3cmd la
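One way to sketch the auto-pruning TODO, assuming buckets keep the viagraoldos-DD-MM-YYYY naming used above (the bucket_epoch helper and the sample list are illustration only; the actual s3cmd delete calls are left commented out):

```shell
#!/bin/bash
# Sketch: prune date-stamped backup buckets older than KEEP_DAYS.
PREFIX=viagraoldos
KEEP_DAYS=14
CUTOFF=$(date -d "$KEEP_DAYS days ago" +%s)

# Convert "viagraoldos-13-03-2008" back into epoch seconds (GNU date).
bucket_epoch() {
    stamp=${1#"$PREFIX"-}       # e.g. 13-03-2008
    day=${stamp%%-*}
    rest=${stamp#*-}
    month=${rest%%-*}
    year=${rest#*-}
    date -d "$year-$month-$day" +%s
}

# A real run would iterate over `s3cmd ls` output instead of this sample.
for bucket in viagraoldos-01-01-2008 "$PREFIX-$(date +%d-%m-%Y)"; do
    if [ "$(bucket_epoch "$bucket")" -lt "$CUTOFF" ]; then
        echo "would prune s3://$bucket"
        # s3cmd del "s3://$bucket/home.tar.bz2"   # delete objects first,
        # s3cmd rb "s3://$bucket"                 # then remove the bucket
    fi
done
```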

_________________
Jay Faulkner

http://oldos.org


PostPosted: Thu Mar 13, 2008 1:45 pm 
Junior Member

Joined: Sun Sep 19, 2004 7:42 pm
Posts: 27
Website: http://eric.gatenby.org/
Location: New York, NY
Code:
s3cmd mb s3://viagraoldos-$NOW


You should use "s3cmd --ssl", otherwise your traffic travels in plain text. (The secret key itself is never transmitted; it only signs each request, but everything you upload is readable on the wire without SSL.)

I have a script that uses s3sync to sync certain configured directories to an S3 bucket, then uses s3cmd to upload a tar/bz2 of other directories. It uses the --exclude feature to skip certain directories, and it also auto-prunes the tar/bz2 files that are older than some number of days.
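A rough sketch of that scheme (this is not the actual script; names and paths are placeholders, and RUN=echo keeps it a dry run):

```shell
#!/bin/sh
# Sketch: s3sync for live trees, s3cmd for dated tarballs.
# All names are placeholders; RUN=echo makes this a dry run.
RUN=echo
BUCKET=mybackups
NOW=$(date +%Y-%m-%d)

# Sync /etc, skipping junk (--exclude takes a Ruby regexp in s3sync):
$RUN /usr/local/s3sync/s3sync.rb -r --exclude='cache|\.svn' /etc "$BUCKET:etc"

# Dated tarball of /var/www, uploaded then removed locally:
$RUN tar cjf "/tmp/www-$NOW.tar.bz2" /var/www
$RUN s3cmd put "/tmp/www-$NOW.tar.bz2" "s3://$BUCKET/"
$RUN rm -f "/tmp/www-$NOW.tar.bz2"
```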

If you are interested, I will upload it. I just need to spend 5 minutes removing ugly code and important info like my access ID and secret key :)

--Eric


PostPosted: Tue May 20, 2008 1:40 am 
Senior Member

Joined: Sat Dec 04, 2004 5:36 pm
Posts: 145
ryantate wrote:
Note: If you use the verbatim SSL cert provided in the README (and referenced, I believe, in the tutorial), you need to name it "f73e89fd.0", which is not mentioned anywhere but the main Amazon website thread on s3sync:

http://developer.amazonwebservices.com/ ... 5&tstart=0


You can generate the appropriate <hex digits>.0 name for any cert via sh/bash:

Code:
$ HASH=`openssl x509 -noout -hash -in ${CERT}`
$ ln -s ${CERT} ${HASH}.0

where CERT is the filename of the SSL certificate itself.


PostPosted: Thu Oct 30, 2008 7:56 am 
Newbie

Joined: Wed Oct 29, 2008 8:33 pm
Posts: 3
AOL: wtfjstn
Location: boston, ma
I built s3fs (http://code.google.com/p/s3fs/wiki/FuseOverAmazon) and used duplicity (http://duplicity.nongnu.org) for simple backups.

I wrote a small script that mounts my S3 account, runs duplicity to make the backup, and then unmounts.

Simple and quick, but for larger systems it certainly won't scale.
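The mount/backup/unmount cycle could be sketched like this (bucket name and mount point are placeholders, not the actual script; RUN=echo keeps it a dry run):

```shell
#!/bin/sh
# Sketch: mount the bucket with s3fs, back up with duplicity, unmount.
# Placeholders throughout; RUN=echo makes this a dry run.
RUN=echo
BUCKET=mybackups
MNT=/mnt/s3

$RUN s3fs "$BUCKET" "$MNT"
# Through the FUSE mount, duplicity just sees an ordinary directory:
$RUN duplicity /home "file://$MNT/home"
$RUN umount "$MNT"
```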

- j


PostPosted: Wed Nov 05, 2008 1:47 pm 
Senior Member

Joined: Thu Apr 03, 2008 12:02 am
Posts: 103
AOL: derole
I use s3 & fuse with "simplebackup" (http://sourceforge.net/projects/migas-sbackup) - works perfectly.

When mounting the S3 bucket I got error messages, but following the instructions in this thread http://www.linode.com/forums/viewtopic.php?t=3114 helped.

simplebackup allows for incremental and differential backups - a great way to save on space & bandwidth.


PostPosted: Thu Nov 06, 2008 2:06 pm 

Joined: Thu Nov 06, 2008 1:58 pm
Posts: 1
Hello

You don't need to use FUSE with duplicity and S3.

Duplicity has native support for S3, so you can do backups directly to S3 without mounting any filesystem.

You need to install the latest duplicity version from source, along with the Boto Python library.
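A minimal sketch of the native backend, assuming duplicity and boto are installed (the bucket name and credentials are placeholders; RUN=echo keeps it a dry run):

```shell
#!/bin/sh
# Sketch: duplicity talking to S3 directly via its s3+http backend.
# RUN=echo makes this a dry run; credentials are placeholders.
RUN=echo
export AWS_ACCESS_KEY_ID="YOUR ACCESS KEY"
export AWS_SECRET_ACCESS_KEY="YOUR SECRET KEY"

# First run is a full backup, later runs are incremental:
$RUN duplicity /home s3+http://mybucketname/home

# Restore goes in the other direction:
$RUN duplicity s3+http://mybucketname/home /tmp/restored-home
```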

Take a look:

http://www.linode.com/forums/viewtopic.php?p=14974

regards

roberto


PostPosted: Sun Nov 30, 2008 1:40 am 
Senior Newbie

Joined: Sun Nov 30, 2008 1:36 am
Posts: 12
Please send me your feedback: hareem.haque@gmail.com

Semi-automated uploads to an S3 bucket (data offloading)

Code:
#!/bin/sh
#This script first asks for the user's input, then syncs the contents of a specific directory
#with an S3 bucket.

#Specify all the data related to your AWS Account.
#Please remove the "" prior to entering your credentials.
# PLEASE NOTE THAT I HAVE NEVER TESTED THE SCRIPT WITH AN EU BUCKET.

export AWS_ACCESS_KEY_ID="YOUR ACCESS KEY HERE"
export AWS_SECRET_ACCESS_KEY="YOUR SECRET KEY HERE"

#Enter all required info below. Make sure to double check everything that you specify.

echo -n "What directory would you like to offload to S3? "
read bkupdir

echo -n "Please specify your bucket: "
read Bucket

echo -n "Specify your prefix: "
read prefix

echo -n "Specify a log name for this process (e.g. a timestamp): "
read mlog

echo -n "Specify the log dir path (please include a trailing /): "
read ldir

#The script will ask you all the required questions.
#Please make sure that you specify the path to your s3sync folder and that it has proper permissions.
#SYNCDIR = path to the s3sync folder; this is where s3sync.rb is located.
#I pass --cache-control="no-cache" below because caching slows my EC2 AMI.

BACKUPDIR=$bkupdir
SYNCDIR=/path/to/your/s3sync/folder/
S3BUCKET=$Bucket
S3PREFIX=$prefix
S3STORE=${S3BUCKET}:${S3PREFIX}

# move to the ruby sync directory where the .yml file is
# Also you can replace $RANDOM with anything you like.

cd ${SYNCDIR}
./s3sync.rb -r -v -p --cache-control="no-cache" ${BACKUPDIR} ${S3STORE} > $ldir/$mlog-$RANDOM.log


Semi-automated downloads from an S3 bucket (data restoration)

Code:

#!/bin/sh
# I have not tested this script with the EU buckets.
#This script restores data from one bucket/prefix to the folder you specify.

# Please make sure your credentials are correct, and remove the "".
export AWS_ACCESS_KEY_ID="YOUR ACCESS KEY"
export AWS_SECRET_ACCESS_KEY="YOUR SECRET KEY"

#Enter all required info below. Make sure to double check everything that you specify.

echo -n "What directory would you like to restore your data to? "
read bkupdir

echo -n "Please specify your bucket: "
read Bucket

echo -n "Specify your prefix: "
read prefix

echo -n "Specify a log name for this process (e.g. a timestamp): "
read mlog

echo -n "Specify the log dir path (please include a trailing /): "
read ldir


#Please double check everything.
# Please specify the correct path to your s3sync folder.

BACKUPDIR=$bkupdir
SYNCDIR=/path/to/your/s3sync/folder/
S3BUCKET=$Bucket
S3PREFIX=$prefix
S3STORE=${S3BUCKET}:${S3PREFIX}

# move to the ruby sync directory where the .yml file is
cd ${SYNCDIR}
./s3sync.rb -r -v -p --cache-control="no-cache" ${S3STORE} ${BACKUPDIR} > $ldir/$mlog-$RANDOM.log
Powered by phpBB® Forum Software © phpBB Group