What the heck - I'll throw Duplicity into the ring of options. I've just recently moved all of my servers over to Duplicity for backups.
Duplicity takes all the pain out of doing compressed, incremental, and (if you so choose) encrypted backups. I'm currently backing up to Amazon S3 with it, but it supports a number of different storage back-ends, including FTP, local disk, SCP, rsync, and others. I have mine configured to do a full backup on the first day of each month, and incremental backups each day in between.
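The script below decides for itself whether to run a full or incremental backup, so it only needs a single daily cron entry. Something along these lines works (the script path here is just a placeholder - put yours wherever you keep admin scripts):

```shell
# Hypothetical /etc/cron.d entry: run the backup script daily at 02:30.
# /usr/local/sbin/duplicity-backup.sh is a placeholder path.
30 2 * * * root /usr/local/sbin/duplicity-backup.sh
```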
Like I said, you have the option of backing up to S3, but if you want a near-zero-cost option, just fire up a Linux box at home with some disk and then use Duplicity's scp back-end to back up to your home server.
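As a quick sketch of what that looks like - the user, hostname, and remote path below are all placeholders, substitute your own:

```shell
# Hypothetical sketch: back up /etc to a home server over SSH.
# backup@home.example.com and /srv/backups/myserver are placeholders.
export PASSPHRASE="YOUR_GPG_PASSPHRASE"
DEST="scp://backup@home.example.com//srv/backups/myserver"
duplicity /etc "$DEST"
unset PASSPHRASE
```

Note the double slash after the hostname: that makes the remote path absolute, whereas a single slash is taken relative to the login user's home directory.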
I'm currently using the following script for my duplicity backups. I found this example script posted somewhere online, but for the life of me, I can't locate the original author now.
Code:
#!/bin/bash
# Set up some variables for logging
LOGFILE="/var/log/duplicity.log"
DAILYLOGFILE="/var/log/duplicity.daily.log"
HOST=`hostname`
DATE=`date +%Y-%m-%d`
MAILADDR="user@example.com"
# Clear the old daily log file
cat /dev/null > ${DAILYLOGFILE}
# Trace function for logging, don't change this
trace () {
stamp=`date +%Y-%m-%d_%H:%M:%S`
echo "$stamp: $*" >> ${DAILYLOGFILE}
}
# Export some ENV variables so you don't have to type anything
export AWS_ACCESS_KEY_ID="YOUR_ACCESS_KEY"
export AWS_SECRET_ACCESS_KEY="YOUR_SECRET"
export PASSPHRASE="YOUR_GPG_PASSPHRASE"
# Your GPG key
GPG_KEY=YOUR_GPG_KEY
# How long to keep backups for
OLDER_THAN="6M"
# The source of your backup
SOURCE=/
# The destination
# Note that the bucket need not exist
# but does need to be unique amongst all
# Amazon S3 users. So, choose wisely.
DEST="s3+http://your.s3.bucket/"
FULL=
if [ $(date +%d) -eq 1 ]; then
FULL=full
fi;
trace "Backup for local filesystem started"
trace "... removing old backups"
duplicity remove-older-than ${OLDER_THAN} --force ${DEST} >> ${DAILYLOGFILE} 2>&1
trace "... backing up filesystem"
# Encrypt to the GPG key defined above rather than symmetrically
duplicity ${FULL} --volsize=250 --encrypt-key=${GPG_KEY} \
    --include=/etc --include=/home --include=/root \
    --include=/var/log --include=/var/lib/mailman \
    --exclude=/** ${SOURCE} ${DEST} >> ${DAILYLOGFILE} 2>&1
trace "Backup for local filesystem complete"
trace "------------------------------------"
# Send the daily log file by email
mail -s "Duplicity Backup Log for $HOST - $DATE" $MAILADDR < "$DAILYLOGFILE"
# Append the daily log file to the main log file
cat "$DAILYLOGFILE" >> $LOGFILE
# Clear the ENV variables. Don't need them sitting around
unset AWS_ACCESS_KEY_ID
unset AWS_SECRET_ACCESS_KEY
unset PASSPHRASE
You'll obviously need to modify this script to suit your own environment.
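And don't forget the restore side. As a rough sketch (the bucket URL and file paths here are placeholders, and the same AWS/GPG variables from the backup script need to be set), pulling data back out goes something like:

```shell
# Hypothetical restore sketch; s3+http://your.s3.bucket/ is a placeholder.
export AWS_ACCESS_KEY_ID="YOUR_ACCESS_KEY"
export AWS_SECRET_ACCESS_KEY="YOUR_SECRET"
export PASSPHRASE="YOUR_GPG_PASSPHRASE"

# Restore a single file; the path is relative to the backup source
duplicity restore --file-to-restore etc/fstab \
    s3+http://your.s3.bucket/ /tmp/fstab.restored

# Or restore everything as it looked three days ago
duplicity restore --time 3D s3+http://your.s3.bucket/ /tmp/full-restore

unset AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY PASSPHRASE
```

Do a test restore every so often - a backup you've never restored from is a backup you only hope works.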
If you want/need to periodically back up your Linode disk images, you can do that too, but it requires server downtime. The process involves shutting down the server, booting the Finnix recovery image, and then using dd and scp to copy the image down to a local drive. It works, but it can take a long time depending on the speed of your internet connection.
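Assuming Finnix exposes the disk as /dev/xvda (the device name and the home-server address below are placeholders - check yours with fdisk -l), the copy step boils down to a pipeline like this, using ssh as the transport in place of a separate scp step:

```shell
# Hypothetical sketch, run from the Finnix rescue console.
# /dev/xvda and user@home.example.com are placeholders.
dd if=/dev/xvda bs=1M | gzip -c | \
    ssh user@home.example.com "cat > linode-$(date +%F).img.gz"
```

Compressing before the data hits the wire helps a lot, since unused regions of the disk compress down to almost nothing.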
A final note - yes, backups are a pain. I don't think you'll find a single sysadmin on this planet who would dispute that. My view, however, is that ensuring you have good, reliable backups (and restore procedures) is *the* primary task for a sysadmin. There will surely be some pain points while you're getting backups set up, but you'll learn a lot through the process, and you'll be able to get backups running on any server you work on in the future that much faster.