As far as incremental diff strategy goes, what I do is something like:
Day 1: Full backup.
Day 2: Incremental against Day 1
Day 3: Incremental against Day 2
Day 4: Incremental against Day 1
Day 5: Incremental against Day 4
Day 6: Incremental against Day 1
(and so forth -- I actually do it every 0.7 days, but you get the idea)
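The alternation in the schedule above can be sketched as a tiny helper (the day numbering and function name are mine, just to illustrate the pattern):

```python
def reference_day(day):
    """Which earlier day's backup a given day's backup diffs against.

    Day 1 is the full; even days diff against the full, odd days against
    the previous day's incremental, so the chain from the full back to
    any incremental is never longer than two hops.
    """
    if day == 1:
        return None       # full backup, nothing to diff against
    if day % 2 == 0:
        return 1          # incremental against the full
    return day - 1        # incremental against yesterday's incremental
```

So a restore of day 5 only needs the full (day 1), day 4, and day 5, no matter how long the schedule runs.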
Basically, ensure that the distance from the full backup to your most recent incremental stays reasonably short. This usually makes your incremental backups larger, but it reduces the amount of work needed to restore. Also, if one component of the backup chain gets corrupted or deleted, you've got a better shot at not losing everything.
Bandwidth is cheap, storage is cheap, but neither your data nor your time is. Have a backup strategy that works, is automatic, assures you that everything is up to date, and has a restore method you know how to use. And practice a restore... grab a 360 for the day and restore to it. It'll cost you a buck or two, but you'll sleep better.
And if you're me, you'll find out why backing up to home really sucks for full restores.
EDIT: And I might as well plug my personal backup methods:
0. Linode's backup service (ideal for full restores, not to be relied upon yet)
1. BackupPC on my home server (ideal for full LAN restores and single-file restores; stores ~3 months of data with pooling across machines)
2. Keyfobs with tarballs generated by BackupPC and moved off-site monthly (ideal for full restores and sphincter-clenching disasters)
3. Experimental backups from BackupPC to S3 (ideal for full restores, somewhat more automated than #2 but slow due to upstream bandwidth constraints)
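For what it's worth, #3 boils down to exporting the latest BackupPC backup as a tarball and pushing it to S3. A rough sketch of the commands involved, assuming BackupPC's `BackupPC_tarCreate` export tool and the AWS CLI (the host and bucket names here are placeholders):

```python
import datetime


def export_commands(host, bucket="my-backups", when=None):
    """Build the shell commands to export a host's most recent BackupPC
    backup and copy it to S3.  BackupPC_tarCreate's -n -1 selects the
    latest backup; -s / selects the whole share."""
    when = when or datetime.date.today()
    stamp = when.strftime("%Y-%m-%d")
    tarball = f"/tmp/{host}-{stamp}.tar.gz"
    return [
        f"BackupPC_tarCreate -h {host} -n -1 -s / . | gzip > {tarball}",
        f"aws s3 cp {tarball} s3://{bucket}/{host}/{stamp}.tar.gz",
        f"rm {tarball}",
    ]
```

The slow part is the `aws s3 cp`, which is bounded by your upstream bandwidth; everything before it is local.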
Also, most of my works-in-progress are stored on Dropbox, which is synced across all of my computers and backed up by BackupPC. I use git for revision control and a script I wrote to back up my remote IMAP accounts (gmail, live@edu, etc.).
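The IMAP-backup script is nothing fancy; the gist of it is roughly this (server and credentials are placeholders, and this is a sketch of the idea, not the actual script):

```python
import imaplib
import mailbox


def backup_imap(host, user, password, dest="imap-backup.mbox"):
    """Append every message in INBOX to a local mbox file."""
    box = mailbox.mbox(dest)
    conn = imaplib.IMAP4_SSL(host)
    try:
        conn.login(user, password)
        conn.select("INBOX", readonly=True)   # read-only: never touch flags
        _, data = conn.search(None, "ALL")
        for num in data[0].split():
            _, msg = conn.fetch(num, "(RFC822)")
            box.add(msg[0][1])                # raw RFC822 message bytes
    finally:
        box.close()
        conn.logout()


if __name__ == "__main__":
    backup_imap("imap.gmail.com", "me@example.com", "app-password")
```

The resulting mbox files land in a directory that BackupPC already covers, so the mail rides along with everything else.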
I... think of too many worst-case scenarios.