
Homelab Backup Strategy: 3-2-1 Without the Hassle

A practical backup strategy for your homelab that actually gets done — what to back up, where to put it, and how to automate it without overcomplicating it.


Most homelab backup strategies exist as good intentions. You know you should do backups. You think about doing backups. And then a drive fails and you discover that thinking about backups and actually having backups are two very different things.

I’ve been there. Here’s what I run now, why it’s designed the way it is, and how to get it working without turning it into a project that takes longer than the homelab itself.

The 3-2-1 Rule, Quickly

3 copies of your data. 2 different storage types. 1 copy offsite.

That’s it. The rule isn’t complicated — the hard part is actually implementing it. Most homelab setups fail at “1 copy offsite” because offsite backup requires either a service subscription or a second physical location, both of which involve friction.

We’ll get to that. First, figure out what actually needs backing up.

What to Back Up

Not everything needs the same treatment. In a Docker-based homelab, there are two categories:

Critical: Container config directories

These are small and irreplaceable. If you lose them, you’re not just reinstalling software — you’re rebuilding configuration, regenerating user data, and potentially losing years of accumulated state.

What this looks like in practice: the bind-mounted config directories for each container, e.g. /opt/vaultwarden, /opt/pihole, /opt/uptime-kuma.

These are typically small — a few gigabytes total for most setups. Back them up daily. Keep two weeks of history.

Important: Media and large data

Photos, videos, documents, music — stuff that exists somewhere else (your phone, a camera card, the original source) but that you’ve organized on the homelab. Back this up, but on a different schedule and with different priorities. Weekly is usually fine. These are harder to lose completely because the originals usually exist somewhere.
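A weekly schedule for this tier can be a single cron line (the paths here are illustrative, not from any particular setup):

```
# crontab: Sundays at 4 AM, mirror the media library to the backup drive
0 4 * * 0 rsync -a /srv/media/ /mnt/backup-drive/media/
```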

Skip: Recreatable things

Downloaded content (movies, TV shows you can re-download), Docker image layers, OS installs, anything you could rebuild from scratch in under an hour. Don’t waste backup space on these.

Copy 1: Local Backup on the Same Server

The easiest backup to implement. A cron job that tarballs your config directories and writes them to a second location on the same machine.

#!/bin/bash
BACKUP_DIR="/backup/daily"
DATE=$(date +%Y%m%d)

mkdir -p "$BACKUP_DIR"

# Back up each service config directory
for service in vaultwarden pihole uptime-kuma; do
    tar -czf "$BACKUP_DIR/${service}-${DATE}.tar.gz" "/opt/${service}/"
done

# Keep 14 days, delete older backups
find "$BACKUP_DIR" -name "*.tar.gz" -mtime +14 -delete

Save this as /opt/scripts/backup.sh, make it executable (chmod +x), and schedule it:

crontab -e

Add:

0 3 * * * /opt/scripts/backup.sh

This runs at 3 AM every day. That’s copy 1. It protects against accidental deletion and file corruption — not against drive failure, because it’s on the same hardware.

Copy 2: Local Backup on Different Hardware

Copy 2 goes to different physical media. Options in order of convenience:

USB drive or external hard drive — Plug it in, mount it, point your backup script at it. A 1TB USB drive costs $25 and holds years of homelab config backups. Mount it persistently via /etc/fstab.

Network-attached storage (NAS) — If you have a NAS on your network (a Synology, TrueNAS box, or even another server), rsync your backup directory there.

Second server — If you have more than one machine in your homelab, back up each to the other. Symmetric offsite-ish protection within your local network.
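For the USB-drive option, a persistent mount entry in /etc/fstab might look like this (the UUID is a placeholder; find the real one with blkid):

```
# /etc/fstab — mount the backup drive at boot
UUID=1234-abcd-5678-ef90  /mnt/backup-drive  ext4  defaults,nofail  0  2
```

The nofail option keeps the system booting even if the drive is unplugged.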

Update your backup script to also sync to the second location:

rsync -av --delete /backup/daily/ /mnt/backup-drive/homelab/

Now you have 2 copies on 2 different storage devices. Still need the offsite copy.

Copy 3: Offsite Backup

This is where most homelab backups fail. Offsite means the backup survives a fire, flood, or theft at your location. That requires getting data out of your house.

Two approaches that work:

Cloud object storage (Backblaze B2 or Wasabi)

Both are cheap: Backblaze B2 is $6/TB/month, Wasabi is $7/TB/month. For homelab config directories (usually under 10GB), you’re spending less than $1/month.

Use rclone to sync your backups to the cloud:

# Install rclone
curl https://rclone.org/install.sh | sudo bash

# Configure it
rclone config

Follow the prompts to set up a Backblaze B2 remote. Then add a sync step to your backup script:

rclone sync /backup/daily/ b2:your-bucket-name/homelab/ --transfers=4

Encrypt with rclone before uploading — you don’t want your Vaultwarden data sitting unencrypted in a cloud bucket:

# Set up an encrypted remote on top of your B2 remote
rclone config
# Choose 'crypt', point at your B2 remote
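After both config runs, ~/.config/rclone/rclone.conf should contain two remotes along these lines (the remote names and bucket are illustrative, and rclone stores the passwords in obscured form):

```
[b2]
type = b2
account = <keyID>
key = <applicationKey>

[b2-crypt]
type = crypt
remote = b2:your-bucket-name
password = <generated by rclone config>
```

Syncing to b2-crypt:homelab/ then writes encrypted files into the bucket's homelab path; only the b2-crypt remote can read them back.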

A friend or family member’s house

The old-school approach. Drop a drive at someone’s place and swap it periodically. Works, costs nothing, requires a trusted person and remembered swaps. Most people don’t actually maintain this.

Cloud storage is easier.

Automating the Full Stack

Final backup script that covers all three copies:

#!/bin/bash
set -e

BACKUP_DIR="/backup/daily"
REMOTE_DIR="/mnt/backup-drive/homelab"
DATE=$(date +%Y%m%d)

mkdir -p "$BACKUP_DIR"

# Local copy
for service in vaultwarden pihole uptime-kuma immich; do
    tar -czf "$BACKUP_DIR/${service}-${DATE}.tar.gz" "/opt/${service}/" 2>/dev/null || true
done

# Remove backups older than 14 days
find "$BACKUP_DIR" -name "*.tar.gz" -mtime +14 -delete

# Copy 2: external drive
rsync -av --delete "$BACKUP_DIR/" "$REMOTE_DIR/"

# Copy 3: cloud (rclone to encrypted B2 remote)
rclone sync "$BACKUP_DIR/" "b2-crypt:homelab/"

echo "Backup completed: $(date)"

Testing Your Backups

A backup you’ve never tested is a hypothesis, not a backup.

Once a month: pick one backup archive and restore it to a temp directory. Verify the files are actually there and readable.

tar -tzf /backup/daily/vaultwarden-20260410.tar.gz | head -20
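Listing contents confirms the archive is readable; the fuller monthly check restores it into a temp directory and compares. A self-contained sketch of that round trip (it builds a throwaway archive, so the commands are safe to run anywhere):

```shell
# Build a throwaway archive standing in for a real backup
SRC=$(mktemp -d)
echo "demo config" > "$SRC/config.yml"
tar -czf /tmp/demo-backup.tar.gz -C "$SRC" .

# Restore to a temp directory and verify the file survived the round trip
RESTORE=$(mktemp -d)
tar -xzf /tmp/demo-backup.tar.gz -C "$RESTORE"
diff "$SRC/config.yml" "$RESTORE/config.yml" && echo "restore OK"
```

On a real server, substitute one of your actual archives for the demo tarball and a scratch directory for $RESTORE.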

Once a quarter: actually restore one service from backup to a test location and verify it works. This is the only way to know your backup strategy is actually sound.

Proxmox Snapshots (If You’re Running VMs)

If your homelab runs on Proxmox, VM snapshots are an additional safety net — not a replacement for proper backups, but useful for quick rollbacks before updates.

The Proxmox snapshots guide covers the specifics.

The Honest Assessment

This strategy takes a few hours to set up and then runs itself. The ongoing maintenance is checking that the cron job ran (look at file timestamps occasionally) and testing a restore periodically.
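The "did the cron job run" check can itself be scripted. A small sketch — the function name and the two-day threshold are my own choices, not part of the scripts above:

```shell
# Warn when no archive in a directory is newer than N days
check_backup_age() {
    local recent
    recent=$(find "$1" -name "*.tar.gz" -mtime "-$2" 2>/dev/null | wc -l)
    if [ "$recent" -gt 0 ]; then
        echo "OK: $recent recent archive(s) in $1"
    else
        echo "WARNING: no archive newer than $2 days in $1"
        return 1
    fi
}

# e.g. check_backup_age /backup/daily 2   # alert if daily backups stalled
```

Run it from cron and pipe the WARNING line to whatever notification channel you already watch.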

What it protects against: accidental deletion, drive failure, software corruption, ransomware (as long as the malware can’t reach and overwrite your offsite copy too), losing your house.

What it doesn’t protect against: catastrophic failure of your offsite service, a corrupt backup you didn’t notice for months, services that need special export procedures (Immich’s Postgres database needs a pg_dump, not just a directory copy).

For each service, read the backup documentation. The database services in particular (anything using Postgres or MySQL) need a database dump, not just a volume copy. The per-service guides cover this for each tool.
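For Postgres-backed services, the dump step can slot into the backup script before the tar. A hedged sketch: the container name immich-postgres, the user, and the database name are illustrative, and the guard makes it a no-op on machines without that container:

```shell
DB_DUMP="/tmp/immich-db-$(date +%Y%m%d).sql"   # point at your backup dir in practice
if docker ps --format '{{.Names}}' 2>/dev/null | grep -q '^immich-postgres$'; then
    # pg_dump writes a plain-SQL dump you can restore later with psql
    docker exec immich-postgres pg_dump -U postgres immich > "$DB_DUMP"
else
    echo "-- immich-postgres not running; no dump taken" > "$DB_DUMP"
fi
```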

Start simple. Get copy 1 running this week. Add copy 2 on a drive you probably already own. The offsite copy can come last — something is better than nothing.