Deep Dive: Cloud Backups with Rclone & OpenRC
Local snapshots save you from software failures. Cloud backups save you from hardware failures. This is the story of three failed attempts before I got it right.
The Problem
I have a lot of data I care about:
- Obsidian vaults (years of notes)
- Dotfiles and configurations
- Project repositories
- Family photos
- System snapshots
Local backups are covered. Btrfs snapshots happen hourly. Snapper handles retention. I can roll back my desktop in 2 minutes.
But local backups don’t survive:
- House fires
- Theft
- Simultaneous drive failures
- My own stupidity (`rm -rf /` from the wrong directory)
I needed off-site backups. Encrypted. Automated. Fire-and-forget.
The Architecture
Tool: Rclone (rsync for cloud storage)
Strategy: Push-only sync. The cloud is a dump destination. I rarely read from it, but I write to it daily.
Encryption: Rclone’s “crypt” remote overlay. Files are encrypted locally before upload. Google sees obfuscated filenames and encrypted blobs. They cannot scan the content.
Local File → Rclone Encryption → Google Drive
                    ↓
           (Only I have the key)
Target: Google Drive (15GB free, cheap paid tiers, decent API)
Attempt #1: The Naive Approach
The plan: Run rclone sync manually when I remember.
rclone sync ~/Documents gdrive:Documents
What happened: I forgot. For three weeks. Then my laptop’s SSD developed bad sectors. The Documents folder was partially corrupted.
The lesson: Manual backups don’t exist. If it’s not automated, it won’t happen.
Attempt #2: The Cron Job
The plan: Cron job. Run daily at 3 AM.
# /etc/cron.d/rclone-backup
0 3 * * * commander rclone sync /home/commander/Documents secret:Documents --log-file /var/log/rclone.log
What happened:
- The job ran at 3 AM
- My laptop was suspended (lid closed)
- Nothing actually ran
- The log file said “success” from two weeks ago
- I didn’t notice until I needed a restore
The lesson: Cron assumes the machine is on. Laptops aren’t servers.
Attempt #3: The Shutdown Hook
The plan: Run backups when I shut down. I always shut down at the end of the day. Perfect trigger.
The implementation: OpenRC shutdown hook.
#!/sbin/openrc-run
# /etc/init.d/rclone-backup
description="Rclone backup on shutdown"
command="/usr/bin/rclone"
command_args="sync /home/commander/Obsidian secret:Obsidian --checksum"
depend() {
    need net
}
stop() {
    ebegin "Running rclone backup"
    ${command} ${command_args}
    eend $?
}
What happened:
- I ran `shutdown -h now`
- Network went down
- Then the backup script tried to run
- No network = no backup
- Shutdown completed anyway
OpenRC’s shutdown sequence kills the network before running stop() on my service. By the time rclone tried to upload, there was no route to Google.
The lesson: Shutdown hooks are tricky. Dependency ordering matters, and network goes down early.
Attempt #4: The Working Solution
The revelation: Stop trying to be clever. Just run it before shutdown, not during.
I created a “maintenance mode” command that I run before closing my laptop:
#!/bin/bash
# /usr/local/bin/maintenance
set -e
echo "=== ARGO OS MAINTENANCE MODE ==="
echo ""
# 1. System updates (from binhost)
echo "[1/4] Checking for updates..."
apkg update --quiet
# 2. Create snapshot
echo "[2/4] Creating snapshot..."
snapper -c root create --description "Pre-maintenance $(date +%Y-%m-%d)"
# 3. Sync Obsidian to cloud
echo "[3/4] Syncing Obsidian vault..."
rclone sync /home/commander/Documents/obsidian-main-vault secret:Obsidian \
    --checksum \
    --delete-during \
    --transfers 8 \
    --progress
# 4. Sync configs
echo "[4/4] Syncing configurations..."
rclone sync /home/commander/.config secret:Configs \
    --checksum \
    --exclude "chromium/**" \
    --exclude "Code/**" \
    --exclude "**/Cache/**" \
    --progress
echo ""
echo "=== MAINTENANCE COMPLETE ==="
echo "Safe to shutdown."
Why it works:
- Runs while I’m still at the keyboard
- Network is guaranteed up
- I see the progress
- Errors are visible
- It’s a conscious choice, not a background task I forget about
Usage:
maintenance && shutdown -h now
Or just maintenance if I’m not shutting down.
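One guard I've been tempted to add at the top of the script (a sketch, not something the version above actually does): bail out early if the crypt remote isn't reachable, so a dead connection fails fast instead of after waiting through the update and snapshot steps.
# Hypothetical pre-flight check near the top of /usr/local/bin/maintenance.
# `rclone lsd` just lists top-level directories on the remote; if that fails,
# the network (or the remote) is down and there is no point continuing.
if ! rclone lsd secret: > /dev/null 2>&1; then
    echo "ERROR: backup remote unreachable, check the network" >&2
    exit 1
fi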
Rclone Configuration Deep Dive
Step 1: Create Google Drive Remote
rclone config
- `n` (new remote)
- Name: `gdrive`
- Type: `drive` (Google Drive)
- Client ID: (use your own from Google Cloud Console; the default shared ones are rate-limited)
- Client Secret: (from Google Cloud Console)
- Scope: `drive` (full access)
- Follow the OAuth flow in the browser
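A quick sanity check before layering encryption on top (assuming the remote is named `gdrive` as above):
rclone lsd gdrive:    # lists top-level folders; proves the OAuth token works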
Step 2: Create Encrypted Overlay
rclone config
- `n` (new remote)
- Name: `secret`
- Type: `crypt`
- Remote: `gdrive:encrypted-backups` (folder on gdrive where encrypted files go)
- Filename encryption: `standard`
- Directory name encryption: `true`
- Password: (strong, random, SAVE THIS)
- Salt: (optional second password, also save)
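To convince yourself the crypt layer is doing its job, push a throwaway file through it and compare what each remote sees. A quick test, assuming the remote names above:
echo "hello" > /tmp/crypt-test.txt
rclone copy /tmp/crypt-test.txt secret:crypt-test/
rclone ls secret:crypt-test/            # readable filename through the crypt layer
rclone ls gdrive:encrypted-backups/     # obfuscated names, encrypted blobs
rclone purge secret:crypt-test/         # remove the test folder again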
Critical: The passwords are the encryption keys. Lose them, lose your data. I store mine in a password manager AND printed in a safe deposit box.
The Config File
# ~/.config/rclone/rclone.conf
[gdrive]
type = drive
client_id = YOUR_CLIENT_ID.apps.googleusercontent.com
client_secret = YOUR_SECRET
scope = drive
token = {"access_token":"...","token_type":"Bearer",...}
team_drive =
[secret]
type = crypt
remote = gdrive:encrypted-backups
password = ENCRYPTED_PASSWORD_HASH
password2 = ENCRYPTED_SALT_HASH
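One caveat worth knowing: unless you set a config password, the password values in rclone.conf are only obscured, not encrypted, so the config file itself is sensitive and belongs in the same careful backup as the passwords. If you ever need the plaintext key back out of the file, rclone can reverse the obscuring:
# Recover the plaintext crypt password from an obscured config value
rclone reveal ENCRYPTED_PASSWORD_HASH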
What Gets Backed Up
Obsidian vaults (critical):
- All three vaults
- Daily sync
- ~500MB total
Dotfiles (important):
- `~/.config/` (excluding browser caches)
- `~/.local/share/` (application data)
- `~/.ssh/` (encrypted separately, stored offline too)
Btrfs snapshots (weekly):
- Full system snapshots via `btrfs send`
- Compressed with zstd
- Huge files, but incremental
What Doesn’t Get Backed Up
Binary caches:
- `/var/cache/binpkgs/` (can be rebuilt)
- Browser caches (can be rebuilt)
- Node modules, target directories, etc.
Media:
- Movies, TV shows (stored on NAS, not cloud)
- Replaceable downloads
Secrets:
- SSH keys go to a separate, more paranoid backup
- GPG keys same
- Never in the same place as regular data
The Btrfs Send Pipeline
For full system backup, I send Btrfs snapshots to the cloud:
#!/bin/bash
# /usr/local/bin/backup-system
set -e
SNAPSHOT_DIR="/.snapshots"
BACKUP_DIR="/tmp/btrfs-backup"
REMOTE="secret:system-backups"
# Make sure the staging directory exists before writing into it
mkdir -p "$BACKUP_DIR"
# Find latest snapshot
LATEST=$(ls -t "$SNAPSHOT_DIR" | head -1)
SNAP_PATH="$SNAPSHOT_DIR/$LATEST/snapshot"
echo "Backing up snapshot: $LATEST"
# Create parent-based incremental send (if a parent snapshot exists)
PARENT=$(ls -t "$SNAPSHOT_DIR" | sed -n '2p')
if [ -n "$PARENT" ]; then
    PARENT_PATH="$SNAPSHOT_DIR/$PARENT/snapshot"
    btrfs send -p "$PARENT_PATH" "$SNAP_PATH" | \
        zstd -T0 -3 > "$BACKUP_DIR/incremental-$LATEST.btrfs.zst"
else
    btrfs send "$SNAP_PATH" | \
        zstd -T0 -3 > "$BACKUP_DIR/full-$LATEST.btrfs.zst"
fi
# Upload
rclone copy "$BACKUP_DIR/" "$REMOTE/" --progress
# Cleanup local temp
rm -f "$BACKUP_DIR"/*.btrfs.zst
echo "Backup complete: $LATEST"
Incremental sends: Btrfs only sends the differences between snapshots. A weekly snapshot might only be a few hundred MB of changes, not the full 50GB system.
Compression: zstd at level 3 is fast and effective. Higher levels save space but take forever.
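zstd's built-in benchmark mode makes it easy to check that trade-off on your own data before settling on a level. A sketch (the filename is just a stand-in for a send stream saved locally):
# Benchmark compression levels 3 through 19 on a sample stream
zstd -b3 -e19 sample-send-stream.raw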
Restoration
Obsidian restore (common):
rclone sync secret:Obsidian ~/Documents/obsidian-restored/
Full system restore (rare, tested once):
# Boot from live USB (with rclone configured and the crypt passwords at hand)
# Create a fresh Btrfs filesystem and mount it at /mnt/newroot
# Pull the archive down from the crypt remote
rclone copy secret:system-backups/full-20260115.btrfs.zst .
# Restore the snapshot
zstd -d full-20260115.btrfs.zst -c | btrfs receive /mnt/newroot/
# Fix fstab, reinstall bootloader, pray
I tested this once on a spare drive. It worked. I hope I never need it for real.
Monitoring
How do I know backups are actually happening?
Option 1: Check the log
tail -20 /var/log/rclone.log
Option 2: Check remote timestamps
rclone lsl secret:Obsidian --max-depth 1 | head -5
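Option 2 is easy to misread at a glance, so I've sketched a pass/fail wrapper around it (hypothetical helper, assumes GNU date and the secret: remote from above):
#!/bin/bash
# /usr/local/bin/check-backup-age (hypothetical helper, not part of the setup above)
# Fail if the newest file in the Obsidian remote is older than 48 hours.
NEWEST=$(rclone lsl secret:Obsidian | awk '{print $2 "T" $3}' | sort | tail -1)
NEWEST_EPOCH=$(date -d "${NEWEST%%.*}" +%s)
AGE_HOURS=$(( ($(date +%s) - NEWEST_EPOCH) / 3600 ))
if [ "$AGE_HOURS" -gt 48 ]; then
    echo "WARNING: last Obsidian upload was ${AGE_HOURS}h ago"
    exit 1
fi
echo "OK: last upload was ${AGE_HOURS}h ago"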
Option 3: Uptime Kuma
I have an Uptime Kuma instance with a push monitor. The maintenance script hits the monitor's push URL on success:
# At end of maintenance script
curl -fsS -m 10 --retry 5 "https://uptime.example.com/api/push/backup-token?status=up&msg=OK" > /dev/null
If I don’t run maintenance for 48 hours, Uptime Kuma alerts me.
Cost Analysis
Google Drive:
- 15GB free tier: Enough for Obsidian vaults and configs
- 100GB tier ($2/month): Enough for system snapshots
- 2TB tier ($10/month): Enough for everything including media
I use the 100GB tier. $24/year for encrypted off-site backups. Worth it.
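To see how close I'm getting to that 100GB, rclone can report both sides of the crypt layer:
rclone size secret:      # decrypted size of everything behind the crypt remote
rclone about gdrive:     # raw quota used vs. total on the Drive account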
Alternatives I considered:
| Service | Pros | Cons |
|---|---|---|
| Backblaze B2 | Cheap ($0.005/GB) | More complex API |
| AWS S3 Glacier | Very cheap for archives | Slow retrieval |
| Wasabi | No egress fees | Less mature |
| Self-hosted (remote NAS) | No monthly cost | Not truly off-site |
Google Drive won because I already had an account, the API is good, and rclone supports it natively.
The Philosophy
Backups exist for the disaster you haven’t imagined yet.
Local snapshots protect against “I deleted the wrong file” and “the update broke my system.”
Remote backups protect against “the house burned down” and “someone stole my laptop.”
Encryption protects against “Google got hacked” and “I left my laptop on a train.”
All three layers. Always.
Current Status
What’s automated:
- Btrfs snapshots (hourly, via snapper)
- Obsidian sync to cloud (daily, via maintenance script)
- Config sync to cloud (daily, via maintenance script)
- System snapshots to cloud (weekly, manual trigger)
What’s manual:
- Running the maintenance script
- Weekly full system backup trigger
What I still want:
- Automatic daily trigger (systemd timers aren't an option since I run OpenRC, so I still need to figure out an anacron-style approach; see the sketch after this list)
- Better monitoring (dashboard showing last backup time per category)
- Restore testing (quarterly fire drill)
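The stopgap I keep sketching for that first item is anacron-ish: a stamp file checked from my shell profile that nags me when the last run is stale. A rough sketch, assuming the maintenance script is updated to touch ~/.last-maintenance on success (neither piece exists today):
# Hypothetical snippet for ~/.bash_profile; assumes /usr/local/bin/maintenance
# ends with `touch ~/.last-maintenance` after a successful run.
STAMP="$HOME/.last-maintenance"
if [ ! -e "$STAMP" ] || [ -n "$(find "$STAMP" -mmin +1440)" ]; then
    echo ">>> maintenance has not run in the last 24 hours"
fi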
The system is good enough. It’s not perfect. But “good enough and running” beats “perfect and never implemented.”
Related: Argo OS Journey Part 3: The Cloud Layer, Btrfs & Snapper Guide.