Distributed Storage Across Two Sites
"Data Gravity" is a real problem in homelabs. But what happens when your data is 40 miles away?
My storage architecture spans two physical locations: my house and dad's. Different hardware, different purposes, connected by Tailscale's mesh VPN.
Site 1: Dad's House (The Media Fortress)
The Unraid Server
Hardware: Custom build (i5-12400, 32GB RAM)
Storage: Unraid Array (3x 18TB Exos) + 1TB NVMe cache
Role: Media storage and streaming
Dad's Unraid server holds the family media library. Plex runs here, serving content to family members across both networks. The arr stack (Sonarr, Radarr) keeps everything organized.
Why at dad's house? Better upload bandwidth for remote streaming, and he wanted a project. The Unraid UI is approachable enough that he can manage basic tasks without calling me.
The trade-off: I'm not physically present when things go sideways. Remote debugging via Tailscale is my only option. Sometimes the folder structure… evolves in unexpected ways between my visits.
The Synology (Cassiel-Silo)
Hardware: Synology DS920+
Storage: 4x drives in SHR
Role: Critical backups, photo archive
The Synology at dad's handles family photos via Synology Photos and provides backup storage. DSM's interface means family members can actually use it without my help.
Site 2: My House (The Build Lab)
My local infrastructure focuses on development:
- Desktop workstation: Primary development machine
- Build swarm: 62 cores for Gentoo compilation
- Local storage: NVMe for active projects
I had a local Synology (Rigel-Silo) for NFS exports to the build swarm, but it died. The drives are sitting on my desk waiting for a replacement enclosure. For now, binary packages are served from local NVMe.
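With the NAS gone, the swarm nodes can still pull binpkgs from the desktop over HTTP. A sketch of the Portage side on a swarm node, assuming the desktop exposes its binpkg directory via any static file server; the repo name and port are made up, and 10.42.0.100 is the desktop's address from the ping example:

```
# /etc/portage/binrepos.conf on a swarm node (hypothetical entry)
[desktop-binhost]
priority = 10
sync-uri = http://10.42.0.100:8080/binpkgs
```

Nodes then install prebuilt packages with `emerge --getbinpkg`, falling back to local compiles when the desktop has nothing cached.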
Connecting the Sites
Tailscale makes the 40-mile gap invisible:
# From my desktop (10.42.0.100)
ping 192.168.20.50 # Dad's Unraid
64 bytes from 192.168.20.50: icmp_seq=0 ttl=64 time=38.2 ms
Thirty-eight milliseconds. Not LAN speed, but good enough for management and light file access.
Sync Strategy
Rclone for Cross-Site Sync
Critical data syncs between sites:
#!/bin/bash
# Sync important docs from local to dad's Synology
rclone sync ~/Documents synology-remote:/backup/docs
# Sync everything to cloud (disaster recovery)
rclone sync synology-remote:/backup cloud_crypt:/backup --fast-list --transfers 16
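These runs are scheduled, and a cross-site transfer can easily outlast the interval between them. A minimal sketch of a cron wrapper (lock path hypothetical) that skips a run if the previous sync is still going:

```shell
#!/bin/bash
# Hypothetical cron wrapper: take a lock before syncing so two
# runs never overlap. Lock path is an example, not my real one.
LOCKFILE="${TMPDIR:-/tmp}/homelab-sync.lock"

exec 9>"$LOCKFILE"
if ! flock -n 9; then
    echo "previous sync still running, skipping"
    exit 0
fi
echo "lock acquired, syncing"

# The actual transfers, as in the script above:
#   rclone sync ~/Documents synology-remote:/backup/docs
#   rclone sync synology-remote:/backup cloud_crypt:/backup \
#     --fast-list --transfers 16
```

A crontab entry along the lines of `17 3 * * * /usr/local/bin/homelab-sync.sh` (path hypothetical) then makes the nightly run safe even when a big transfer drags on.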
Encrypted Cloud Backup
Google Drive as off-site backup. But I trust no one with raw data.
[gdrive]
type = drive
scope = drive
[cloud_crypt]
type = crypt
remote = gdrive:backups/homelab
password = ********
Every filename and byte is encrypted before leaving my network.
The Architecture
┌──────────────────────────────┐  ┌──────────────────────────────┐
│ My House (10.42.0.x)         │  │ Dad's House (192.168.20.x)   │
│                              │  │                              │
│ ┌──────────────────────────┐ │  │ ┌──────────────────────────┐ │
│ │ Desktop + Build Swarm    │ │  │ │ Unraid (Media)           │ │
│ │ (Active Development)     │ │  │ │ Synology (Backups)       │ │
│ └──────────────────────────┘ │  │ │ Proxmox (VMs)            │ │
│                              │  │ └──────────────────────────┘ │
└──────────────┬───────────────┘  └──────────────┬───────────────┘
               │                                 │
               └────────────────┬────────────────┘
                                │
                        ┌───────┴───────┐
                        │   Tailscale   │
                        │   Mesh VPN    │
                        └───────┬───────┘
                                │
                        ┌───────┴───────┐
                        │ Google Drive  │
                        │  (Encrypted)  │
                        └───────────────┘
Why This Works
Geographic redundancy. If my house burns down, the media library survives. If dad's house has issues, my development environment is unaffected.
Appropriate hardware at each site. Dad gets the user-friendly Synology and media-focused Unraid. I get the build swarm and development tools.
Family involvement. Dad can manage basic Unraid tasks. He occasionally… reorganizes things in creative ways, but that's part of the fun.
Tailscale makes it seamless. I can SSH to any machine at either site. Remote debugging works (mostly). The mesh VPN turned two isolated networks into one distributed homelab.
The Challenges
Latency for large transfers. 38ms is fine for SSH and small files. Moving terabytes requires patience or physical travel with hard drives.
Remote debugging. When something breaks at dad's, I can't just walk over and check the blinking lights. Screenshots over Tailscale and phone calls become my eyes.
Configuration drift. Changes happen at dad's site that I don't always know about. Documenting the "expected state" of remote systems is critical.
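A cheap way to catch drift is to keep the documented state in a file and diff it against reality on a schedule. A sketch with made-up share names; in practice the "actual" list would come from ssh over Tailscale rather than being written locally:

```shell
#!/bin/bash
# Sketch of a drift check: compare the documented list of Unraid
# shares against what the server actually has. Share names and the
# ssh host are examples, not my real ones.
set -u

expected="expected-shares.txt"   # written down after my last visit
actual="actual-shares.txt"

# Documented state:
printf 'appdata\nbackups\nmedia\n' > "$expected"

# Live state; in real use something like:
#   ssh dad-unraid 'ls /mnt/user' | sort > "$actual"
printf 'appdata\nbackups\nmedia\nmovies-new\n' > "$actual"

if diff "$expected" "$actual" > drift.txt; then
    echo "no drift"
else
    echo "drift detected:"
    cat drift.txt
fi
```

Committing the expected-state file to git alongside the rest of the documentation means every "creative reorganization" at least shows up in a diff.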
Lessons Learned
Donโt assume co-location. Distributed storage is harder but more resilient. Plan for it from the start.
Document remote systems obsessively. When you canโt physically access hardware, documentation is your lifeline.
Choose appropriate hardware for each site. Dad doesnโt need a 62-core build swarm. I donโt need 54TB of media storage locally.
Tailscale changes everything. Before subnet routing, managing two sites was a nightmare of port forwarding and dynamic DNS. Now it's just… networking.
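The subnet-routing setup amounts to one node at dad's site advertising his LAN to the tailnet. A sketch of that node's side; the commands are commented out because they need root and a running tailscaled, and the subnet matches his 192.168.20.x range:

```shell
#!/bin/bash
# Sketch of the subnet-router node at dad's site.
SUBNET="192.168.20.0/24"

# Let the node forward packets for its LAN (persisted via sysctl.d):
#   echo 'net.ipv4.ip_forward = 1' | sudo tee /etc/sysctl.d/99-tailscale.conf
#   sudo sysctl -p /etc/sysctl.d/99-tailscale.conf

# Advertise the LAN to the tailnet, then approve the route in the
# Tailscale admin console:
#   sudo tailscale up --advertise-routes="$SUBNET"

# For illustration, just record what would be advertised:
echo "$SUBNET" | tee advertised-route.txt
```

Once the route is approved, machines on my side reach 192.168.20.x addresses directly, which is exactly what the ping example earlier relies on.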
Two sites, 40 miles apart, functioning as one distributed homelab. Not because itโs easy, but because redundancy matters.