12 Years of Homelab Evolution: From Seedbox to Massive Storage

I've been running a homelab for over 12 years now.

What started as a Raspberry Pi running seedbox scripts has evolved into a multi-site infrastructure with massive storage, distributed compilation clusters, and a custom Linux distribution. Along the way, I've broken things spectacularly, learned painful lessons, and accidentally built something that actually works.

This is the timeline.


The Eras

| Era     | Years     | Hypervisor           | Storage     | Network   | Defining Moment                  |
|---------|-----------|----------------------|-------------|-----------|----------------------------------|
| Seedbox | 2012-2014 | None                 | USB drives  | Flat      | "It works!"                      |
| ESXi    | 2016-2019 | VMware               | iSCSI       | VLANs     | "Why is licensing so expensive?" |
| Proxmox | 2019-2023 | Proxmox VE           | ZFS/Btrfs   | Tailscale | "This is actually free?"         |
| Argo OS | 2023-Now  | Proxmox + Bare Metal | Distributed | Mesh      | "I should build my own distro."  |

Era 1: The Seedbox Days (2012-2014)

The Setup:

  • Raspberry Pi Model B (256MB RAM)
  • USB 2.0 external drive
  • rtorrent + rutorrent
  • OpenVPN to access from work

What I learned: Linux can run on anything. SSH is magic. Automation beats manual work.

The Pi sat in a corner of my apartment, silently downloading. It cost $35 and used 2.5 watts. I could access it from anywhere via a clunky OpenVPN tunnel that dropped every 20 minutes.

It was terrible by modern standards. It was also the start of everything.

The failure that ended this era: The USB drive failed. No backups. Lost everything. The Pi itself was fine—it just had nothing to serve.

Lesson: Storage is not optional.


Era 2: The ESXi Years (2016-2019)

I got serious. Bought actual server hardware.

The Hardware:

  • Dell PowerEdge R710 (eBay, $400)
  • 32GB ECC RAM
  • iSCSI SAN (Synology DS1815+)
  • 8x 4TB drives in RAID 6

The Stack:

  • VMware ESXi 6.5
  • Windows Server 2016 (AD, DNS, DHCP)
  • Ubuntu VMs for everything else
  • VLANs for isolation (management, storage, DMZ)

This was "proper" infrastructure. I learned Active Directory, group policies, proper networking with VLANs, SAN protocols, and enterprise monitoring. I felt like a real sysadmin.

The costs:

  • vCenter license: $0 (evaluation mode, technically)
  • Time spent fighting vSphere: Infinite
  • Electricity bill: +$50/month (that R710 was hungry)

What I learned: Enterprise software is powerful and complex. VLANs are essential for security. SAN storage is rock solid but expensive. Windows Server exists (I don't miss it).

The failure that ended this era: VMware announced changes to their free tier. The writing was on the wall—ESXi for homelabs was going to get harder. Plus, I was tired of working around licensing restrictions.

Also, the R710's fans sounded like a jet engine. My wife threatened divorce.


Era 3: The Proxmox Migration (2019-2023)

The Revelation: Everything ESXi did, Proxmox did for free. With better Linux support. And no licensing anxiety.

The Migration:

Week 1: Set up Proxmox test node
Week 2: Migrated non-critical VMs
Week 3: Migrated critical VMs
Week 4: Decommissioned ESXi
Week 5: Sold the R710 on eBay

The New Hardware:

  • Dell OptiPlex 7050 SFF (silent, 35W idle)
  • 64GB DDR4 RAM
  • NVMe boot + SATA SSD storage
  • Same Synology NAS (kept the investment)

The Stack:

  • Proxmox VE 7.x → 8.x
  • LXC containers (lighter than VMs)
  • ZFS for local storage (snapshots!)
  • Tailscale for networking (goodbye OpenVPN)
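
The snapshot habit from this list is worth showing concretely. Here's a minimal sketch of a nightly snapshot script; the dataset name `rpool/data` is an assumption (adjust to your pool), and `DRY_RUN=1` (the default here) prints the commands instead of running them, so the sketch is safe on a machine without ZFS:

```shell
#!/bin/sh
# Sketch: timestamped nightly ZFS snapshot. "rpool/data" is a
# hypothetical dataset name; DRY_RUN=1 echoes commands instead of
# executing them.
DATASET="rpool/data"
SNAP="${DATASET}@auto-$(date +%Y-%m-%d)"

run() { if [ "${DRY_RUN:-1}" = "1" ]; then echo "$@"; else "$@"; fi; }

run zfs snapshot "$SNAP"

# Undoing a bad change later is one command:
# zfs rollback rpool/data@auto-2024-01-01
```

Wire this into cron or a systemd timer and the "snapshots!" in the list above becomes an automatic safety net rather than something you remember to do before risky changes.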

The Game Changer: LXC Containers

VMs are heavy. Each one needs its own kernel, its own memory reservation, its own disk image.

LXC containers share the host kernel. They're lighter, faster, and use fewer resources. For Linux workloads, they're almost always the right choice.

| Workload       | VM       | LXC       | Winner |
|----------------|----------|-----------|--------|
| File server    | 2GB RAM  | 256MB RAM | LXC    |
| Docker host    | 4GB RAM  | 1GB RAM   | LXC    |
| Windows        | Required | N/A       | VM     |
| Untrusted code | Safer    | Risky     | VM     |

I went from running 8 VMs to running 3 VMs + 15 LXC containers. Same hardware. More services.
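
Spinning up one of those containers is a one-liner with Proxmox's `pct` tool. This is a sketch, not my exact config: the VMID, template filename, and storage names (`local`, `local-zfs`) are assumptions that vary per install:

```shell
# Sketch: create a small unprivileged LXC container on Proxmox.
# VMID 201, the Debian template name, and storage names are
# placeholders for illustration.
pct create 201 local:vztmpl/debian-12-standard_12.2-1_amd64.tar.zst \
  --hostname files \
  --memory 256 --cores 2 \
  --rootfs local-zfs:8 \
  --net0 name=eth0,bridge=vmbr0,ip=dhcp \
  --unprivileged 1
pct start 201
```

Note the 256MB memory line: that's the whole point of the table above. A file server that would demand a 2GB VM runs comfortably in a container a tenth the size.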

What I learned: Open source wins. Containers beat VMs for most workloads. ZFS snapshots are life-saving. Tailscale is magic.

The failure that almost killed me: A Proxmox upgrade broke ZFS. 16 hours of recovery. I now test upgrades in a VM first.


Era 4: The Argo OS Era (2023-Present)

At some point, "homelab" became "distributed infrastructure."

The Current Inventory:

Hypervisors

| Host           | Hardware          | Role              | Location    |
|----------------|-------------------|-------------------|-------------|
| Arcturus-Prime | Dell OptiPlex 7050 | Primary Proxmox   | Local       |
| Altair-Link    | Dell OptiPlex 7040 | Secondary Proxmox | Local       |
| Tarn-Host      | HP ProDesk 600    | Remote Proxmox    | Remote site |

Storage

| Device                       | Capacity      | Type   | Role                |
|------------------------------|---------------|--------|---------------------|
| Synology DS1821+             | Primary Array | NAS    | Local backup, media |
| Unraid (Meridian-Mako-Silo)  | Archive Array | NAS    | Remote archive, Plex |
| Synology DS920+              | Backup Array  | NAS    | Remote backup       |
| Various NVMe                 | High Speed    | Direct | VM storage          |

Total usable storage: Massive

Compute

| Machine         | CPU      | RAM          | Role                   |
|-----------------|----------|--------------|------------------------|
| Capella-Outpost | i7-4790K | 32GB         | Daily driver (Argo OS) |
| Tau-Beta        | i7-4771  | 32GB         | Build drone, testing   |
| Various VMs     | -        | ~128GB total | Services               |

Network

The mesh: Tailscale connects everything. Local is 10.42.0.0/24. Remote is 192.168.x.0/24. Tailscale makes them feel like one network.

Key services:

  • Subnet routing from LXC containers
  • Exit nodes for mobile devices
  • ACLs for access control
  • Auto-approvers for route advertisement

Why Tailscale? I spent 16 months fighting port forwarding, dynamic DNS, and VPN tunnels that dropped. Tailscale just works. 38ms latency to remote sites. No holes in my firewall.
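
The subnet-routing item above boils down to two steps on the container. This is a sketch using the standard `tailscale` CLI flags; the subnet is this lab's, and the sysctl filename is just a convention:

```shell
# Sketch: turn an LXC container into a Tailscale subnet router for
# the local LAN.

# 1. Enable IP forwarding (filename is a convention, not required):
echo 'net.ipv4.ip_forward = 1' >> /etc/sysctl.d/99-tailscale.conf
sysctl -p /etc/sysctl.d/99-tailscale.conf

# 2. Advertise the local subnet and accept routes from other nodes:
tailscale up --advertise-routes=10.42.0.0/24 --accept-routes
```

The advertised route still has to be approved in the admin console before other nodes can use it, which is exactly what the auto-approvers in the list above take care of.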


The Storage Philosophy

After losing data twice (USB drive in 2014, Proxmox ZFS scare in 2021), I'm paranoid about storage.

The 3-2-1 Rule

  • 3 copies of important data
  • 2 different storage types
  • 1 offsite

Implementation:

  1. Primary: Local NAS (Synology, RAID/SHR)
  2. Secondary: Remote NAS (replicated overnight)
  3. Tertiary: Cloud (encrypted, rclone to Google Drive)
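
The tertiary copy is the least glamorous and the most important one. A hedged sketch of the nightly push, where the crypt remote name (`gdrive-crypt`, configured once via `rclone config`) and the paths are assumptions:

```shell
# Sketch: encrypted offsite sync to Google Drive through an rclone
# crypt remote. Remote name and paths are placeholders.
rclone sync /volume1/important gdrive-crypt:backup \
  --transfers 4 \
  --bwlimit 10M \
  --log-file /var/log/rclone-backup.log
```

Because the crypt remote encrypts file contents and names client-side, Google only ever sees ciphertext.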

Filesystem Choices

| Use Case           | Filesystem              | Why                               |
|--------------------|-------------------------|-----------------------------------|
| Proxmox VM storage | ZFS                     | Snapshots, compression, reliability |
| NAS arrays         | Btrfs/ext4 on Synology  | It's what Synology uses           |
| Unraid             | XFS + parity            | Unraid architecture               |
| Desktop (Argo OS)  | Btrfs                   | Snapshots for rollback            |

The Upgrade Cycle

I buy used enterprise drives. 8TB HGST drives are $40-60 on eBay. They have SMART data you can check, and they've survived a datacenter burn-in.

My rule: If a drive has >50,000 power-on hours, I only use it for non-critical data. If it has reallocated sectors, I don't use it at all.
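
That rule is easy to script against `smartctl -A` output. Here's a sketch; the sample lines below are fabricated for illustration (a real check would pipe `smartctl -A /dev/sdX` in instead):

```shell
#!/bin/sh
# Sketch: apply the ">50,000 hours = non-critical only, any
# reallocated sectors = reject" rule to smartctl attribute output.
check_drive() {
  awk '
    $2 == "Power_On_Hours"        { hours = $10 }
    $2 == "Reallocated_Sector_Ct" { realloc = $10 }
    END {
      if (realloc > 0)        print "reject"
      else if (hours > 50000) print "non-critical only"
      else                    print "ok"
    }'
}

# Fabricated sample attributes (field 10 is the raw value):
printf '%s\n' \
  "  9 Power_On_Hours          0x0032   001   001   000    Old_age   Always       -       61234" \
  "  5 Reallocated_Sector_Ct   0x0033   100   100   010    Pre-fail  Always       -       0" \
  | check_drive
```

Run against those sample values, the drive lands in the "non-critical only" bucket: plenty of hours on the clock, but no reallocated sectors yet.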


The Compilation Problem

This is where Argo OS was born.

I run Gentoo on my desktop. Gentoo compiles everything from source. A full system update can take 48 hours on an i7-4790K.

The solution: Don't compile on the desktop. Compile on everything else.

The Build Swarm

| Drone          | Cores | Location         | Role         |
|----------------|-------|------------------|--------------|
| drone-Izar     | 16    | Local VM         | Primary      |
| drone-Tarn     | 14    | Remote VM        | Secondary    |
| drone-Meridian | 24    | Docker on Unraid | Heavy lifter |
| Tau-Beta       | 8     | Bare metal       | Backup       |

Total: 62 cores for parallel compilation.

How it works:

  1. Orchestrator receives package list
  2. Distributes packages to drones based on availability
  3. Drones compile and upload binary packages
  4. My desktop downloads binaries instead of compiling
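
Step 2 is the interesting part. The real orchestration happens over SSH, but the assignment logic alone can be sketched as a least-loaded scheduler weighted by core count (drone names and core counts are from the table above; the logic itself is illustrative, not my exact implementation):

```shell
#!/bin/sh
# Sketch: assign packages to the drone with the lowest load relative
# to its core count. Reads package names on stdin, prints
# "drone package" pairs.
assign() {
  awk '
    BEGIN {
      n = split("drone-Izar:16 drone-Tarn:14 drone-Meridian:24 Tau-Beta:8", d, " ")
      for (i = 1; i <= n; i++) {
        split(d[i], kv, ":"); name[i] = kv[1]; cores[i] = kv[2]
      }
    }
    {
      # Pick the drone whose queue is smallest for its size.
      best = 1
      for (i = 2; i <= n; i++)
        if (load[i] / cores[i] < load[best] / cores[best]) best = i
      load[best]++
      print name[best], $0
    }'
}

printf '%s\n' firefox gcc llvm rust | assign
```

In a real run the "load" would be measured per package weight (LLVM is not one unit of work), but the shape is the same: big drones absorb more of the queue, and the desktop never compiles anything.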

Result: Firefox update went from 45 minutes (compile) to 30 seconds (binary install).


The Lessons (Chronological)

2012: Backups Are Not Optional

Lost a USB drive. Lost everything on it. Never again.

2014: Redundancy Means Something

RAID is not backup. But RAID protects against drive failure. Both are necessary.

2016: Enterprise Gear Is Loud

The R710 was powerful. It was also 80dB at idle. Your family will complain.

2017: VLANs Are Essential

Without network segmentation, one compromised device can access everything. VLANs fixed that.

2018: Licensing Is a Trap

Free tiers disappear. Evaluation modes expire. Open source is forever.

2019: Containers Beat VMs

For Linux workloads, LXC uses 1/4 the resources of a full VM. Docker on LXC uses even less.

2020: Tailscale Changes Everything

16 months of VPN pain erased in one afternoon. Mesh networking is the future.

2021: Test Your Backups

I had backups. I'd never tested restoring them. The Proxmox ZFS scare taught me: an untested backup is not a backup.

2022: Snapshots Are Undo Buttons

Btrfs snapshots on my desktop mean I can break anything and rollback in 2 minutes. This changes how aggressively I can experiment.
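
The undo button itself is two commands. A sketch, assuming a Btrfs root mounted at / with a /.snapshots directory (both assumptions; tools like snapper automate this):

```shell
# Sketch: take a pre-change snapshot of a Btrfs root subvolume.
btrfs subvolume snapshot / "/.snapshots/pre-change-$(date +%Y%m%d-%H%M)"

# Rolling back means booting the old snapshot, e.g. by making it
# the default subvolume (get the ID from `btrfs subvolume list /`):
# btrfs subvolume set-default <subvol-id> /
```

Unlike ZFS, Btrfs has no in-place `rollback`; you switch which subvolume is mounted as root, which is why the rollback takes two minutes and a reboot rather than seconds.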

2023: Compile Once, Deploy Everywhere

Building a binhost means I never compile the same package twice. 62 cores working while I sleep.

2024: Your Distro Is Yours

Argo OS exists because I wanted exactly the system I have. Not Ubuntu's choices. Not Arch's choices. Mine.


The Current Architecture

┌──────────────────────────────────────────────────────────────────┐
│                          TAILSCALE MESH                          │
├──────────────────────────────────────────────────────────────────┤
│                                                                  │
│  LOCAL (Milky Way)                REMOTE (Andromeda)             │
│  10.42.0.0/24                     192.168.x.0/24                 │
│                                                                  │
│  ┌────────────────────┐           ┌────────────────────┐         │
│  │ Arcturus-Prime     │           │ Tarn-Host          │         │
│  │ Proxmox            │           │ Proxmox            │         │
│  │ 15 containers      │           │ 8 containers       │         │
│  └─────────┬──────────┘           └─────────┬──────────┘         │
│            │                                │                    │
│  ┌─────────┴──────────┐           ┌─────────┴──────────┐         │
│  │ Rigel-Silo         │           │ Meridian-Mako-Silo │         │
│  │ (OFFLINE)          │           │ Unraid Archive     │         │
│  └────────────────────┘           └────────────────────┘         │
│                                                                  │
│  ┌────────────────────┐           ┌────────────────────┐         │
│  │ Capella-Outpost    │           │ Cassiel-Silo       │         │
│  │ Argo OS Desktop    │           │ Synology Backup    │         │
│  └────────────────────┘           └────────────────────┘         │
│                                                                  │
│  ┌────────────────────┐                                          │
│  │ Tau-Beta           │                                          │
│  │ Build drone        │                                          │
│  └────────────────────┘                                          │
│                                                                  │
└──────────────────────────────────────────────────────────────────┘

The Cost Analysis

Initial Investment (2016-2019 ESXi era):

  • R710: $400
  • RAM upgrades: $150
  • Drives: $800
  • Synology: $800
  • Total: ~$2,150

Current Investment (2019-present):

  • OptiPlex 7050: $200 (used)
  • OptiPlex 7040: $150 (used)
  • Various RAM: $300
  • Drives (accumulated): ~$2,000
  • Synology additions: $500
  • Unraid license: $130
  • Remote Synology: $600
  • Total: ~$3,880

Running costs:

  • Electricity: ~$30/month (down from $80 with the R710)
  • Tailscale: Free tier
  • Google Drive (backup): $10/month
  • Internet: Already paying for it

Total 12-year investment: ~$6,000 + ~$4,000 in electricity = $10,000

What I got:

  • Redundant storage array
  • Distributed compilation cluster
  • Custom Linux distribution
  • Skills that got me promoted twice
  • A hobby that never gets boring

Worth it.


What's Next?

Short term:

  • Finish Argo OS Part 5 (the apkg package manager)
  • Upgrade to 10GbE between local nodes
  • Add a dedicated GPU node for AI workloads

Long term:

  • Kubernetes? Maybe. LXC works well enough.
  • More remote sites (family members want in)
  • Better monitoring (Prometheus/Grafana stack)

Never:

  • Going back to Windows Server
  • Paying for VMware
  • Running without snapshots

The Philosophy

A homelab is a sandbox. It's where you break things safely. It's where you learn by doing.

Twelve years ago, I just wanted to download things faster. Now I'm running a custom Linux distribution across multiple sites with distributed compilation and mesh networking.

The path wasn't planned. Each problem led to a solution that created new problems. That's the fun.

If you're starting out: begin small. A Raspberry Pi. A used OptiPlex. One external drive. You don't need massive storage arrays on day one.

The lab grows with you.


This post is part of the infrastructure series. See also: Building Argo OS, The Build Swarm, and The Tailscale Revolution.