12 Years of Homelab Evolution: From Seedbox to 252TB

I’ve been running a homelab for over 12 years now.

What started as a Raspberry Pi running seedbox scripts has evolved into a multi-site infrastructure with 252TB of storage, distributed compilation clusters, and a custom Linux distribution. Along the way, I’ve broken things spectacularly, learned painful lessons, and accidentally built something that actually works.

This is the timeline.


The Eras

Era      Years      Hypervisor            Storage      Network    Defining Moment
-------  ---------  --------------------  -----------  ---------  --------------------------------
Seedbox  2012-2014  None                  USB drives   Flat       “It works!”
ESXi     2016-2019  VMware                iSCSI        VLANs      “Why is licensing so expensive?”
Proxmox  2019-2023  Proxmox VE            ZFS/Btrfs    Tailscale  “This is actually free?”
Argo OS  2023-Now   Proxmox + Bare Metal  Distributed  Mesh       “I should build my own distro.”

Era 1: The Seedbox Days (2012-2014)

The Setup:

  • Raspberry Pi Model B (256MB RAM)
  • USB 2.0 external drive
  • rtorrent + rutorrent
  • OpenVPN to access from work

What I learned: Linux can run on anything. SSH is magic. Automation beats manual work.

The Pi sat in a corner of my apartment, silently downloading. It cost $35 and used 2.5 watts. I could access it from anywhere via a clunky OpenVPN tunnel that dropped every 20 minutes.

It was terrible by modern standards. It was also the start of everything.

The failure that ended this era: The USB drive failed. No backups. Lost everything. The Pi itself was fine—it just had nothing to serve.

Lesson: Storage is not optional.


Era 2: The ESXi Years (2016-2019)

I got serious. Bought actual server hardware.

The Hardware:

  • Dell PowerEdge R710 (eBay, $400)
  • 32GB ECC RAM
  • iSCSI SAN (Synology DS1815+)
  • 8x 4TB drives in RAID 6

The Stack:

  • VMware ESXi 6.5
  • Windows Server 2016 (AD, DNS, DHCP)
  • Ubuntu VMs for everything else
  • VLANs for isolation (management, storage, DMZ)

This was “proper” infrastructure. I learned Active Directory, group policies, proper networking with VLANs, SAN protocols, and enterprise monitoring. I felt like a real sysadmin.

The costs:

  • vCenter license: $0 (evaluation mode, technically)
  • Time spent fighting vSphere: Infinite
  • Electricity bill: +$50/month (that R710 was hungry)

What I learned: Enterprise software is powerful and complex. VLANs are essential for security. SAN storage is rock solid but expensive. Windows Server exists (I don’t miss it).

The failure that ended this era: VMware announced changes to their free tier. The writing was on the wall—ESXi for homelabs was going to get harder. Plus, I was tired of working around licensing restrictions.

Also, the R710’s fans sounded like a jet engine. My wife threatened divorce.


Era 3: The Proxmox Migration (2019-2023)

The Revelation: Everything ESXi did, Proxmox did for free. With better Linux support. And no licensing anxiety.

The Migration:

Week 1: Set up Proxmox test node
Week 2: Migrated non-critical VMs
Week 3: Migrated critical VMs
Week 4: Decommissioned ESXi
Week 5: Sold the R710 on eBay

The New Hardware:

  • Dell OptiPlex 7050 SFF (silent, 35W idle)
  • 64GB DDR4 RAM
  • NVMe boot + SATA SSD storage
  • Same Synology NAS (kept the investment)

The Stack:

  • Proxmox VE 7.x → 8.x
  • LXC containers (lighter than VMs)
  • ZFS for local storage (snapshots!)
  • Tailscale for networking (goodbye OpenVPN)

The Game Changer: LXC Containers

VMs are heavy. Each one needs its own kernel, its own memory reservation, its own disk image.

LXC containers share the host kernel. They’re lighter, faster, and use fewer resources. For Linux workloads, they’re almost always the right choice.

Workload        VM        LXC        Winner
--------------  --------  ---------  ------
File server     2GB RAM   256MB RAM  LXC
Docker host     4GB RAM   1GB RAM    LXC
Windows         Required  N/A        VM
Untrusted code  Safer     Risky      VM

I went from running 8 VMs to running 3 VMs + 15 LXC containers. Same hardware. More services.
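
If you’re curious what spinning up one of those small containers looks like, here’s a minimal sketch wrapped in Python so it’s repeatable. The VMID, template filename, and storage pool name are placeholders; substitute your own:

  import subprocess

  def create_lxc(vmid: int, hostname: str, memory_mb: int = 256) -> None:
      """Create a small unprivileged LXC container via Proxmox's pct tool."""
      subprocess.run(
          [
              "pct", "create", str(vmid),
              # Placeholder template; list yours with `pveam list local`.
              "local:vztmpl/debian-12-standard_12.2-1_amd64.tar.zst",
              "--hostname", hostname,
              "--memory", str(memory_mb),  # 256MB is plenty for a file server
              "--rootfs", "local-zfs:8",   # 8GB root on the local ZFS pool
              "--net0", "name=eth0,bridge=vmbr0,ip=dhcp",
              "--unprivileged", "1",       # safer default for Linux workloads
          ],
          check=True,
      )

  create_lxc(115, "files")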

What I learned: Open source wins. Containers beat VMs for most workloads. ZFS snapshots are life-saving. Tailscale is magic.

The failure that almost killed me: A Proxmox upgrade broke ZFS. 16 hours of recovery. I now test upgrades in a VM first.


Era 4: The Argo OS Era (2023-Present)

At some point, “homelab” became “distributed infrastructure.”

The Current Inventory:

Hypervisors

Host            Hardware            Role               Location
--------------  ------------------  -----------------  -----------
Arcturus-Prime  Dell OptiPlex 7050  Primary Proxmox    Local
Altair-Link     Dell OptiPlex 7040  Secondary Proxmox  Local
Tarn-Host       HP ProDesk 600      Remote Proxmox     Remote site

Storage

Device                       Capacity   Type    Role
---------------------------  ---------  ------  --------------------
Synology DS1821+             64TB raw   NAS     Local backup, media
Unraid (Meridian-Mako-Silo)  120TB raw  NAS     Remote archive, Plex
Synology DS920+              32TB raw   NAS     Remote backup
Various NVMe                 ~6TB       Direct  VM storage

Total usable storage: ~252TB

Compute

Machine          CPU       RAM           Role
---------------  --------  ------------  ----------------------
Canopus-Outpost  i7-4790K  32GB          Daily driver (Argo OS)
Tau-Ceti-Lab     i7-4771   32GB          Build drone, testing
Various VMs      -         ~128GB total  Services

Network

The mesh: Tailscale connects everything. Local is 10.42.0.0/24. Remote is 192.168.x.0/24. Tailscale makes them feel like one network.

Key services:

  • Subnet routing from LXC containers
  • Exit nodes for mobile devices
  • ACLs for access control
  • Auto-approvers for route advertisement

Why Tailscale? I spent 16 months fighting port forwarding, dynamic DNS, and VPN tunnels that dropped. Tailscale just works. 38ms latency to remote sites. No holes in my firewall.
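
Monitoring the mesh is easy too: tailscale status --json reports the state of every peer, so a few lines of Python can flag anything that drops offline. A quick sketch (field names as of recent Tailscale releases; output formatting is mine):

  import json
  import subprocess

  # Ask the local tailscaled for the state of every peer in the tailnet.
  status = json.loads(
      subprocess.run(
          ["tailscale", "status", "--json"],
          capture_output=True, text=True, check=True,
      ).stdout
  )

  for peer in status["Peer"].values():
      state = "online" if peer["Online"] else "OFFLINE"
      print(f"{peer['HostName']:20} {state}")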


The Storage Philosophy

After losing data twice (USB drive in 2014, Proxmox ZFS scare in 2021), I’m paranoid about storage.

The 3-2-1 Rule

  • 3 copies of important data
  • 2 different storage types
  • 1 offsite

Implementation:

  1. Primary: Local NAS (Synology, RAID/SHR)
  2. Secondary: Remote NAS (replicated overnight)
  3. Tertiary: Cloud (encrypted, rclone to Google Drive; sketched below)
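
The tertiary leg is just a nightly rclone job. A minimal sketch, assuming you’ve already configured an rclone crypt remote; the remote name gdrive-crypt and the source path are placeholders:

  import subprocess
  from datetime import date

  SOURCE = "/volume1/backups"    # placeholder NAS export path
  DEST = "gdrive-crypt:homelab"  # placeholder crypt remote over Google Drive

  # Mirror the local backup set to the encrypted cloud remote.
  subprocess.run(
      [
          "rclone", "sync", SOURCE, DEST,
          "--transfers", "4",
          "--log-file", f"/var/log/rclone-{date.today()}.log",
      ],
      check=True,
  )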

Filesystem Choices

Use Case            Filesystem              Why
------------------  ----------------------  -----------------------------------
Proxmox VM storage  ZFS                     Snapshots, compression, reliability
NAS arrays          Btrfs/ext4 on Synology  It’s what Synology uses
Unraid              XFS + parity            Unraid architecture
Desktop (Argo OS)   Btrfs                   Snapshots for rollback

The Upgrade Cycle

I buy used enterprise drives. 8TB HGST drives are $40-60 on eBay. They have SMART data you can check, and they’ve survived a datacenter burn-in.

My rule: If a drive has >50,000 power-on hours, I only use it for non-critical data. If it has reallocated sectors, I don’t use it at all.
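
That rule is easy to automate with smartmontools. A sketch that applies both thresholds to a SATA drive using smartctl’s JSON output (smartmontools 7+; the device path is an example):

  import json
  import subprocess

  POWER_ON_HOURS_LIMIT = 50_000  # beyond this: non-critical data only

  def triage(device: str) -> str:
      """Classify a drive per the rule above."""
      report = json.loads(
          subprocess.run(
              ["smartctl", "-A", "--json", device],
              capture_output=True, text=True,
          ).stdout
      )
      attrs = {
          a["name"]: a["raw"]["value"]
          for a in report["ata_smart_attributes"]["table"]
      }
      if attrs.get("Reallocated_Sector_Ct", 0) > 0:
          return "reject"              # any reallocated sectors: don't use
      if attrs.get("Power_On_Hours", 0) > POWER_ON_HOURS_LIMIT:
          return "non-critical only"
      return "ok"

  print(triage("/dev/sda"))  # example device path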


The Compilation Problem

This is where Argo OS was born.

I run Gentoo on my desktop. Gentoo compiles everything from source. A full system update can take 48 hours on an i7-4790K.

The solution: Don’t compile on the desktop. Compile on everything else.

The Build Swarm

Drone         Cores  Location                   Role
------------  -----  -------------------------  ------------
drone-Izar    16     Local VM                   Primary
drone-Tarn    14     Remote VM                  Secondary
dr-mm2        24     Docker on Unraid           Heavy lifter
Tau-Ceti-Lab  8      Bare metal (Tau-Ceti-Lab)  Backup

Total: 62 cores for parallel compilation.

How it works (the scheduling step is sketched after the list):

  1. Orchestrator receives package list
  2. Distributes packages to drones based on availability
  3. Drones compile and upload binary packages
  4. My desktop downloads binaries instead of compiling
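
The scheduling itself is nothing fancy. Here’s a toy sketch of the “distribute by availability” step, greedily packing packages onto the least-loaded drone relative to its core count. Core counts come from the table above; the real orchestrator also handles dependency ordering:

  # Core counts per drone, from the build swarm table.
  DRONES = {"drone-Izar": 16, "drone-Tarn": 14, "dr-mm2": 24, "Tau-Ceti-Lab": 8}

  def distribute(packages: list[str]) -> dict[str, list[str]]:
      load = {d: 0 for d in DRONES}
      plan = {d: [] for d in DRONES}
      for pkg in packages:
          # Pick the drone with the lowest load relative to its capacity.
          target = min(DRONES, key=lambda d: load[d] / DRONES[d])
          plan[target].append(pkg)
          load[target] += 1
      return plan

  print(distribute(["firefox", "gcc", "llvm", "rust", "webkit-gtk"]))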

Result: Firefox update went from 45 minutes (compile) to 30 seconds (binary install).


The Lessons (Chronological)

2012: Backups Are Not Optional

Lost a USB drive. Lost everything on it. Never again.

2014: Redundancy Means Something

RAID is not backup. But RAID protects against drive failure. Both are necessary.

2016: Enterprise Gear Is Loud

The R710 was powerful. It was also 80dB at idle. Your family will complain.

2017: VLANs Are Essential

Without network segmentation, one compromised device can access everything. VLANs fixed that.

2018: Licensing Is a Trap

Free tiers disappear. Evaluation modes expire. Open source is forever.

2019: Containers Beat VMs

For Linux workloads, LXC uses 1/4 the resources of a full VM. Docker on LXC uses even less.

2020: Tailscale Changes Everything

16 months of VPN pain erased in one afternoon. Mesh networking is the future.

2021: Test Your Backups

I had backups. I’d never tested restoring them. The Proxmox ZFS scare taught me: an untested backup is not a backup.

2022: Snapshots Are Undo Buttons

Btrfs snapshots on my desktop mean I can break anything and rollback in 2 minutes. This changes how aggressively I can experiment.
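
The habit that makes this work: snapshot before you touch anything. A minimal sketch, assuming a /.snapshots subvolume already exists (adjust paths to your own layout):

  import subprocess
  from datetime import datetime

  # Read-only, timestamped snapshot of the root subvolume. Rolling back
  # is then a matter of booting into (or copying back from) the snapshot.
  stamp = datetime.now().strftime("%Y%m%d-%H%M%S")
  subprocess.run(
      ["btrfs", "subvolume", "snapshot", "-r", "/", f"/.snapshots/pre-{stamp}"],
      check=True,
  )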

2023: Compile Once, Deploy Everywhere

Building a binhost means I never compile the same package twice. 62 cores working while I sleep.

2024: Your Distro Is Yours

Argo OS exists because I wanted exactly the system I have. Not Ubuntu’s choices. Not Arch’s choices. Mine.


The Current Architecture

┌────────────────────────────────────────────────────────┐
│                     TAILSCALE MESH                     │
├────────────────────────────────────────────────────────┤
│                                                        │
│  LOCAL (Milky Way)             REMOTE (Andromeda)      │
│  10.42.0.0/24                  192.168.x.0/24          │
│                                                        │
│  ┌────────────────────┐        ┌────────────────────┐  │
│  │ Arcturus-Prime     │        │ Tarn-Host          │  │
│  │ Proxmox            │        │ Proxmox            │  │
│  │ 15 containers      │        │ 8 containers       │  │
│  └─────────┬──────────┘        └─────────┬──────────┘  │
│            │                             │             │
│  ┌─────────┴──────────┐        ┌─────────┴──────────┐  │
│  │ Rigel-Silo         │        │ Meridian-Mako-Silo │  │
│  │ Synology 64TB      │        │ Unraid 120TB       │  │
│  └────────────────────┘        └────────────────────┘  │
│                                                        │
│  ┌────────────────────┐        ┌────────────────────┐  │
│  │ Canopus-Outpost    │        │ Cassiel-Silo       │  │
│  │ Argo OS Desktop    │        │ Synology 32TB      │  │
│  └────────────────────┘        └────────────────────┘  │
│                                                        │
│  ┌────────────────────┐                                │
│  │ Tau-Ceti-Lab       │                                │
│  │ Build drone        │                                │
│  └────────────────────┘                                │
│                                                        │
└────────────────────────────────────────────────────────┘

The Cost Analysis

Initial Investment (2016-2019 ESXi era):

  • R710: $400
  • RAM upgrades: $150
  • Drives: $800
  • Synology: $800
  • Total: ~$2,150

Current Investment (2019-present):

  • OptiPlex 7050: $200 (used)
  • OptiPlex 7040: $150 (used)
  • Various RAM: $300
  • Drives (accumulated): ~$2,000
  • Synology additions: $500
  • Unraid license: $130
  • Remote Synology: $600
  • Total: ~$3,880

Running costs:

  • Electricity: ~$30/month (down from $80 with the R710)
  • Tailscale: Free tier
  • Google Drive (backup): $10/month
  • Internet: Already paying for it

Total 12-year investment: ~$6,000 + ~$4,000 in electricity = $10,000

What I got:

  • 252TB of redundant storage
  • Distributed compilation cluster
  • Custom Linux distribution
  • Skills that got me promoted twice
  • A hobby that never gets boring

Worth it.


What’s Next?

Short term:

  • Finish Argo OS Part 5 (the apkg package manager)
  • Upgrade to 10GbE between local nodes
  • Add a dedicated GPU node for AI workloads

Long term:

  • Kubernetes? Maybe. LXC works well enough.
  • More remote sites (family members want in)
  • Better monitoring (Prometheus/Grafana stack)

Never:

  • Going back to Windows Server
  • Paying for VMware
  • Running without snapshots

The Philosophy

A homelab is a sandbox. It’s where you break things safely. It’s where you learn by doing.

Twelve years ago, I just wanted to download things faster. Now I’m running a custom Linux distribution across multiple sites with distributed compilation and mesh networking.

The path wasn’t planned. Each problem led to a solution that created new problems. That’s the fun.

If you’re starting out: begin small. A Raspberry Pi. A used OptiPlex. One external drive. You don’t need 252TB on day one.

The lab grows with you.


This post is part of the infrastructure series. See also: Building Argo OS, The Build Swarm, and Mastering Tailscale.