Every infrastructure empire starts with someone else’s script. Mine started with a seedbox in a closet circa 2011, and a domain name that somehow stuck around for over a decade.
argobox.com. Originally it just pointed at a box. A single machine running ruTorrent and Plex, doing exactly two things: downloading and streaming. The kind of setup you build on a Saturday afternoon and then spend years maintaining because it works just well enough that you never have a reason to tear it down.
Until you do.
Phase 1: The Seedbox Script Era (~2011-2018)
I want to be honest about this part: I didn’t build anything. I ran someone else’s scripts. They promised to handle everything, and for a while, they did.
seedboxfromscratch was the first one. Did what it said on the tin. One script, a fresh Ubuntu install, and twenty minutes later you had ruTorrent, a web UI, and the illusion that you understood what you were running. I didn’t. I just knew it worked.
Then Quickbox. More polished, better dashboard. The kind of setup that made you feel professional because it had a status page. I remember installing it and thinking “this is what a real server looks like” — which tells you everything about my understanding of servers at the time.
Swizzin came next. Different maintainers, similar concept. I switched because Quickbox felt stale and switching felt like progress. It was lateral movement dressed up as improvement.
Then Saltbox — the Ansible-based evolution. Infrastructure-as-code vibes. Felt like the future. You ran a playbook and it configured everything. Clean, reproducible, professional.
Here’s the thing about all four of these:
argobox (bare metal, managed by $SCRIPT_DU_JOUR)
|- ruTorrent
|- Plex Media Server
|- Sonarr/Radarr/whatever-arr
|- The dashboard that made it feel official
+- Config files I was afraid to touch
Each script was someone else’s opinion about how a media server should work. Each one centralized everything on one box, one way, one vision. And each time something broke — a dependency update, a configuration conflict, an upgrade path that didn’t exist — I was stuck. I didn’t understand what the script had done, so I couldn’t fix what the script had broken.
After Saltbox, I made a decision that changed everything: no more centralized solutions. No more trusting someone else’s automation to manage my infrastructure. If something was going to break, it was going to be something I built, and I was going to know how to fix it.
That decision cost me years of comfortable ignorance. Worth it.
Phase 2: ESXi — Breaking Free (2018-ish)
VMware ESXi was my first taste of doing it myself. Virtualization meant I could experiment without risking the whole system. Break a VM, snapshot-restore, keep going. The safety net I never had with bare metal scripts.
ESXi Host
|- VM: Media Server (Plex, ruTorrent - MY config, not a script's)
|- VM: File Server
|- VM: Whatever experiment I was running
+- VM: The thing that broke last week (restored from snapshot)
No more monolithic scripts controlling everything. Each service got its own VM. Break one, the others survive. I was still learning — the configurations were ugly and the networking was held together with hope — but I was learning my way.
This was the first time I opened a config file and actually understood what was in it. Not because I’d read someone’s wiki about their script. Because I’d written it. Because I’d gotten it wrong five times and fixed it five times and now I knew what every line did.
The realization that I could build it myself hit different than I expected. It wasn’t triumphant. It was more like: “Wait, that’s all it is? I spent years being afraid to touch config files and this is what’s in them?”
All those scripts were just automating the obvious. Once you understand the obvious, the scripts are dead weight.
Phase 3: Distributed — Nothing Centralized
One hypervisor became two. A NAS appeared. Then another location entirely — Dad’s house, 40 miles away. Services spread out based on what made sense, not what one script decided.
Milky Way (Local)          Andromeda (Remote)
|- Arcturus-Prime          |- Tarn-Host
|- Altair-Link             |- Meridian-Mako-Silo
|- Rigel-Silo              +- Cassiel-Silo
+- various VMs
Nothing centralized. That was the philosophy. No single point of failure. No one script managing everything. Each system configured individually, for its specific purpose. If one died, the others kept running. If one needed maintenance, the others didn’t care.
The domain was the thread. argobox.com pointed at things. VPNs connected locations. But each node was independent, sovereign, responsible for its own existence.
This was the exact opposite of the seedbox script philosophy. Instead of “one box does everything,” it became “many boxes, each doing one thing well.” Unix philosophy applied to infrastructure. And it worked — messily, with more VPN tunnels than I’d like to admit, with configs scattered across machines and documented nowhere — but it worked.
The cost was complexity. Every service in its own place meant every service needed its own maintenance window, its own backup strategy, its own monitoring. I was the monitoring. If something was down, I’d notice when I tried to use it. Professional? No. But I knew exactly what was running and why, which is more than I could say during the Quickbox era.
August 2023: The Inflection Point
August 2023 wasn’t the beginning. It was the moment I stopped running and started thinking.
110 conversations in a single month. Not about fixing things that were broken. About architecture. About where this was all going. Questions I’d been putting off for years because the infrastructure worked well enough:
- What should a pentesting lab look like? (The security curiosity had been growing)
- How do you properly isolate VPN traffic? (My VPN setup was… optimistic)
- GPU passthrough for VMs — is it worth the headache? (Spoiler: yes, it's a headache, and yes, it's worth it)
- How does Obsidian work for knowledge management? (I’d been using text files. In 2023.)
- Security certifications — which ones matter for someone who builds infrastructure?
110 conversations. The questions went from tactical (“how do I configure this”) to strategic (“how should this be architected”). That shift matters. It’s the difference between maintaining infrastructure and building it.
Something had changed. Not the hardware — that had been evolving for years. My relationship with it changed. I stopped seeing the homelab as a collection of things and started seeing it as a system. A system I could design, not just react to.
That’s when I started writing things down. For the first time in twelve years of running argobox.com, I documented what I had and what I was building. Not for a blog audience. For myself. For the inevitable 2 AM debugging session when I’d need to remember why I made a specific decision.
Phase 4: Unified (Late 2023 Onward)
The question changed from “how do I run stuff” to “how do I make it all work together.”
The distributed phase taught me independence. Each node standalone, each service self-contained. But standalone nodes with no coordination are just chaos with good branding. The unified phase was about coordination without centralization. Each node still independent, but now they could talk to each other, share resources, operate as one system while maintaining their individual sovereignty.
Tailscale mesh networking replaced my tangled mess of point-to-point VPNs. One command and a node joins the mesh. 38ms to Dad's house, flat networking, like every machine is on the same LAN. Sixteen months of port-forwarding hell, gone.
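If you want to sanity-check a mesh like that, a few lines of Python will do it. The peer names below come from the diagram earlier; the 100.x addresses and the SSH-handshake trick are hypothetical stand-ins, a rough sketch rather than real monitoring.

# Toy reachability/latency check for mesh peers.
# Addresses are hypothetical placeholders, not real assignments.
import socket
import time

PEERS = {
    "tarn-host": "100.64.0.12",
    "cassiel-silo": "100.64.0.23",
}

def tcp_rtt(host: str, port: int = 22, timeout: float = 2.0) -> float | None:
    """Time a TCP handshake to the host; None if it doesn't answer."""
    start = time.monotonic()
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return time.monotonic() - start
    except OSError:
        return None

for name, addr in PEERS.items():
    rtt = tcp_rtt(addr)
    status = f"{rtt * 1000:.0f} ms" if rtt is not None else "unreachable"
    print(f"{name:14s} {addr:14s} {status}")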
Proxmox clusters replaced standalone ESXi hosts. Live migration, shared storage, high availability. The hypervisors became interchangeable rather than precious.
K3s brought container orchestration. Docker was the gateway drug; Kubernetes was the hard stuff. Declarative state, self-healing, rolling updates. My services stopped being things I managed and became things I declared.
GitOps meant configuration lived in repositories, not in my head. No more “I think I changed that setting on that machine last month.” The repo is the truth. Everything else is derived.
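The mechanics are easy to sketch. This isn't what the real tooling runs — just a toy loop showing the shape of the idea: desired state read from a git-tracked file, observed state from the cluster, converge the difference. The file name, services, and stub functions are all made up for illustration.

# Toy reconcile loop -- the shape of GitOps, not a real controller.
# "desired.json" stands in for manifests tracked in the repo; the stubs fake the cluster.
import json
import time
from pathlib import Path

DESIRED = Path("desired.json")   # e.g. {"plex": 1, "sonarr": 1} -- hypothetical

def observed_state() -> dict[str, int]:
    """Stub: a real controller would ask the cluster API what's actually running."""
    return {"plex": 1, "sonarr": 0}

def apply(service: str, replicas: int) -> None:
    """Stub: a real controller would create or scale the workload."""
    print(f"reconcile: {service} -> {replicas} replica(s)")

def reconcile() -> None:
    desired = json.loads(DESIRED.read_text())
    for service, replicas in desired.items():
        if observed_state().get(service) != replicas:
            apply(service, replicas)

if __name__ == "__main__":
    while True:              # the repo is the truth; keep converging toward it
        reconcile()
        time.sleep(60)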
The build swarm — 66 cores of distributed compilation across multiple machines, building Gentoo packages in parallel. Because at some point I decided that not only did I want to understand every package on my system, I wanted to compile them all from source. Across six machines. Simultaneously.
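For a rough sense of the fan-out, here's a naive Python sketch that hands each host a package and builds it over SSH. The host aliases and the emerge invocation are illustrative only; the actual swarm is not this script.

# Naive fan-out sketch: one package per host over SSH.
# Host aliases and the emerge invocation are illustrative, not the real swarm.
import subprocess
from concurrent.futures import ThreadPoolExecutor

HOSTS = ["arcturus-prime", "altair-link", "rigel-silo"]
PACKAGES = ["net-misc/curl", "app-editors/vim", "dev-vcs/git"]

def build(host: str, package: str) -> tuple[str, str, int]:
    """Build a single package on a remote host; return its exit code."""
    result = subprocess.run(["ssh", host, f"emerge --oneshot {package}"],
                            capture_output=True, text=True)
    return host, package, result.returncode

with ThreadPoolExecutor(max_workers=len(HOSTS)) as pool:
    jobs = [pool.submit(build, HOSTS[i % len(HOSTS)], pkg)
            for i, pkg in enumerate(PACKAGES)]
    for job in jobs:
        host, package, code = job.result()
        print(f"{host}: {package} -> exit {code}")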
And Gentoo itself. The final boss of Linux distributions. Not because it’s the best — because it forces you to understand everything. Every USE flag, every kernel option, every init script. There’s no “it just works” with Gentoo. There’s “I made it work and I know exactly how.”
What ArgoBox Means Now
The name stuck around for twelve years. The meaning didn’t.
2011: A box that hosts stuff, accessible via argobox.com. Singular noun. One machine, one purpose.
Now: An identity for a distributed infrastructure spanning multiple physical locations. Hypervisors, containers, and bare metal. Build infrastructure, media services, development environments. 66 cores of distributed Gentoo compilation. Custom OS builds. A mesh network that makes 40 miles feel like a crossover cable.
The domain is still the common denominator. But it represents something coherent now. Not a collection of things — a system with intention.
The Progression
seedboxfromscratch taught me what I wanted. Someone else’s script, but it showed me that self-hosting was possible and that I liked it.
Quickbox taught me that dashboards are cosmetic. A pretty status page doesn’t mean you understand your system.
Swizzin taught me that lateral moves aren’t progress. Switching scripts isn’t learning.
Saltbox taught me that even good automation is someone else’s opinion. Ansible playbooks are great — until you need something the playbook author didn’t anticipate.
ESXi taught me virtualization. Isolation matters. Snapshots save lives. And you can learn faster when mistakes aren’t permanent.
Going distributed taught me that centralization was the problem all along. Not because centralization is inherently bad — because centralization was hiding my lack of understanding behind someone else’s abstractions.
Going unified taught me that independence and coordination aren’t opposites. You can have sovereign nodes that work together. You can have a mesh without a single point of failure. You can have infrastructure-as-code without a monolithic script.
The Pattern
Homelabs don’t start with architecture diagrams and VLANs. They start with “I want to watch my movies” or “I want to download faster.” The sophistication comes later. After you’ve outgrown the scripts that got you started. After you’ve realized that someone else’s “perfect setup” isn’t yours. After you’ve decided that understanding your infrastructure matters more than convenience.
The progression isn’t failure. It’s education. Every script taught me something. Every broken config taught me more. And when I finally went independent, I knew why I was making every decision, because I’d already seen what happens when someone else makes them for you.
ArgoBox started as a seedbox managed by someone else’s scripts. It became a distributed system where nothing is centralized because I chose it that way — not because a playbook decided.
And in August 2023, after twelve years of building, I finally started writing it down. Because the infrastructure had outgrown my memory, and because somewhere between the seedbox and the build swarm, this stopped being a hobby and became the thing I’m actually good at.
Better late than never. But I wish I’d started sooner.