
Build Drones

The 62-core distributed build fleet powering Argo OS binary packages

February 23, 2026

The build swarm’s muscle is its fleet of four drones — 62 cores spread across two networks, three hypervisors, and one bare metal box. Every drone runs Gentoo with binary package support, and they coordinate over Tailscale to build packages for Argo OS without the driver workstation ever touching a compiler.

Fleet Overview

Drone             Cores  Host Type                Local IP        Tailscale IP      Location
drone-Izar-Host   16     Gentoo VM on Izar-Host   10.42.0.203     100.64.0.101.126  Milky Way / Proxmox Izar-Host
Tau-Host          8      Gentoo bare metal        10.42.0.194     100.64.0.64.125   Milky Way / Bare metal
drone-Tarn        14     LXC on Tarn-Host         192.168.20.196  100.64.0.27.91    Andromeda / Proxmox Tarn-Host
dr-Meridian-Host  24     Docker on Meridian-Host  —               100.64.0.57.110   Andromeda / Unraid Meridian-Host

Total fleet capacity: 62 cores across 4 drones on 2 networks.

The split matters. drone-Izar-Host and Tau-Host sit on the Milky Way (10.42.0.x), close to the orchestrator and gateway with sub-millisecond latency. drone-Tarn and dr-Meridian-Host live on the Andromeda (192.168.20.x) at a remote site, connected over Tailscale with ~38ms round trip. The orchestrator accounts for this latency when distributing work — local drones get priority for small, fast builds while remote drones handle the heavyweight compilations where transfer time is negligible compared to compile time.
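The placement rule above can be sketched in a few lines. This is a minimal illustration, not the orchestrator's actual code — the drone names are real, but the field names, the 300-second cutoff, and the RTT figures are assumptions for the example:

```python
# Hypothetical sketch: quick builds prefer low-latency local drones;
# long builds chase free cores, since RTT is noise next to compile time.

def pick_drone(drones, est_compile_secs, small_build_cutoff=300):
    """drones: list of dicts with 'name', 'rtt_ms', 'free_cores' keys."""
    candidates = [d for d in drones if d["free_cores"] > 0]
    if est_compile_secs < small_build_cutoff:
        # Small, fast builds: lowest round-trip first, then most free cores.
        candidates.sort(key=lambda d: (d["rtt_ms"], -d["free_cores"]))
    else:
        # Heavyweight builds: raw core availability wins regardless of RTT.
        candidates.sort(key=lambda d: -d["free_cores"])
    return candidates[0]["name"] if candidates else None

fleet = [
    {"name": "drone-Izar-Host", "rtt_ms": 0.3, "free_cores": 16},
    {"name": "dr-Meridian-Host", "rtt_ms": 38.0, "free_cores": 24},
]
```

A one-minute build lands on drone-Izar-Host; an hour-long build goes to dr-Meridian-Host even with its ~38ms path.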

Drone Profiles

drone-Izar-Host (16 cores)

The workhorse. drone-Izar-Host runs as a Gentoo VM on the Izar-Host Proxmox hypervisor at 10.42.0.203. Sixteen cores makes it the second-largest drone in the fleet, and its position on the Milky Way means it has direct LAN access to the orchestrator (10.42.0.201) and the binhost.

drone-Izar-Host handles the bulk of standard package builds. It’s allocated enough resources that most packages compile in minutes, and the VM can be snapshotted on the Proxmox host if something goes sideways during a risky world update.

Key details:

  • Hypervisor: Proxmox on Izar-Host
  • Network: Milky Way (10.42.0.x), direct LAN to orchestrator
  • Tailscale: 100.64.0.101.126 (fallback path)
  • Primary role: General-purpose package building
  • Binary packages land on the binhost at 10.42.0.194

Tau-Host (8 cores)

The original. Tau-Host is bare metal Gentoo — no hypervisor, no container, just a machine running Gentoo with 8 cores. It lives at 10.42.0.194 on the Milky Way, and it doubles as the binhost server. When drones finish building packages, the binaries end up here.

Eight cores is the smallest allocation in the fleet, but Tau-Host punches above its weight because there’s no virtualization overhead. Bare metal compile times are consistently 10-15% faster per core than the VM-based drones. It also serves as the fallback build target when the larger drones are busy — the orchestrator will route single-package builds to Tau-Host rather than waiting for a slot on drone-Izar-Host.

Key details:

  • Host: Bare metal Gentoo
  • Network: Milky Way (10.42.0.x), also serves as binhost
  • Tailscale: 100.64.0.64.125 (fallback path)
  • Dual role: Build drone + binary package host
  • No virtualization overhead — fastest per-core performance

drone-Tarn (14 cores)

The remote heavy. drone-Tarn runs as an LXC container on the Tarn-Host Proxmox hypervisor at 192.168.20.196 on the Andromeda. Fourteen cores gives it solid throughput for medium-to-large packages, and LXC containers have near-native performance since there’s no hardware emulation layer.

Being on the Andromeda means drone-Tarn communicates with the orchestrator exclusively over Tailscale (100.64.0.27.91). The ~38ms latency is irrelevant for build operations — a package that takes 20 minutes to compile doesn’t care about an extra 38ms on the job assignment. Where it matters is binary package transfer back to the Milky Way binhost, which the swarm handles by batching completed packages and syncing them in bulk rather than one-at-a-time.

Key details:

  • Host: LXC container on Proxmox Tarn-Host
  • Network: Andromeda (192.168.20.x), Tailscale-only to orchestrator
  • Tailscale: 100.64.0.27.91
  • Near-native LXC performance, no hardware emulation
  • Bulk package sync back to Milky Way binhost
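The batching behavior is easy to picture as a small accumulator. A hedged sketch — the class name, threshold, and in-memory stand-in for the transfer are all invented for illustration; the real swarm would shell out to something like rsync over Tailscale:

```python
# Hypothetical sketch of bulk package sync: accumulate finished binaries
# and flush them to the binhost in one transfer, not one per package.

class PackageBatcher:
    def __init__(self, flush_threshold=10):
        self.flush_threshold = flush_threshold
        self.pending = []   # binaries built but not yet shipped
        self.synced = []    # stands in for files landed on the binhost

    def add(self, pkg_path):
        self.pending.append(pkg_path)
        if len(self.pending) >= self.flush_threshold:
            self.flush()

    def flush(self):
        # A real implementation would batch-transfer to 10.42.0.194 here.
        self.synced.extend(self.pending)
        self.pending.clear()
```

With a threshold of ten, a drone finishing fifteen packages pays for two transfers instead of fifteen round trips across the ~38ms link.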

dr-Meridian-Host (24 cores)

The big gun. dr-Meridian-Host runs as a Docker container on the Meridian-Host Unraid server and brings 24 cores to the table — the single largest allocation in the fleet. It has no local network IP exposed to the swarm; all communication goes through Tailscale at 100.64.0.57.110.

Twenty-four cores means dr-Meridian-Host gets the heavy jobs: GCC bootstraps, kernel builds, Chromium, Firefox, anything that would take an hour on a smaller drone. The orchestrator’s queue logic routes packages with high estimated compile times to dr-Meridian-Host first, letting the smaller drones stay responsive for the quick builds.

Running on Docker instead of LXC or bare metal adds a thin abstraction layer, but the Unraid host has enough raw power that it doesn’t matter in practice. The Docker container is configured with Gentoo’s full toolchain, matching the same profile and USE flags as every other drone in the fleet.

Key details:

  • Host: Docker container on Unraid Meridian-Host
  • Network: Tailscale-only (100.64.0.57.110), no local IP in swarm
  • Largest drone: 24 cores, handles heavyweight compilations
  • Queue priority: Gets high-estimate-time packages first
  • Full Gentoo toolchain in Docker, matching fleet profile

Gentoo Configuration

Every drone in the fleet runs Gentoo with an identical Portage configuration. This is non-negotiable — if a package builds on drone-Izar-Host but fails on dr-Meridian-Host, the swarm has a consistency problem that breaks the entire binary package model.

Shared Configuration

All drones share:

  • Profile: Same Gentoo profile (desktop/plasma, typically)
  • USE flags: Synchronized across all drones via a managed make.conf
  • ACCEPT_KEYWORDS: Identical keyword settings (~amd64 where needed)
  • Package mask/unmask: Same package.mask and package.unmask across fleet
  • Binary package support: FEATURES="buildpkg" — every successful build produces a .tbz2 binary

The orchestrator enforces configuration consistency. When a drone registers, it reports its profile, key USE flags, and ACCEPT_KEYWORDS. If anything drifts from the reference configuration, the orchestrator flags the drone as misconfigured and pulls it from the build queue until it’s fixed.
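One plausible shape for that configuration fingerprint is a stable hash over the normalized settings. This is a sketch under assumptions — the actual fingerprint scheme and field names aren't documented here:

```python
import hashlib
import json

def config_fingerprint(profile, use_flags, accept_keywords):
    # Sort the flag lists so the hash is stable regardless of the
    # ordering in make.conf; two drones with the same effective config
    # must produce the same fingerprint.
    canonical = json.dumps({
        "profile": profile,
        "use": sorted(use_flags),
        "accept_keywords": sorted(accept_keywords),
    }, sort_keys=True)
    return hashlib.sha256(canonical.encode()).hexdigest()

ref = config_fingerprint("default/linux/amd64/desktop/plasma",
                         ["X", "wayland"], ["~amd64"])
drifted = config_fingerprint("default/linux/amd64/desktop/plasma",
                             ["X"], ["~amd64"])
```

Any drift — here a dropped USE flag — changes the digest, which is all the orchestrator needs to pull the drone from the queue.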

Binary Package Flow

  1. Orchestrator assigns package to drone
  2. Drone runs emerge --buildpkg <package>
  3. Successful build produces binary in ${PKGDIR}
  4. Binary is synced to the binhost at 10.42.0.194
  5. Driver (Capella-Outpost at 10.42.0.100) installs with emerge --usepkg --usepkgonly

The driver machine (Capella-Outpost) never compiles. It pulls binaries exclusively. If a binary isn’t available, the install fails rather than falling back to source compilation. This is by design — if a package isn’t in the binhost, it means the swarm hasn’t built it yet, and you should submit a build request rather than compiling locally on your workstation.
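That binary-or-nothing policy reduces to a single branch. A sketch with invented names (`install_or_request`, the atom-set index) standing in for whatever the driver actually runs:

```python
def install_or_request(pkg, binhost_index):
    """Binary-only policy for the driver: install from the binhost or
    queue a build request for the swarm — never compile locally.
    binhost_index: set of package atoms available on the binhost."""
    if pkg in binhost_index:
        return ("install", f"emerge --usepkg --usepkgonly {pkg}")
    # Not built yet: fail the install and hand the atom to the swarm.
    return ("build-request", pkg)
```

The interesting part is the absence of a third branch: there is deliberately no "fall back to source" path on the workstation.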

Drone Registration

When a drone comes online, it follows a registration handshake with the orchestrator:

  1. Announce: Drone sends a registration payload to the orchestrator (10.42.0.201:8080) containing its hostname, core count, Tailscale IP, and configuration fingerprint
  2. Validate: Orchestrator checks the configuration fingerprint against the reference. Mismatches trigger a warning and the drone is placed in degraded state
  3. Accept: If validation passes, the drone enters the ready pool and starts receiving build jobs
  4. Heartbeat: Every 30 seconds, the drone sends a health check. Three missed heartbeats and the orchestrator marks the drone as offline and redistributes its queued jobs

Drones that go offline mid-build don’t lose work. The orchestrator tracks which packages were assigned and re-queues them when the drone drops out. The next available drone picks up the job. This is especially important for the Andromeda drones — if Meridian-Host goes down (which happens when someone at the remote site restarts the Unraid server), the builds get redistributed to the remaining fleet automatically.
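The heartbeat arithmetic from step 4 can be written down directly. The 30-second interval and three-miss limit come from the text; the function shape is an illustrative assumption:

```python
HEARTBEAT_INTERVAL = 30   # seconds between drone check-ins
MISSED_LIMIT = 3          # misses before the drone is marked offline

def drone_status(last_heartbeat, now):
    """Return 'offline' once three intervals pass without a check-in,
    i.e. 90 seconds after the last heartbeat."""
    missed = int((now - last_heartbeat) // HEARTBEAT_INTERVAL)
    return "offline" if missed >= MISSED_LIMIT else "ready"
```

A drone 60 seconds quiet has only missed two beats and stays in the pool; at 90 seconds it crosses the threshold and its queued jobs get redistributed.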

Health Monitoring

The orchestrator maintains a real-time health dashboard for every drone:

  • Status: ready, building, degraded, offline
  • Current job: Which package is being built (if any)
  • Queue depth: How many packages are waiting for this drone
  • Core utilization: Reported by the drone’s heartbeat
  • Last heartbeat: Timestamp of most recent check-in
  • Build history: Success/failure rate over the last 24 hours

Health data is exposed through the orchestrator’s API at http://10.42.0.201:8080/api/drones and consumed by the Command Center dashboard at status.Arcturus-Prime.com for public-facing status (sanitized through the Galactic Identity System, of course).
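A consumer of that endpoint might reduce the per-drone records to a fleet summary like the one shown in the overview. The field names below are guesses at the API's payload shape, not its documented schema:

```python
# Hypothetical consumer of the /api/drones payload: count only drones in
# an active state ('ready' or 'building') toward available capacity.

def fleet_summary(drones):
    """drones: list of {'name', 'status', 'cores'} dicts."""
    active = [d for d in drones if d["status"] in ("ready", "building")]
    return {
        "online_drones": len(active),
        "available_cores": sum(d["cores"] for d in active),
    }

snapshot = [
    {"name": "drone-Izar-Host", "status": "ready", "cores": 16},
    {"name": "Tau-Host", "status": "building", "cores": 8},
    {"name": "drone-Tarn", "status": "ready", "cores": 14},
    {"name": "dr-Meridian-Host", "status": "offline", "cores": 24},
]
```

With dr-Meridian-Host down — the failure mode called out above — the fleet drops from 62 usable cores to 38.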

Cross-Network Coordination

The two-network topology is the defining challenge of the build swarm. Milky Way and Andromeda are physically separate networks connected by Tailscale, and the swarm has to work seamlessly across both.

How Tailscale Bridges the Gap

Every drone and the orchestrator run Tailscale. The orchestrator at 10.42.0.201 has a Tailscale address, and the Andromeda drones (drone-Tarn at 100.64.0.27.91, dr-Meridian-Host at 100.64.0.57.110) are reachable only through their Tailscale IPs from the Milky Way side.

The gateway at 10.42.0.199 handles the routing logic. When a build request comes in, the gateway forwards it to the active orchestrator. The orchestrator then assigns packages to drones using whichever IP path works — local IP for Milky Way drones, Tailscale IP for Andromeda drones. The drone doesn’t care which path was used to reach it; it just builds the package and reports back.

Build Queue Distribution

The orchestrator distributes builds based on available cores and current load:

  1. Core-weighted assignment: A 24-core drone gets proportionally more packages than an 8-core drone
  2. Network-aware batching: Large builds go to Andromeda drones (where transfer latency is offset by compile time), small builds stay on Milky Way
  3. Dependency ordering: Packages with dependencies that are already built get priority — no drone should be blocked waiting for another drone’s output
  4. Failover redistribution: If a drone drops, its pending builds are redistributed within 90 seconds (3 missed heartbeats × 30 seconds)
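Rule 1, core-weighted assignment, can be sketched as simple proportional division. The drone names and core counts are the fleet's; the rounding strategy (hand the integer-division remainder to the largest drone) is an assumption for the example:

```python
# Hypothetical sketch of core-weighted assignment: each drone's share of
# the queue is proportional to its core count (62 cores fleet-wide).

def weighted_shares(cores_by_drone, queue_len):
    total = sum(cores_by_drone.values())
    shares = {name: (cores * queue_len) // total
              for name, cores in cores_by_drone.items()}
    # Integer division leaves a remainder; give it to the largest drone.
    leftover = queue_len - sum(shares.values())
    if leftover:
        biggest = max(cores_by_drone, key=cores_by_drone.get)
        shares[biggest] += leftover
    return shares

fleet_cores = {"drone-Izar-Host": 16, "Tau-Host": 8,
               "drone-Tarn": 14, "dr-Meridian-Host": 24}
```

For a 200-package world update, dr-Meridian-Host's 24 cores earn it the largest slice of the queue, while Tau-Host's 8 cores keep it lightly loaded for quick fallback builds.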

The result is a system that keeps all 62 cores busy during a full world update. A typical @world rebuild with 200+ packages completes in a fraction of the time it would take on any single machine, and the driver workstation never has to wait for a compilation — just pull the binaries and go.

Tags: build-swarm, drones, gentoo, distributed-builds