Admin Modules

Infrastructure Tools

Homelab service registry, build swarm fleet management, Proxmox console, server management, and terminal access in Arcturus-Prime admin

February 23, 2026

The Arcturus-Prime admin panel provides direct management interfaces for the underlying homelab infrastructure. These tools cover service discovery, build fleet orchestration, virtualization management, server administration, and remote terminal access. Each tool communicates with the physical infrastructure through dedicated API proxy endpoints.

Homelab (/admin/homelab)

The homelab page at /admin/homelab is the service registry and credential manager for every service running in the homelab. It provides a single pane of glass for discovering what is running, where it is running, and how to access it.

Service Registry

Services are organized by host. The primary hosts in the Arcturus-Prime homelab span two physical sites connected via Tailscale:

Milky Way Site (Local — 10.42.0.0/24)

Altair-Link (10.42.0.199) — the primary services host, running the Docker containers for the Arcturus-Prime ecosystem.

Proxmox Izar-Host (10.42.0.2) — primary Proxmox VE hypervisor running VMs and LXC containers:

  • Proxmox VE management interface (https://10.42.0.2:8006)
  • Build drone LXC — drone-Izar-Host (10.42.0.203, 16 cores, 11GB RAM)
  • Orchestrator LXC — orch-Izar-Host (10.42.0.201, port 8091)
  • Lab engine LXC (10.42.0.210, port 8094)
  • Sweeper LXC — sweeper-Capella (8 cores, 31GB RAM)

Bare Metal Tau-Host (10.42.0.194) — bare metal host running build drone:

  • Build drone LXC — drone-Tau-Host (10.42.0.175, 8 cores, 31GB RAM)

Capella-Outpost (10.42.0.100) — desktop workstation with GPU.

Andromeda Site (Remote — 192.168.20.0/24, via Tailscale)

Proxmox Tarn-Host (192.168.20.100, Tailscale 100.64.0.16.118) — secondary Proxmox hypervisor at the remote site:

  • Proxmox VE management interface (https://192.168.20.100:8006)
  • Orchestrator LXC — orch-Tarn-Host (CT 102, Tailscale 100.64.0.118)
  • Build drone LXC — drone-Tarn (CT 103, 14 cores, 12GB RAM, Tailscale 100.64.0.91)

Meridian-Host (192.168.20.50, Tailscale 100.64.0.15.30) — Unraid NAS and media server:

  • Unraid web UI (http://192.168.20.50)
  • Plex Media Server (http://192.168.20.50:32400)
  • Meridian-Host admin API (port 8888)
  • Build drone VM — drone-Meridian-Host (QEMU, 20 cores, 52GB RAM, Tailscale 100.64.0.110)
  • Syncthing, Nextcloud, backups storage

Credential Management

Each service entry in the registry can store associated credentials (username, password, API tokens, SSH keys). Credentials are encrypted at rest and decrypted only when displayed in the admin panel. The credential viewer requires a secondary authentication step (re-enter password) before revealing sensitive values. Credentials can be copied to clipboard with a single click and are masked by default.
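The mask-by-default behavior can be sketched with a small helper. The names here (`mask_credential`, `CredentialEntry`) are illustrative, not the panel's actual implementation:

```python
from dataclasses import dataclass

def mask_credential(value: str, visible: int = 2) -> str:
    """Mask a secret, keeping only the first `visible` characters."""
    if len(value) <= visible:
        return "•" * len(value)
    return value[:visible] + "•" * (len(value) - visible)

@dataclass
class CredentialEntry:
    username: str
    secret: str

    def display(self, reauthenticated: bool = False) -> str:
        # Secrets stay masked unless the viewer has completed the
        # secondary authentication (password re-entry) step.
        return self.secret if reauthenticated else mask_credential(self.secret)
```

The key point is that the unmasked value only ever appears after the re-authentication gate; the copy-to-clipboard action can use the decrypted value without rendering it on screen.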

Service Health

The homelab page polls each registered service at configurable intervals (default: 60 seconds) and displays a status indicator:

  • Green — service responded with HTTP 2xx or successful TCP connection
  • Yellow — service responded but with degraded performance (response time > 2 seconds)
  • Red — service did not respond within the timeout period
  • Gray — health checking is disabled for this service

Health history is retained for 24 hours and displayed as a sparkline next to each service entry.

Build Swarm (/admin/build-swarm, /admin/build)

The build swarm interface manages a distributed build fleet spanning 5 drones across 2 sites, totaling 66 cores and 137GB RAM.

Fleet Overview

| Drone | Host | IP / Tailscale | Cores | RAM | Type |
| --- | --- | --- | --- | --- | --- |
| drone-Izar-Host | Proxmox Izar-Host | 10.42.0.203 | 16 | 11GB | LXC |
| drone-Tau-Host | Bare Metal Tau-Host | 10.42.0.175 | 8 | 31GB | LXC |
| sweeper-Capella | Proxmox Izar-Host | — | 8 | 31GB | LXC |
| drone-Tarn | Proxmox Tarn-Host | 100.64.0.91 | 14 | 12GB | LXC |
| drone-Meridian-Host | Meridian-Host | 100.64.0.110 | 20 | 52GB | QEMU VM |

Fleet Management (/admin/build-swarm)

The fleet management view shows all registered build drones with their current status:

  • Drone name — identifier matching the fleet table
  • Host — the hypervisor running the drone
  • Status — idle, building, error, or offline
  • Current job — if building, shows the package name and progress
  • Queue depth — number of packages waiting for this drone
  • Capabilities — architecture and USE flag support

Administrators can drain a drone (finish current job then stop accepting new ones), force-kill stuck builds, and view build history per drone.
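The drain semantics — finish the current job, accept nothing new — amount to a small state machine. A minimal sketch (class and state names are hypothetical):

```python
from enum import Enum

class DroneState(Enum):
    IDLE = "idle"
    BUILDING = "building"
    DRAINING = "draining"
    OFFLINE = "offline"

class Drone:
    def __init__(self, name: str):
        self.name = name
        self.state = DroneState.IDLE
        self.queue: list[str] = []

    def drain(self) -> None:
        # A building drone finishes its current job first; an idle
        # drone can go offline immediately.
        if self.state is DroneState.BUILDING:
            self.state = DroneState.DRAINING
        else:
            self.state = DroneState.OFFLINE

    def accept(self, package: str) -> bool:
        """Queue a package unless the drone is draining or offline."""
        if self.state in (DroneState.DRAINING, DroneState.OFFLINE):
            return False
        self.queue.append(package)
        return True
```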

Build Pipeline (/admin/build)

The build pipeline view manages individual build jobs:

  • Trigger build — manually trigger a build for any configured package
  • Build history — list of past builds with status, duration, and package info
  • Build logs — real-time streaming build output via SSE
  • Artifact browser — browse and download compiled binary packages from the gateway binhost

The build pipeline integrates with Gitea webhooks to automatically trigger builds on push to configured branches. Build status is reported back to Gitea as commit statuses.
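The branch filter for incoming webhooks can be sketched as follows. Gitea push payloads carry the branch in a `ref` field shaped like `refs/heads/<branch>`; the function name and the configuration shape are assumptions:

```python
def should_trigger_build(payload: dict, configured_branches: set[str]) -> bool:
    """Decide whether a Gitea push webhook should start a build."""
    ref = payload.get("ref", "")
    prefix = "refs/heads/"
    if not ref.startswith(prefix):
        return False  # tag pushes and other refs are ignored
    return ref[len(prefix):] in configured_branches
```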

API Routes

The swarm exposes two API generations:

| Version | Route | Backend | Purpose |
| --- | --- | --- | --- |
| v4 | /api/gateway | Altair-Link:8090 | Unified gateway — build submission, status, binhost |
| v4 | /api/command | Altair-Link:8093 | Command center — system status, management |
| v3 | /api/swarm | Altair-Link:8090 | Original swarm API |
| v3 | /api/swarm-admin | Altair-Link:8093 | Direct orchestrator admin |

Proxmox Console (/admin/proxmox)

The Proxmox console at /admin/proxmox provides browser-based management of virtual machines and containers running on the Proxmox VE hypervisors. The primary target is Proxmox Izar-Host at 10.42.0.2.

VM/CT Browser

The left panel displays a tree view of all VMs and containers on the Proxmox cluster, organized by node. Each entry shows:

  • VMID — the numeric Proxmox identifier
  • Name — descriptive name
  • Type — VM (QEMU) or CT (LXC container)
  • Status — running, stopped, or paused
  • Resource usage — CPU percentage, memory usage, disk usage

Clicking an entry opens the detail panel with start/stop/restart controls, configuration viewer, and access to the console.

VNC Embed (noVNC)

For full virtual machines, the console provides a noVNC viewer embedded directly in the admin panel. The noVNC client connects to the Proxmox VNC proxy endpoint at wss://10.42.0.2:8006/api2/json/nodes/{node}/qemu/{vmid}/vncwebsocket, providing a full graphical console to the VM without leaving the browser.

The noVNC integration supports:

  • Full keyboard and mouse passthrough
  • Clipboard sync (copy/paste between local machine and VM)
  • Display scaling to fit the browser panel
  • Connection quality indicator
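Building the websocket URL for the noVNC client can be sketched like this. The `port` and `ticket` values come from a prior call to the Proxmox vncproxy endpoint, and the ticket must be URL-encoded; the function name and defaults are illustrative:

```python
from urllib.parse import quote

def vnc_websocket_url(node: str, vmid: int, port: int, ticket: str,
                      host: str = "10.42.0.2") -> str:
    """Build the Proxmox noVNC websocket URL for a QEMU VM.

    `port` and `ticket` are returned by a POST to the vncproxy
    endpoint; the ticket contains characters that must be escaped.
    """
    return (
        f"wss://{host}:8006/api2/json/nodes/{node}"
        f"/qemu/{vmid}/vncwebsocket"
        f"?port={port}&vncticket={quote(ticket, safe='')}"
    )
```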

Terminal Embed (xterm.js)

For LXC containers, the console provides an xterm.js terminal emulator that connects to the container’s shell via the Proxmox terminal websocket API. This gives a native terminal experience in the browser with:

  • Full ANSI color support
  • Terminal resizing (adapts to panel dimensions)
  • Scrollback buffer (configurable, default 10,000 lines)
  • Search within terminal output
  • Copy/paste support

The terminal connects through the Proxmox API at https://10.42.0.2:8006/api2/json/nodes/{node}/lxc/{vmid}/vncproxy and establishes a websocket connection for real-time I/O.

Servers (/admin/servers)

The servers page at /admin/servers provides a high-level server management interface for the homelab hosts across both sites. Unlike the homelab page, which focuses on services, the servers page focuses on the physical and virtual hardware.

Each server card displays:

  • System info — hostname, OS, kernel version, uptime
  • CPU — model, core count, current utilization graph
  • Memory — total, used, available, with historical chart
  • Storage — mount points, capacity, usage per volume
  • Network — interface list, IP addresses, current throughput
  • Temperature — CPU and drive temperatures (where available via IPMI/sensors)

Data is collected by lightweight monitoring agents on each host and reported through the /api/services endpoint. The refresh interval is 10 seconds for real-time monitoring.
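A host agent's report might look like the stdlib-only sketch below. This is a minimal illustration of the payload shape, not the real agent; an actual agent would also gather CPU utilization, network throughput, and sensor temperatures:

```python
import os
import shutil
import socket
import time

def collect_metrics() -> dict:
    """Gather basic fields a host agent could report to /api/services."""
    disk = shutil.disk_usage("/")
    return {
        "hostname": socket.gethostname(),
        "kernel": os.uname().release if hasattr(os, "uname") else "unknown",
        "timestamp": int(time.time()),
        "storage": {
            "mount": "/",
            "total_bytes": disk.total,
            "used_bytes": disk.used,
        },
    }
```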

API Proxies

Server management uses two dedicated API proxies:

  • /api/Tarn-Host-adminbox — proxies requests to the Proxmox Tarn-Host management API at 192.168.20.100 via Tailscale. Handles Proxmox API calls and container orchestration. Auth: TITAN_ADMINBOX_TOKEN injected as Authorization: Bearer.
  • /api/mm-Arcturus-Prime — proxies requests to the Meridian-Host admin API at 192.168.20.50 via Tailscale (port 8888). Handles Unraid API calls, disk pool management, and Docker container management. Auth: MM_ARGOBOX_TOKEN injected as Authorization: Bearer.

Both proxies inject authentication headers server-side so tokens never reach the browser.
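The server-side injection can be sketched as a pure header-builder keyed on the proxy route. The env-var names come from the list above; the function itself is hypothetical:

```python
import os

# Route -> environment variable holding the bearer token (per the
# proxy descriptions above); tokens never leave the server.
TOKEN_ENV = {
    "/api/Tarn-Host-adminbox": "TITAN_ADMINBOX_TOKEN",
    "/api/mm-Arcturus-Prime": "MM_ARGOBOX_TOKEN",
}

def upstream_headers(route: str, env=None) -> dict:
    """Build the Authorization header injected for an upstream request."""
    env = os.environ if env is None else env
    token = env.get(TOKEN_ENV[route], "")
    return {"Authorization": f"Bearer {token}"}
```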

MM Terminal (/admin/mm-terminal)

The MM Terminal at /admin/mm-terminal provides direct terminal access to the Meridian-Host Unraid server at 192.168.20.50 (Tailscale 100.64.0.15.30). This is an xterm.js terminal emulator that connects via websocket to a shell session on the Unraid host.

Use cases for the MM Terminal:

  • Managing Docker containers directly (docker ps, docker logs, docker restart)
  • Checking disk health and array status (mdcmd status, smartctl)
  • Monitoring file transfers and backup operations
  • Managing the drone-Meridian-Host build VM
  • File operations on the storage array

The terminal session authenticates using stored SSH credentials from the homelab service registry. Sessions have a configurable idle timeout (default: 30 minutes) after which the websocket connection is closed.
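The idle-timeout behavior can be sketched with a small session tracker. The class name and injectable clock are illustrative conveniences, not the panel's actual code:

```python
import time

class TerminalSession:
    """Track websocket idle time; close after the configured timeout."""

    def __init__(self, idle_timeout_s: float = 30 * 60, clock=time.monotonic):
        self._clock = clock
        self.idle_timeout_s = idle_timeout_s
        self._last_activity = clock()
        self.closed = False

    def touch(self) -> None:
        # Called on every keystroke or output chunk.
        self._last_activity = self._clock()

    def check(self) -> bool:
        """Return True (and mark closed) once the idle timeout elapses."""
        if self._clock() - self._last_activity >= self.idle_timeout_s:
            self.closed = True
        return self.closed
```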

Probe Studio (/admin/probe-studio)

The probe studio at /admin/probe-studio manages batch probe operations across the Arcturus-Prime infrastructure. Probes are automated checks that verify the health and correctness of services, content, and configurations.

Probe Types

  • HTTP probes — check that URLs return expected status codes and response bodies
  • DNS probes — verify DNS records resolve correctly for all Arcturus-Prime domains
  • SSL probes — check certificate validity, expiry dates, and chain completeness
  • Content probes — scan content files for broken links, missing images, and frontmatter issues
  • Service probes — verify that required services are running and responding on expected ports

Batch Management

The probe studio allows creating probe sessions that run multiple probes in sequence or parallel. A session defines:

  • Which probes to run
  • Execution order and parallelism settings
  • Success/failure thresholds
  • Notification targets (who gets alerted on failures)

Sessions can be triggered manually, scheduled on a cron basis, or triggered by events (deploy completion, content update). Results are stored and viewable in the probe studio with trend analysis showing probe health over time.
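A session's execution model — run probes sequentially or in parallel, then compare failures against a threshold — can be sketched like this (names and result shape are assumptions):

```python
from concurrent.futures import ThreadPoolExecutor

def run_session(probes, parallelism: int = 1, failure_threshold: int = 0):
    """Run a batch of probe callables and report whether the session passed.

    Each probe returns True on success; the session fails once the
    number of failures exceeds `failure_threshold`.
    """
    if parallelism <= 1:
        results = [probe() for probe in probes]
    else:
        with ThreadPoolExecutor(max_workers=parallelism) as pool:
            results = list(pool.map(lambda p: p(), probes))
    failures = sum(1 for ok in results if not ok)
    return {"passed": failures <= failure_threshold, "failures": failures}
```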

API: /api/gateway

The gateway API at /api/gateway serves as the primary proxy for infrastructure probe requests. It routes requests to internal services, adds authentication, handles timeouts, and normalizes response formats. All probe traffic flows through this gateway to maintain a single egress point with consistent logging and rate limiting.
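Response normalization might look like the sketch below. The envelope fields (`ok`, `status`, `data`, `error`) are an assumption; the gateway's actual format is not specified in this document:

```python
def normalize_response(upstream: dict) -> dict:
    """Wrap an upstream probe reply in a uniform gateway envelope."""
    status = upstream.get("status", 0)
    success = 200 <= status < 300
    return {
        "ok": success,
        "status": status,
        "data": upstream.get("body") if success else None,
        "error": None if success else upstream.get("body"),
    }
```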

Tags: infrastructure, homelab, proxmox, servers, build-swarm, terminal, noVNC, xterm