Command Center (status.Arcturus-Prime.com)
Live infrastructure dashboard powered by Flask, deployed on Altair-Link via Docker
The Command Center is the live infrastructure dashboard at status.Arcturus-Prime.com. It pulls real-time data from the build swarm, monitoring stack, and infrastructure services, then presents it in a clean web interface that anyone can view. It’s the public face of the homelab’s operational status.
Stack
- Backend: Flask 3.0 (Python)
- Frontend: HTML, CSS, vanilla JavaScript
- Deployment: Docker container on Altair-Link
- Access: Cloudflare Tunnel at status.Arcturus-Prime.com
- Repository: `~/Development/Arcturus-Prime-command-center/`
No React. No Next.js. No build step for the frontend. Flask serves Jinja2 templates with plain JavaScript for the interactive bits. This is a dashboard, not a SPA — it needs to load fast, display data, and not get in the way.
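The pattern is ordinary server-rendered Flask. A minimal sketch (the route and template below are illustrative, not the actual Command Center code — `render_template_string` stands in for the real template files):

```python
from flask import Flask, render_template_string

app = Flask(__name__)

# Stand-in for a real Jinja2 template file under templates/.
PAGE = """
<h1>Command Center</h1>
<p>Swarm status: {{ swarm_status }}</p>
"""

@app.route("/")
def dashboard():
    # In the real app this value would come from the gateway/orchestrator.
    return render_template_string(PAGE, swarm_status="active")
```

The browser gets finished HTML on first load; the vanilla JS only has to update numbers afterward.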
Infrastructure
Where It Runs
The Command Center runs as a Docker container on Altair-Link (10.42.0.199), exposed on port 8093. Altair-Link is the services host on the Milky Way — it runs the build swarm gateway, monitoring stack, and a handful of other Docker services.
The container is exposed through a Cloudflare Tunnel, which makes it publicly accessible at status.Arcturus-Prime.com without opening any ports on the router. Cloudflare handles TLS, DDoS protection, and caching. The tunnel connects from Altair-Link directly to Cloudflare’s edge network.
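A cloudflared ingress configuration for this setup might look like the sketch below — the tunnel ID and credentials path are placeholders, not the real values:

```yaml
# Illustrative cloudflared config.yml; <tunnel-id> is a placeholder.
tunnel: <tunnel-id>
credentials-file: /root/.cloudflared/<tunnel-id>.json

ingress:
  - hostname: status.Arcturus-Prime.com
    service: http://localhost:8093
  - service: http_status:404
```

The final catch-all rule is required by cloudflared; anything that doesn't match the hostname gets a 404.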
Environment Variables
The Command Center needs to know where to find the services it queries:
| Variable | Value | Purpose |
|---|---|---|
| `GATEWAY_URL` | `http://10.42.0.199:8090` | Build swarm gateway |
| `ORCHESTRATOR_URL` | `http://10.42.0.201:8080` | Build swarm orchestrator |
| `LOKI_URL` | `http://10.42.0.199:3100` | Loki log aggregation |
| `PROMETHEUS_URL` | `http://10.42.0.199:9090` | Prometheus metrics |
These are all internal Milky Way addresses. The Command Center container runs on the same Docker network as the gateway and monitoring stack on Altair-Link, so it has direct access to everything at 10.42.0.199. For the orchestrator at 10.42.0.201, it reaches across the LAN — same subnet, no Tailscale needed.
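Reading these in the Flask app is straightforward; a sketch of the pattern (variable names match the table above, the dictionary keys and fallback defaults are illustrative):

```python
import os

# Each service URL comes from the environment, with the documented
# internal address as an illustrative fallback default.
SERVICE_URLS = {
    "gateway": os.environ.get("GATEWAY_URL", "http://10.42.0.199:8090"),
    "orchestrator": os.environ.get("ORCHESTRATOR_URL", "http://10.42.0.201:8080"),
    "loki": os.environ.get("LOKI_URL", "http://10.42.0.199:3100"),
    "prometheus": os.environ.get("PROMETHEUS_URL", "http://10.42.0.199:9090"),
}
```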
API Endpoints
The Command Center exposes its own API that aggregates data from multiple backend services. These endpoints power both the Command Center’s own frontend and the Arcturus-Prime website (Arcturus-Prime.com), which consumes them via proxy.
Public Endpoints
| Endpoint | Source | Description |
|---|---|---|
| `/api/v1/services/public/build-swarm` | Gateway + Orchestrator | Current swarm status, drone states, version |
| `/api/v1/services/public/build-history` | Orchestrator | Recent build completions with timestamps |
| `/api/v1/services/public/infrastructure` | Prometheus + direct | Infrastructure overview — hosts, services, uptime |
| `/api/v1/services/public/queue` | Orchestrator | Build queue depth, pending/building/completed counts |
All public endpoints return sanitized data. Hostnames, IPs, and internal identifiers pass through the Galactic Identity System before being included in responses. The Command Center handles this sanitization server-side in the Flask routes — it queries internal services with real names and translates before responding.
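The translation step might look like the sketch below. The real mapping lives in the Galactic Identity System; the `ALIASES` table and `sanitize` function here are hypothetical stand-ins for it:

```python
# Hypothetical stand-in for the Galactic Identity System mapping.
ALIASES = {
    "10.42.0.199": "services-host",
    "10.42.0.201": "orchestrator-host",
}

def sanitize(value):
    """Recursively replace internal names/IPs before a response goes out."""
    if isinstance(value, dict):
        return {k: sanitize(v) for k, v in value.items()}
    if isinstance(value, list):
        return [sanitize(v) for v in value]
    if isinstance(value, str):
        for internal, public in ALIASES.items():
            value = value.replace(internal, public)
    return value
```

Running every outgoing payload through one function like this keeps the sanitization in a single place instead of scattered across routes.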
Response Example
```json
{
  "status": "ok",
  "timestamp": "2026-02-23T14:30:00Z",
  "data": {
    "swarm": {
      "version": "2.6.0",
      "status": "active",
      "orchestrator": "online",
      "drones": {
        "total": 4,
        "online": 3,
        "building": 2,
        "offline": 1
      }
    },
    "queue": {
      "pending": 8,
      "building": 3,
      "completed": 142,
      "failed": 1
    }
  }
}
```
How Arcturus-Prime Consumes Command Center Data
The Arcturus-Prime website at Arcturus-Prime.com doesn’t query the Command Center directly from the browser — that would require CORS headers and expose the internal service URL. Instead, Arcturus-Prime uses server-side proxy endpoints.
The Astro site defines /api/proxy/* routes that forward requests to the Command Center on the backend. When a visitor loads the infrastructure page on Arcturus-Prime.com, the page’s JavaScript calls /api/proxy/build-swarm, which the Astro server (or Cloudflare Worker in production) proxies to http://10.42.0.199:8093/api/v1/services/public/build-swarm.
This keeps the Command Center’s address internal while still delivering live data to the public website. The browser only ever talks to Arcturus-Prime.com.
Proxy Flow
```
Browser → Arcturus-Prime.com/api/proxy/build-swarm
  → Cloudflare Worker → 10.42.0.199:8093/api/v1/services/public/build-swarm
  → Command Center (Flask) → queries gateway/orchestrator
  → Sanitized response → back to browser
```
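The core of the proxy is just a path rewrite. A sketch of that mapping, written in Python for consistency with the rest of this doc (the real implementation is an Astro route / Cloudflare Worker, and the function name here is illustrative):

```python
# Internal Command Center address, kept server-side only.
COMMAND_CENTER = "http://10.42.0.199:8093"

def upstream_url(proxy_path: str) -> str:
    """Map a public /api/proxy/* path to the internal Command Center URL."""
    name = proxy_path.removeprefix("/api/proxy/")
    return f"{COMMAND_CENTER}/api/v1/services/public/{name}"
```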
Frontend
The Command Center frontend is intentionally simple. Jinja2 templates render the initial page state server-side, and vanilla JavaScript handles live updates via polling (every 15 seconds) and optional WebSocket connections for real-time build progress.
Key Views
Dashboard: The landing page. Shows overall swarm status, active builds, drone health, and recent build history in a single view. Think of it as the “mission control” screen — everything important at a glance.
Build History: A paginated table of completed builds with package names, build times, drone assignments, and success/failure status. Filterable by drone, date range, and build status.
Infrastructure: Shows the infrastructure topology — which hosts are up, which services are running, disk usage, memory, and CPU metrics pulled from Prometheus.
Queue: Real-time view of the build queue. Shows pending packages, in-progress builds with estimated completion times, and recently completed packages. Updates live via WebSocket when connected.
No Framework, No Problem
The frontend doesn’t use React, Vue, Svelte, or any JavaScript framework. It’s HTML templates with CSS Grid/Flexbox for layout and vanilla JS for interactivity. The total JavaScript payload is under 30KB.
This is deliberate. The Command Center is an internal tool that grew a public face. It doesn’t need a virtual DOM, state management, or component lifecycle hooks. It needs to display numbers and update them periodically. Vanilla JS does that fine.
Deployment
The Deploy Script
Deployment is handled by ./deploy.sh in the repository root. It’s a straightforward rsync + Docker restart:
```bash
#!/bin/bash
# deploy.sh - Deploy Command Center to Altair-Link
set -euo pipefail  # abort on the first failed command so a bad rsync never triggers a restart

# Sync files to Altair-Link
rsync -avz --delete \
    --exclude '.git' \
    --exclude '__pycache__' \
    --exclude '.env' \
    ./ [email protected]:/root/Arcturus-Prime-command-center/

# Restart the Docker container
ssh [email protected] "cd /root/Arcturus-Prime-command-center && docker compose down && docker compose up -d"

echo "Command Center deployed."
```
That’s it. No CI/CD pipeline, no staging environment, no blue-green deployment. rsync the files, restart Docker. The container rebuilds from the local Dockerfile, picks up the new code, and starts serving.
Why rsync?
Because it works. The Command Center is a single-developer project deployed to a single host. A full CI/CD pipeline with GitHub Actions, container registries, and deployment manifests would be engineering theater for a project that deploys maybe twice a week. rsync gives you atomic file updates, SSH gives you the restart command, and the whole deploy takes under 10 seconds.
Docker Compose
The Docker Compose configuration on Altair-Link:
```yaml
version: '3.8'

services:
  command-center:
    build: .
    container_name: Arcturus-Prime-command-center
    restart: unless-stopped
    ports:
      - "8093:5000"
    environment:
      - GATEWAY_URL=http://10.42.0.199:8090
      - ORCHESTRATOR_URL=http://10.42.0.201:8080
      - LOKI_URL=http://10.42.0.199:3100
      - PROMETHEUS_URL=http://10.42.0.199:9090
    networks:
      - Arcturus-Prime-net

networks:
  Arcturus-Prime-net:
    external: true
```
Port 8093 on the host maps to Flask’s default port 5000 inside the container. The Arcturus-Prime-net Docker network is shared with the gateway and monitoring stack containers, giving the Command Center direct access to those services without going through the host network.
Development
Local Setup
```bash
cd ~/Development/Arcturus-Prime-command-center/
python3 -m venv venv
source venv/bin/activate
pip install -r requirements.txt

# Set environment variables
export GATEWAY_URL=http://10.42.0.199:8090
export ORCHESTRATOR_URL=http://10.42.0.201:8080
export LOKI_URL=http://10.42.0.199:3100
export PROMETHEUS_URL=http://10.42.0.199:9090

# Run Flask dev server
flask run --debug --port 8093
```
Running locally on Capella-Outpost (10.42.0.100) works fine since it’s on the same Milky Way as all the backend services. The Flask dev server with --debug gives you hot reload on code changes.
Testing Against Live Services
The Command Center queries live infrastructure services. There’s no mock layer or test fixtures — when you run it locally, you’re hitting the real gateway, real orchestrator, and real Prometheus. This means you need the actual services running to develop the Command Center, but it also means what you see locally is exactly what production will show.
If the orchestrator is down while you’re developing, the Command Center handles it gracefully — it shows “offline” status with the last known data timestamp rather than throwing errors. Every external call has a timeout and fallback.
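That "timeout + fallback" pattern can be sketched as a small helper — the function name, timeout value, and fallback shape below are illustrative, not the actual Command Center code:

```python
import json
import urllib.request

def fetch_json(url, timeout=2.0, fallback=None):
    """Fetch JSON from an internal service; degrade gracefully if it's down."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return json.load(resp)
    except OSError:
        # Connection refused or timed out - report offline instead of raising.
        return fallback if fallback is not None else {"status": "offline"}
```

Every route that queries the gateway, orchestrator, or Prometheus goes through a wrapper like this, so a dead backend turns into an "offline" badge rather than a 500.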