
Playgrounds Overview

Architecture and component breakdown of the Arcturus-Prime interactive playground system -- lab engine provisioning, session management, challenge tracking, and dual-node failover

February 23, 2026

The playground system lives at /playground and provides 11 interactive labs spanning container provisioning, VM-based graphical environments, AI exercises, and client-side simulations. Every playground page is statically generated at build time (output: 'static' in the Astro config) — the HTML shells are pure SSG, and all interactivity (lab launching, terminal connections, challenge progress) happens client-side.

Hub Page

/playground/index.astro renders the hub. It defines every lab as a data object with metadata: id, name, description, difficulty, time, tags, color, href, mode, and skills. The mode field determines the lab type:

  • live — Provisions real LXC containers or QEMU VMs via the lab engine (containers, terminal, networking, iac, argo-os, ollama, rag).
  • live-data — Reads real APIs but does not provision infrastructure (infrastructure, monitoring).
  • simulation — Fully client-side, no backend dependency (build-swarm, apkg-tutorial).
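
The lab metadata described above can be sketched as a TypeScript shape. The field names (id, name, description, difficulty, time, tags, color, href, mode, skills) and the three mode values come from the hub page; the concrete types and the sample values are assumptions for illustration.

```typescript
// Hypothetical shape of a hub lab entry; field names match the hub page,
// but the exact types and sample values are illustrative assumptions.
type LabMode = "live" | "live-data" | "simulation";

interface Lab {
  id: string;
  name: string;
  description: string;
  difficulty: "beginner" | "intermediate" | "advanced" | "expert";
  time: string;        // estimated completion time, e.g. "20 min"
  tags: string[];
  color: string;       // accent color for the lab card
  href: string;        // playground route
  mode: LabMode;
  skills: string[];
}

// Example entry for the containers lab (values illustrative).
const containersLab: Lab = {
  id: "containers",
  name: "Container Basics",
  description: "Provision and explore a real LXC container.",
  difficulty: "beginner",
  time: "20 min",
  tags: ["lxc", "linux"],
  color: "#22c55e",
  href: "/playground/containers",
  mode: "live",
  skills: ["lxc", "shell"],
};

// Only "live" labs hit the lab engine for provisioning.
const needsProvisioning = (lab: Lab) => lab.mode === "live";
```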

The hub displays stats at the top (total cores, storage, networks, drones, container count) and renders each lab as a card with difficulty badge, estimated time, and feature list.

Lab Architecture

The provisioning pipeline for live labs follows this flow:

LabLauncher → health check → user clicks Launch
  → POST /api/labs/create (lab engine at 10.42.0.210:8094)
  → lab engine provisions LXC container (or QEMU VM for argo-os)
  → poll GET /api/labs/{sessionId} until status = running
  → dispatch 'lab-ready' CustomEvent
  → TerminalEmbed opens WebSocket to /ws/terminal/{sessionId}
  → (for argo-os: VNCEmbed opens WebSocket to /ws/vnc/{sessionId})
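
The poll step in the pipeline above can be sketched as a small async helper. The endpoint path (`GET /api/labs/{sessionId}`) and the "running" status come from the flow; the injected fetcher, the `{ status }` payload shape, and the "error" terminal state are assumptions made so the helper can run without a live lab engine.

```typescript
// Minimal poll loop for the pipeline's third step: repeatedly GET
// /api/labs/{sessionId} until the session reports status = "running".
// The fetcher is injected for testability; the payload shape is assumed.
type Fetcher = (url: string) => Promise<{ status: string }>;

async function waitUntilRunning(
  sessionId: string,
  fetchStatus: Fetcher,
  { intervalMs = 1000, timeoutMs = 30_000 } = {},
): Promise<void> {
  const deadline = Date.now() + timeoutMs;
  while (Date.now() < deadline) {
    const { status } = await fetchStatus(`/api/labs/${sessionId}`);
    if (status === "running") return;          // container up: dispatch 'lab-ready'
    if (status === "error") throw new Error("provisioning failed");
    await new Promise((r) => setTimeout(r, intervalMs));
  }
  throw new Error("provisioning timed out");
}
```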

LabLauncher Component

src/components/labs/LabLauncher.astro manages the full launch lifecycle. It accepts a templateId prop and moves between five visual states: connecting, idle, provisioning, error, and none (hidden). On mount, it checks localStorage for an existing session and attempts reconnection; otherwise it runs a health check against /api/labs/health with up to 2 retries.

Provisioning steps display as animated indicators. For the argo-os-experience template (QEMU), the steps update to reflect VM cloning and desktop boot, and the timeout extends from 30 to 90 seconds. If the lab engine is unreachable, the launcher offers a simulation fallback.
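
The per-template timeout described above reduces to a one-line lookup. The template id (argo-os-experience) and the 30-second and 90-second values are from the text; treating every other template as the LXC default is an assumption.

```typescript
// Provisioning timeout per template: the QEMU-backed argo-os-experience
// template needs a longer window (VM clone + desktop boot) than LXC
// templates. Values come from the launcher description; defaulting all
// other templates to 30 seconds is an assumption.
function provisioningTimeoutMs(templateId: string): number {
  return templateId === "argo-os-experience" ? 90_000 : 30_000;
}
```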

Lab Engine

The lab engine runs at 10.42.0.210:8094 on Proxmox Izar-Host (Milky Way site). In production, browser requests go through labs.Arcturus-Prime.com via Cloudflare Tunnel. In local development, Vite proxies /api/labs/* and /ws/* to the engine directly. All requests are HMAC-SHA256 signed using the PUBLIC_LAB_API_SECRET environment variable — the engine rejects unsigned or expired requests.
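
A request signer for the HMAC-SHA256 scheme might look like the sketch below. The algorithm and the shared-secret environment variable are from the text; the header names, the canonical-string format, and the timestamp-based expiry convention are assumptions about the engine's contract.

```typescript
import { createHmac } from "node:crypto";

// Hypothetical signer for lab engine requests: the engine verifies an
// HMAC-SHA256 digest and rejects unsigned or stale requests. The header
// names and canonical string below are illustrative assumptions.
function signRequest(
  method: string,
  path: string,
  body: string,
  secret: string,            // e.g. PUBLIC_LAB_API_SECRET
  timestampMs: number = Date.now(),
): Record<string, string> {
  const canonical = `${method}\n${path}\n${timestampMs}\n${body}`;
  const signature = createHmac("sha256", secret).update(canonical).digest("hex");
  return {
    "X-Lab-Timestamp": String(timestampMs),  // lets the engine reject expired requests
    "X-Lab-Signature": signature,
  };
}
```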

The lab engine creates containers on the isolated lab network vmbr99 (10.99.0.0/24), which is firewalled off from the production network. Containers are ephemeral and auto-destroy after 60 minutes; the lifetime can be extended via the session extend endpoint.

Session Management

src/components/labs/SessionBar.astro renders a fixed bottom bar once a lab session is active. It displays the session ID, a countdown timer, CPU/memory bars, a +15m extend button, and an End Lab button. The controller activates on the lab-ready event, polls resource stats every 10 seconds, and color-codes the timer: green by default, yellow under 10 minutes, and blinking red under 5 minutes. At zero the session auto-destroys. A beforeunload listener also destroys the lab if the user closes the tab.
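
The timer color thresholds reduce to a small pure function. The 10-minute and 5-minute cutoffs are from the SessionBar description; representing the states as strings is an assumption.

```typescript
// Timer color coding from the SessionBar description: green above
// 10 minutes remaining, yellow under 10, blinking red under 5.
function timerColor(secondsLeft: number): "green" | "yellow" | "red-blink" {
  if (secondsLeft < 5 * 60) return "red-blink";
  if (secondsLeft < 10 * 60) return "yellow";
  return "green";
}
```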

Sessions persist in localStorage under argobox_lab_sessions as StoredSession objects. Returning to a lab page with an active session triggers automatic reconnection.
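
A minimal sketch of that persistence layer is below. The localStorage key (argobox_lab_sessions) and the 60-minute lifetime are from the docs; the StoredSession fields, the per-template map layout, and the injected storage interface (so the helpers run outside a browser) are assumptions.

```typescript
// Session persistence sketch. StoredSession fields and the map-by-template
// layout are assumptions; the key name comes from the docs.
interface StoredSession {
  sessionId: string;
  templateId: string;
  expiresAt: number; // epoch ms; sessions auto-destroy after 60 minutes
}

// Minimal storage interface so this runs without a browser localStorage.
type KV = { getItem(k: string): string | null; setItem(k: string, v: string): void };

const KEY = "argobox_lab_sessions";

function saveSession(store: KV, s: StoredSession): void {
  const all: Record<string, StoredSession> = JSON.parse(store.getItem(KEY) ?? "{}");
  all[s.templateId] = s;
  store.setItem(KEY, JSON.stringify(all));
}

function loadSession(store: KV, templateId: string, now = Date.now()): StoredSession | null {
  const all: Record<string, StoredSession> = JSON.parse(store.getItem(KEY) ?? "{}");
  const s = all[templateId];
  return s && s.expiresAt > now ? s : null; // expired sessions are ignored
}
```

Returning to a lab page would then call loadSession first and only show the launch UI when it returns null.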

Challenge Tracking

src/components/labs/ChallengeTracker.astro provides a sidebar panel of guided exercises. Each challenge group contains tasks with optional expandable details: step-by-step instructions, a copyable command, hints, and expected output. Progress persists in localStorage keyed by Arcturus-Prime-challenges-{templateId}. The tracker supports difficulty tiers (beginner, intermediate, advanced, expert) with color-coded badges, tier filtering, and an overall progress bar.
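
The persistence key and the overall progress bar can be sketched as two small helpers. The key pattern (Arcturus-Prime-challenges-{templateId}) is from the docs; the completed-task-set representation and the percentage math are assumptions.

```typescript
// Challenge progress sketch. The key pattern comes from the docs; the
// Set-based completion model is an illustrative assumption.
function challengeKey(templateId: string): string {
  return `Arcturus-Prime-challenges-${templateId}`;
}

// Overall progress bar value: completed tasks over all tasks, rounded.
function progressPercent(completed: Set<string>, allTaskIds: string[]): number {
  if (allTaskIds.length === 0) return 0;
  const done = allTaskIds.filter((id) => completed.has(id)).length;
  return Math.round((done / allTaskIds.length) * 100);
}
```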

Health and Failover API Endpoints

Four SSR API routes handle playground infrastructure management. These are server-rendered (not static) and require admin authentication.

  • GET /api/playground/health — Fast-poll health proxy (2-second cadence); returns dual-node status from the playground switch service
  • GET/POST /api/playground/switch — Proxy to playground-switch.Arcturus-Prime.com for active node switching
  • POST /api/playground/node-control — Admin control for node operations (start, stop, failover)
  • GET /api/playground/status — Aggregated playground status

Dual-Node Failover

The playground infrastructure spans two Proxmox nodes for resilience:

  • Primary: Proxmox Izar-Host at 10.42.0.2 (Milky Way site) — runs the lab engine at 10.42.0.210:8094
  • Secondary: Proxmox Tarn-Host at 192.168.20.100 (Andromeda site) — failover target

The playground switch service monitors both nodes and can redirect lab provisioning to the secondary node if the primary becomes unreachable. The /api/playground/switch endpoint proxies these operations server-side to avoid CORS issues.
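
The CORS-avoidance idea is that the browser only ever calls a same-origin SSR route, which forwards to the switch service. A sketch of the URL rewrite at the heart of that proxy is below; the upstream host is from the docs, while preserving the path and query verbatim is an assumption about the upstream API.

```typescript
// Rewrites an incoming same-origin request URL to the upstream switch
// service. Keeping path and query unchanged is an assumption.
function upstreamUrl(requestUrl: string): string {
  const u = new URL(requestUrl);
  return `https://playground-switch.Arcturus-Prime.com${u.pathname}${u.search}`;
}

// A hypothetical SSR handler would then forward the request, e.g.:
//   const res = await fetch(upstreamUrl(request.url), { method: request.method });
//   return new Response(await res.text(), { status: res.status });
```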

Isolated Lab Network

All lab containers and VMs run on a dedicated Proxmox bridge network:

  • Bridge: vmbr99
  • Subnet: 10.99.0.0/24 (gateway 10.99.0.1)
  • Isolation: No routing to the production 10.42.0.0/24 or 192.168.20.0/24 networks
  • Firewall: Lab containers cannot reach the internet or other lab containers outside their session

This isolation ensures visitors cannot interact with production infrastructure regardless of what they run inside their lab containers.

Tags: playgrounds, labs, architecture, proxmox, lab-engine