Sandbox & Playground
A hub for unique pages not reachable through module nav, public demo provisioning with time-limited sessions, a sandboxed AI workbench, and playground node-failover controls
Four pages cover experimental, demo, and infrastructure failover tools: the Sandbox index (/admin/sandbox), Demo Landing (/admin/sandbox/demo), Demo Workbench (/admin/sandbox/workbench), and Playground (/admin/playground). The sandbox section handles experimental page cataloging and public demo access. The playground provides Proxmox node switching and failover controls.
Sandbox Index (/admin/sandbox)
The sandbox index is a hub for pages not reachable through module navigation. If a page has a module that puts it in the admin sidebar, it does not belong in the sandbox. This keeps the sandbox clean and focused on discovery of experimental, orphaned, and generated content.
Design Philosophy (Updated 2026-02-27)
The sandbox was restructured to follow a strict rule: unique pages only. Previously it contained 10 sections with 37+ links, many duplicating pages already available through their module’s nav. Now it contains only 6 focused sections:
- Demo & Showcase — sandbox-owned demo landing and workbench pages
- Command Center Experiments — `/command/*` routes (no module claims these)
- Playground Routes — `/playground/*` routes (no module claims these)
- Ansible Automation — `/ansible/*` routes (no module claims these)
- Legacy & Orphaned Pages — quick-access links to legacy pages now claimed by modules (`/admin/build`, `/admin/swarm`, `/admin/site-test`)
- OpenClaw Generated — placeholder section for pages generated by OpenClaw agents. Currently empty; the `openclawPages` array in the page source can be populated dynamically in the future
What Was Removed
Five sections were removed because their pages are reachable via module sidebar nav:
- “Recovered Admin Routes” (servers, workbench, proxmox — all have modules)
- “Argonaut Sub-Pages” (owned by argonaut module)
- “Content & Editing” (owned by content module)
- “Proxmox & Infrastructure” (owned by proxmox module)
- “Meridian-Host / Unraid” (mm-terminal owned by homelab module)
Demo Landing (/admin/sandbox/demo)
The demo landing page is the public-facing entry point for demonstration access to Arcturus-Prime. Unlike every other admin page, the demo landing does not require authentication — it is accessible to unauthenticated visitors.
Access Code Form
The page presents a simple form with a single field: an access code input. The form validates the code via `POST /api/demo/session` and on success sets an HttpOnly session cookie (`__argobox_demo`).
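A minimal sketch of how the client side of this form might assemble its request, assuming a JSON body with a `code` field (the field name and `buildSessionRequest` helper are illustrative; only the endpoint path comes from this page):

```typescript
// Hypothetical request builder for the access-code form.
// The /api/demo/session path is documented above; the body shape is assumed.
function buildSessionRequest(code: string): {
  url: string;
  init: { method: string; headers: Record<string, string>; credentials: string; body: string };
} {
  return {
    url: "/api/demo/session",
    init: {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      credentials: "include", // allow the server to set the HttpOnly cookie
      body: JSON.stringify({ code: code.trim() }),
    },
  };
}
```

A caller would then pass the result to `fetch(url, init)` and redirect on a 2xx response.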
Session Provisioning
On successful code validation, the server provisions a time-limited demo session:
- An in-memory session is created with a 30-minute TTL
- An HttpOnly cookie is set (`SameSite=Lax`, `Secure` in production, `Path=/`)
- The visitor is redirected to `/demo/admin` (the full admin mirror)
The 30-minute timer starts at session creation, not at first interaction. Sessions are rate-limited (DEMO_RATE_LIMIT, default 50 requests) and capped at DEMO_MAX_SESSIONS concurrent sessions (default 5).
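The provisioning rules above can be sketched as a small in-memory store. The names (`DemoSession`, `createSession`, `touch`) are illustrative; the 30-minute TTL, concurrency cap of 5, and rate limit of 50 mirror the documented defaults:

```typescript
import { randomUUID } from "crypto";

interface DemoSession {
  token: string;
  expiresAt: number; // epoch ms; the timer starts at creation, not first use
  requests: number;  // counted against the per-session rate limit
}

const TTL_MS = 30 * 60 * 1000; // 30-minute TTL
const MAX_SESSIONS = 5;        // DEMO_MAX_SESSIONS default
const RATE_LIMIT = 50;         // DEMO_RATE_LIMIT default
const sessions = new Map<string, DemoSession>();

function createSession(now = Date.now()): DemoSession | null {
  // Evict expired sessions before enforcing the concurrency cap.
  for (const [t, s] of sessions) if (s.expiresAt <= now) sessions.delete(t);
  if (sessions.size >= MAX_SESSIONS) return null; // cap reached
  const session: DemoSession = { token: randomUUID(), expiresAt: now + TTL_MS, requests: 0 };
  sessions.set(session.token, session);
  return session;
}

function touch(token: string, now = Date.now()): boolean {
  const s = sessions.get(token);
  if (!s || s.expiresAt <= now) return false; // unknown or expired
  if (++s.requests > RATE_LIMIT) return false; // rate-limited
  return true;
}
```

Because the store is in-memory, sessions do not survive a server restart, which is acceptable for a 30-minute demo lifetime.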
Demo Admin Mirror (Updated 2026-03-01)
After session creation, the visitor lands at /demo/admin/[...slug] which is a mirror-first page:
- An iframe loads the real admin page at `/admin/{slug}?demo_mirror=1&demo_embed=1`
- The middleware data interceptor (`src/lib/demo-api.ts`) returns synthetic data from 11 registered generators before real API handlers execute
- All mutations (POST/PUT/PATCH/DELETE) are blocked with a 403 “view-only” message
- CosmicLayout strips chrome (header, footer, sidebar, background) for the embedded view
- A MutationObserver redacts any residual real IPs, emails, tokens, and URLs in the rendered DOM
- A sticky amber `DemoModeBanner` indicates “DEMO MODE — Data is synthetic, Actions are view-only”
The visitor sees the real admin UI — same layout, same components, same sidebar — but with all data replaced by synthetic values. Route chip navigation above the iframe lets visitors switch between pages owned by the current module.
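The mutation-blocking rule is simple enough to sketch as a pure guard function (the function name and return shape are assumptions, not the actual middleware API):

```typescript
// Demo-mirror write guard: anything other than a read is rejected with 403.
const BLOCKED_METHODS = new Set(["POST", "PUT", "PATCH", "DELETE"]);

function guardDemoRequest(method: string): { status: number; body?: string } {
  if (BLOCKED_METHODS.has(method.toUpperCase())) {
    return { status: 403, body: "Demo mode is view-only" };
  }
  return { status: 200 }; // reads pass through to the synthetic-data layer
}
```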
For the full architecture, see Demo Mode Tour Security.
Demo Workbench (/admin/sandbox/workbench)
The demo workbench is a sandboxed AI chat interface provided to demo users. It exposes a subset of Argonaut’s capabilities within strict safety boundaries.
Authentication
The workbench authenticates via URL token parameter. The token is passed as ?token={session_token} in the URL, extracted on page load, and sent with every API request as Authorization: Demo {token}. This is distinct from the standard session-based authentication used by admin and member users. The Demo authorization scheme signals the backend to apply demo-specific rate limits, model restrictions, and content filters.
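The token flow described above can be sketched as follows, assuming the token rides in the page URL exactly as documented (`demoAuthHeader` is an illustrative helper, not the actual implementation):

```typescript
// Extract ?token= from the workbench URL and build the "Demo {token}"
// Authorization header described above.
function demoAuthHeader(pageUrl: string): Record<string, string> | null {
  const token = new URL(pageUrl).searchParams.get("token");
  if (!token) return null; // no token → not an authenticated demo user
  return { Authorization: `Demo ${token}` };
}
```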
Chat Interface
The workbench renders a chat interface similar to the Argonaut chat page but with reduced functionality:
- Model — fixed to the configured demo model (typically a free-tier model via OpenRouter). No model selection dropdown.
- System prompt — fixed demo system prompt introducing Arcturus-Prime and its capabilities. No custom system prompt support.
- RAG — enabled in safe mode only. Demo users can ask about Arcturus-Prime and get contextual answers from the knowledge base.
- History — conversation history is session-scoped. When the session expires, the history is deleted. No persistent conversation storage.
- Voice — voice input is disabled for demo sessions.
Countdown Timer
A countdown timer is prominently displayed in the workbench header showing the remaining session time in MM:SS format. The timer turns yellow at 5 minutes remaining and red at 1 minute remaining. At expiration, the workbench displays a “Session Expired” overlay with a link back to the demo landing page and an option to request a new access code.
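The display logic reduces to a small pure function, sketched here with the documented thresholds (yellow at 5 minutes, red at 1 minute); the color names and function name are assumptions:

```typescript
// MM:SS countdown formatting with the documented warning thresholds.
function formatCountdown(remainingMs: number): { label: string; color: string } {
  const totalSec = Math.max(0, Math.floor(remainingMs / 1000));
  const mm = String(Math.floor(totalSec / 60)).padStart(2, "0");
  const ss = String(totalSec % 60).padStart(2, "0");
  const color = totalSec <= 60 ? "red" : totalSec <= 300 ? "yellow" : "default";
  return { label: `${mm}:${ss}`, color };
}
```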
Task Dispatch
The demo workbench supports a limited task dispatch capability. When the AI generates a command or script as part of its response, a “Run in Sandbox” button appears next to the code block. Clicking this button dispatches the command to a tmux session running on the demo sandbox server. The tmux session provides an isolated execution environment with:
- Restricted filesystem access (read-only to demo-specific directories)
- No network access beyond localhost
- Resource limits (CPU time, memory)
- Automatic cleanup on session expiration
Output Polling
After dispatching a task, the workbench polls for output from the tmux session. Output is streamed back into the chat interface as a system message showing the command’s stdout and stderr. Polling runs at 1-second intervals while a task is active and stops when the command completes or times out (60-second maximum execution time).
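The polling behavior can be sketched generically; `fetchOutput` stands in for the real tmux-output endpoint (an assumption), while the 1-second interval and 60-second cap come from this page:

```typescript
interface TaskOutput { done: boolean; stdout: string; stderr: string }

// Poll until the task reports completion or the timeout elapses,
// returning whatever output was last observed.
async function pollTaskOutput(
  fetchOutput: () => Promise<TaskOutput>,
  intervalMs = 1000,   // documented 1-second polling interval
  timeoutMs = 60_000,  // documented 60-second maximum execution time
): Promise<TaskOutput> {
  const deadline = Date.now() + timeoutMs;
  for (;;) {
    const out = await fetchOutput();
    if (out.done || Date.now() >= deadline) return out;
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
}
```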
Playground (/admin/playground)
The playground page at /admin/playground is an infrastructure administration tool for managing Proxmox node switching and failover between the two hypervisors.
Node Switching
The primary function is switching the active Proxmox node that backs various Arcturus-Prime services. Two nodes are available:
- Proxmox Izar-Host — primary node at 10.42.0.2 on the Milky Way local network. Lower latency, higher availability for local services.
- Proxmox Tarn-Host — secondary node at 192.168.20.100 on the Andromeda remote network, accessed via Tailscale. Higher latency (~35-45ms) but provides geographic redundancy.
The node switcher shows both nodes side by side with their current status, resource utilization, running VM/CT count, and network latency. A toggle control switches the active node for services that support failover (build drones, pentest VMs, development containers). Switching the active node updates the API proxy routes so subsequent requests are routed to the selected hypervisor.
Failover Control
The failover section manages automatic and manual failover behavior:
- Auto-failover — when enabled, the system automatically switches to the secondary node if the primary becomes unreachable. Health checks run every 30 seconds. Three consecutive failures trigger failover. A notification is sent when failover activates.
- Manual failover — a button to immediately fail over to the secondary node. Used for planned maintenance on the primary. A confirmation dialog warns that in-progress operations on the primary may be interrupted.
- Failback — after a failover event, the system does not automatically fail back. A manual failback button is provided to return to the primary node once it is healthy again. This prevents flapping between nodes during intermittent connectivity issues.
Health Monitoring
The health monitoring panel shows real-time status for both nodes:
| Metric | Proxmox Izar-Host (10.42.0.2) | Proxmox Tarn-Host (192.168.20.100) |
|---|---|---|
| Status | Online/Offline | Online/Offline |
| CPU | Utilization % | Utilization % |
| Memory | Used / Total | Used / Total |
| Storage | Used / Total per pool | Used / Total per pool |
| VMs Running | Count | Count |
| CTs Running | Count | Count |
| Latency | < 1ms (local) | 35-45ms (Tailscale) |
| Uptime | Days/hours | Days/hours |
Health data is polled every 10 seconds. The Proxmox Izar-Host data is fetched directly via the local network. The Proxmox Tarn-Host data is fetched through the /api/Tarn-Host-adminbox proxy which routes through Tailscale to 192.168.20.100. Both nodes expose the standard Proxmox API at port 8006.
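The per-node endpoint selection described above might look like the following; the `/api2/json` path segment follows the standard Proxmox API convention but is an assumption here, as is the helper name:

```typescript
// Pick the health-check endpoint per node: Izar-Host is queried directly
// on the local network; Tarn-Host goes through the Tailscale proxy route.
function healthEndpoint(node: "izar" | "tarn"): string {
  return node === "izar"
    ? "https://10.42.0.2:8006/api2/json/nodes"     // direct, local network
    : "/api/Tarn-Host-adminbox/api2/json/nodes";   // proxied via Tailscale
}
```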
Node History
A history panel logs all node switching events: timestamp, trigger (manual or auto-failover), source node, destination node, and the user who initiated the switch (or “system” for auto-failover). This provides an audit trail for infrastructure changes and helps diagnose recurring failover events.