The Lab Engine: Giving Every Visitor Their Own Linux Box
Portfolio sites are boring.
I know because I built one. Screenshots of dashboards. Bullet points about technologies. A resume reformatted into HTML with some CSS transitions. Recruiters scan it for 6 seconds, check for keywords, move on. Maybe 30 seconds if the design is interesting.
I stared at argobox.com for a while and thought: I have Proxmox clusters. I have 66 cores. I have the infrastructure sitting right there. What if instead of telling people about it, I just… let them use it?
What if every visitor could get their own Linux box, right in their browser, running on the actual hardware?
So I built that.
The Pitch
Click a button. Get a real Linux terminal. Not a simulation. Not a fancy JavaScript terminal emulator pretending to be a shell. A real LXC container, on real hardware, with a real kernel, running on my Proxmox infrastructure.
You get 60 minutes. Then it's gone. No trace. No persistent storage. Just a temporary playground that exists for exactly as long as you need it.
The whole thing is called the Lab Engine, and it's a FastAPI backend that orchestrates ephemeral containers on demand.
How It Works
The architecture is straightforward if you squint hard enough.
Visitor clicks "Launch Lab" on the site. The frontend fires a request to the Lab Engine API. Nothing fancy here: just a POST with a template ID and some session metadata.
The Lab Engine talks to Proxmox. Using the proxmoxer library, it calls the Proxmox VE API on Izar-Orchestrator (10.42.0.201) and clones a template LXC container. Each template is a pre-built container image with specific tools and configurations already installed. The clone takes seconds.
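That clone call is a thin wrapper around the Proxmox VE API. Here is a minimal sketch of what the launch path might look like with proxmoxer; the node name ("izar"), template VMID (9000), the ephemeral VMID range, and the pool name ("labs") are illustrative assumptions, not the real configuration:

```python
# Sketch only. `api` is assumed to be an authenticated proxmoxer.ProxmoxAPI
# client; the node, VMIDs, and pool below are made-up placeholders.

def clone_params(session_id: int) -> dict:
    """Parameters for POST /nodes/{node}/lxc/{template_vmid}/clone."""
    return {
        "newid": 10000 + session_id,     # hypothetical ephemeral VMID range
        "hostname": f"lab-{session_id}",
        "pool": "labs",                  # the API token is scoped to this pool
    }

def launch(api, session_id: int, template_vmid: int = 9000) -> int:
    """Clone the template container, boot the clone, return its VMID."""
    params = clone_params(session_id)
    api.nodes("izar").lxc(template_vmid).clone.post(**params)
    api.nodes("izar").lxc(params["newid"]).status.start.post()
    return params["newid"]
```

Scoping the token to a pool means a bug in this code can only ever create or destroy containers inside that pool.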
The container boots. LXC containers don't have the overhead of full VMs. There's no BIOS, no bootloader, no kernel to load. They share the host kernel. Boot time is measured in seconds, not minutes.
The WebSocket terminal connects. This is where xterm.js comes in. The browser opens a WebSocket connection to the Lab Engine, which proxies it to the Proxmox serial console for that specific container. The result is a fully interactive terminal session (resize events, ANSI colors, tab completion, everything) running in a browser tab.
The session manager starts counting. Sixty minutes. That's the hard limit. When the timer expires, the Lab Engine calls the Proxmox API to stop and destroy the container. No exceptions. No extensions. Gone.
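The hard expiry can be as simple as an asyncio task armed at launch time. This is a sketch under the assumption that teardown is a single awaitable, not the actual session manager:

```python
import asyncio

SESSION_TTL = 60 * 60  # one hour, in seconds

async def enforce_ttl(vmid: int, destroy, ttl: int = SESSION_TTL) -> None:
    """Wait out the session, then tear the container down unconditionally.

    `destroy` stands in for whatever coroutine stops and deletes the LXC
    through the Proxmox API. No extensions: when the timer fires, it fires.
    """
    await asyncio.sleep(ttl)
    await destroy(vmid)
```

In a real service this would be spawned with `asyncio.create_task(enforce_ttl(vmid, destroy))` right after the clone succeeds, so the countdown runs independently of the WebSocket connection.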
```
Browser                    Lab Engine                     Proxmox (Izar-Orchestrator)
   │                           │                               │
   │  POST /api/lab/launch     │                               │
   │──────────────────────────>│                               │
   │                           │  Clone LXC template           │
   │                           │──────────────────────────────>│
   │                           │                               │
   │                           │  Container ID + status        │
   │                           │<──────────────────────────────│
   │                           │                               │
   │  WebSocket /ws/term       │                               │
   │──────────────────────────>│  Serial console proxy         │
   │                           │──────────────────────────────>│
   │                           │                               │
   │<── Real terminal I/O ────>│<──── Real terminal I/O ──────>│
   │                           │                               │
   │        ... 60 minutes later ...                           │
   │                           │                               │
   │                           │  Stop + Destroy container     │
   │                           │──────────────────────────────>│
   │  Connection closed        │                               │
   │<──────────────────────────│                               │
```
The Templates
Five lab environments. Each one is a different LXC template with different tools pre-installed.
Linux Fundamentals. The starter lab. Basic CLI, file operations, permissions, pipes, redirection. Aimed at people who've never touched a terminal before or want to brush up. It's a clean Debian container with some guided exercises baked in. Nothing intimidating.
Container Workshop. Docker inside LXC. Nested containers. Yes, it works: Proxmox supports nesting if you configure the LXC options correctly. Visitors can build images, run containers, mess with docker-compose. All inside an ephemeral container that disappears in an hour. The recursion is a little absurd and I love it.
Networking Lab. Multiple network interfaces, VLANs, routing tables, firewall rules. This one took the longest to set up because each container needs its own isolated network namespace with enough connectivity to be interesting but not enough to break anything. iptables, ip route, tcpdump: all the tools are there.
IaC Playground. Terraform and Ansible pre-installed. Visitors can write Terraform configs and apply them against a sandboxed provider, or run Ansible playbooks against localhost. It's a taste of infrastructure-as-code without needing to set up any cloud accounts.
Argo OS Experience. This one is personal. It's a Gentoo container with Portage configured, emerge ready to go, and a curated set of packages available. Visitors can experience what running a source-based distribution feels like. Try emerge --info. Poke around /etc/portage. See why I spent years building a custom distro. The compile times in a 60-minute container are… educational.
The Resource Problem
Here's the thing. The same Proxmox infrastructure that runs these labs also runs the build swarm: 66 cores across 5 machines, compiling Gentoo packages overnight. If a bunch of visitors spin up lab containers during a build cycle, those containers are stealing CPU and memory from active drone compilations.
That's bad. A failed build at 3 AM because someone was running apt install cowsay in a demo container is not acceptable.
So the Lab Engine has "drone awareness." It monitors build swarm load on Izar-Orchestrator and throttles lab capacity when builds are active. If the orchestrator reports drones are compiling, the Lab Engine reduces the maximum concurrent labs. If there's a heavy build running, it might refuse new launches entirely with a friendly "infrastructure is busy, try again later" message.
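The throttling logic reduces to a small policy function. A hedged sketch; the caps and the shape of the load signal here are my illustration, not the actual numbers:

```python
# Illustrative caps; the real values depend on available cores and RAM.
MAX_LABS_IDLE = 10      # when the build swarm is quiet
MAX_LABS_BUILDING = 3   # while drones are compiling

def lab_capacity(drones_compiling: int, heavy_build: bool) -> int:
    """How many concurrent labs the engine will allow right now."""
    if heavy_build:
        return 0                  # refuse new launches entirely
    if drones_compiling > 0:
        return MAX_LABS_BUILDING  # builds get priority over demos
    return MAX_LABS_IDLE
```

A launch request is then just `len(active_labs) < lab_capacity(...)` before anything touches the Proxmox API.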
This is the difference between running a demo on AWS, where you just throw money at capacity, and running it on your own hardware, where every core matters. I don't get to autoscale. I get to be clever about scheduling.
Rate Limiting and Capacity
Beyond drone awareness, there's straightforward abuse prevention.
Max containers per IP. One active lab per IP address. You don't need two simultaneous terminals, and if you do, you're probably doing something I don't want you doing.
Max total containers. Hard cap on concurrent labs across all visitors. The exact number depends on available resources, but it's conservative. I'd rather tell someone "try again in a few minutes" than have 20 containers fighting for CPU and everyone getting a terrible experience.
Cooldown between launches. You can't rapid-fire container creation. There's a delay between when your lab expires (or you explicitly close it) and when you can launch a new one. This prevents the "spin up, break it, spin up another" loop from consuming all available container IDs.
The session manager tracks all of this. Every active lab has an entry with its container ID, IP address, template type, creation timestamp, and expiry time. It's a simple in-memory store backed by periodic state dumps; nothing fancy, because the data is inherently ephemeral anyway.
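Put together, the store and the checks above can be sketched like this (the cooldown value and field names are assumptions for illustration, and the periodic state dumps are omitted):

```python
from dataclasses import dataclass

COOLDOWN = 120.0  # seconds between launches from one IP (illustrative)

@dataclass
class Session:
    vmid: int
    ip: str
    template: str
    created: float
    expires: float

class SessionStore:
    """In-memory registry: one active lab per IP, plus a launch cooldown."""

    def __init__(self) -> None:
        self.active: dict[int, Session] = {}      # vmid -> session
        self.last_launch: dict[str, float] = {}   # ip -> last launch time

    def can_launch(self, ip: str, now: float) -> bool:
        if any(s.ip == ip for s in self.active.values()):
            return False                          # one lab per IP
        return now - self.last_launch.get(ip, float("-inf")) >= COOLDOWN

    def add(self, session: Session) -> None:
        self.active[session.vmid] = session
        self.last_launch[session.ip] = session.created

    def expire(self, vmid: int) -> None:
        self.active.pop(vmid, None)               # cooldown clock keeps running
```

Because the cooldown keys off the launch timestamp rather than the expiry, closing a lab early doesn't reset the clock.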
The WebSocket Terminal
This was the hardest part. By far.
The concept is simple: pipe terminal I/O between the browser and the container. xterm.js handles rendering in the browser. The Lab Engine handles the WebSocket server. Proxmox provides the serial console.
The reality is full of edge cases.
Terminal resize. When you resize your browser window, xterm.js sends new column/row dimensions. Those need to propagate through the WebSocket, through the Lab Engine, into the Proxmox console session, and into the container's TTY. If any link in that chain doesn't handle the resize event, you get text wrapping at the wrong column. Or commands that render incorrectly. Or vim that thinks your terminal is 80x24 when it's actually 200x50.
Getting resize handling right took more debugging than the entire Proxmox integration. The number of times I stared at a terminal that was almost right but had text overflowing by one column…
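One way to keep resize frames and keystrokes apart on a single WebSocket is a small control-frame convention. The JSON format below is an assumption about the wire protocol for illustration, not xterm.js's own:

```python
import json

def classify_message(raw: str):
    """Split incoming WebSocket traffic into resize frames and raw input.

    A resize frame (hypothetical format) looks like:
        {"type": "resize", "cols": 200, "rows": 50}
    Anything else is treated as keystrokes bound for the container's TTY.
    """
    try:
        msg = json.loads(raw)
        if isinstance(msg, dict) and msg.get("type") == "resize":
            return ("resize", (int(msg["cols"]), int(msg["rows"])))
    except (ValueError, KeyError, TypeError):
        pass  # not JSON, or malformed: fall through to plain input
    return ("input", raw)
```

On a "resize" result, the proxy pushes the new dimensions into the console session so the container's TTY (and vim, less, htop) see the real size; on "input", the bytes go straight through.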
Connection lifecycle. WebSocket connections drop. Browsers close. Networks hiccup. The Lab Engine needs to handle all of this gracefully: cleaning up sessions when the WebSocket dies, but not immediately destroying the container (in case the visitor refreshes). There's a grace period. If they reconnect within a few minutes, they get their same container back.
The encoding. Terminal output is bytes. WebSocket messages can be text or binary. xterm.js expects UTF-8 strings. Proxmox serial consoles output raw bytes. Getting the encoding translation right, especially with programs that emit ANSI escape sequences or non-ASCII characters, required more encode()/decode() debugging than I'd like to admit.
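The byte-to-string step is where chunk boundaries bite: a multi-byte UTF-8 character can be split across two reads from the console. An incremental decoder handles that. A sketch, with one decoder assumed per connection:

```python
import codecs

def make_console_decoder():
    """Build a per-connection translator from console bytes to UTF-8 strings.

    The incremental decoder buffers a partial multi-byte sequence at the end
    of a chunk instead of mangling it, and errors="replace" keeps stray bytes
    from killing the WebSocket with a UnicodeDecodeError.
    """
    decoder = codecs.getincrementaldecoder("utf-8")(errors="replace")

    def to_browser(chunk: bytes) -> str:
        return decoder.decode(chunk)

    return to_browser
```

Calling `bytes.decode()` on each chunk independently would corrupt any character unlucky enough to straddle a read boundary; the incremental decoder is the fix.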
But when it works? It works beautifully. You open a browser tab, click a button, and 10 seconds later you're in a Linux terminal. No SSH keys. No client software. No VPN. Just a browser.
Security
Giving strangers shell access to machines on my network requires some paranoia.
Isolation. Each lab is a separate LXC container. They can't see each other. They can't see the host. They can't see production infrastructure. The network is segmented: lab containers live on their own bridge with no routes to anything interesting.
Resource limits. Every container gets a fixed allocation of CPU cores, RAM, and disk. You can't fork-bomb your way into affecting other containers or the host. cgroups handle the enforcement. If you hit the limit, your processes get throttled or killed. Not my problem.
Ephemeral by design. Nothing persists. When the 60-minute timer fires, the container is stopped and destroyed. The backing storage is wiped. There's no "oops, we left a container running" scenario because the session manager enforces the TTL with prejudice.
No outbound network. Lab containers can't reach the internet. They can reach their own localhost and that's about it. This prevents them from being used as relay nodes, crypto miners, or anything else creative. You get a sandbox. That's it.
The Deployment
The Lab Engine itself runs as a Docker container on Altair-Link (10.42.0.199). It connects to the Proxmox API on Izar-Orchestrator (10.42.0.201) using a dedicated API token with limited permissions: it can create and destroy containers in a specific pool, and nothing else.
External access comes through the existing Traefik reverse proxy setup, through a Cloudflare tunnel, with Zero Trust rules controlling who can hit the API. The public-facing lab launcher on argobox.com talks to a Cloudflare-protected endpoint. The actual Proxmox API is never exposed.
```
Visitor → Cloudflare Tunnel → Traefik → Lab Engine (Altair-Link)
                                              │
                                              ▼
                              Proxmox API (Izar-Orchestrator)
                                              │
                                              ▼
                              LXC Container (ephemeral)
```
It's the same reverse proxy chain that serves the rest of argobox.com. No special networking required. Just another Docker container with a Traefik label.
Why Bother?
Honest answer? Because it's cool.
I have the hardware. I have the infrastructure. I have the Proxmox API sitting right there, waiting to be automated. The build swarm already proved that I can orchestrate containers programmatically. The Lab Engine is just the build swarm philosophy pointed outward โ instead of creating containers for me to compile packages, Iโm creating containers for you to play in.
But there's a practical angle too. Interactive demos are memorable. A recruiter who got a real shell on a real Linux box from a portfolio site is going to remember that. A hiring manager who ran docker build inside a nested container on someone's homelab is going to think about it. It's a conversation starter that a bullet point on a resume can never be.
And beyond the career angle, it's genuinely useful for teaching. Anyone can try Linux without installing anything. No WSL setup. No VM configuration. No "download this ISO and figure out VirtualBox." Click a button, get a terminal. That's it.
I keep coming back to the same thought: the infrastructure is already running. It's already paid for. It's already burning watts in my basement 24/7. Making it useful to other people, even temporarily, even for 60 minutes at a time, feels like the right thing to do with it.
The Stack
| Component | Technology |
|---|---|
| API Server | Python + FastAPI |
| Container Orchestration | proxmoxer (Proxmox VE API) |
| Terminal Proxy | WebSocket + xterm.js |
| Session Management | In-memory with state persistence |
| Containerization | Docker (Lab Engine itself) |
| Reverse Proxy | Traefik |
| External Access | Cloudflare Tunnel + Zero Trust |
| Host | Altair-Link (10.42.0.199) |
| Proxmox | Izar-Orchestrator (10.42.0.201) |
One repository. One Docker container. One API. Five lab templates. Sixty minutes per session. Unlimited potential for visitors to break things in ways I haven't imagined yet.
That last part is the fun part.
The Lab Engine is live on argobox.com. Go spin up a container. Break something. It'll be gone in an hour anyway.
Related Posts:
- 12 Years of Homelab Evolution: The infrastructure that makes this possible
- How I Solved Gentoo's 40-Hour Compile Problem: The build swarm that shares resources with the Lab Engine