Update January 2026: This experiment was the seed that grew into the Build Swarm. While I eventually hit the limits of K3s on low-memory hardware (the infamous OOM crisis of January 2026), the lessons learned here about declarative infrastructure were foundational. See How I Solved Gentoo’s Compile Problem for where this journey led.
The K3s Experiment
“How hard can Kubernetes be?”
These were my famous last words before diving into K3s — Rancher’s lightweight Kubernetes distribution. I had a vision: a self-healing, declarative infrastructure where I never had to SSH into a server to restart a service again.
I also had a constraint: modest hardware.
- Master Node: Altair-Link (Intel NUC, 16GB RAM)
- Worker Node: Arcturus-Prime (VM on Proxmox, 8GB RAM)
The Installation
K3s actually lives up to its “lightweight” promise during installation. It’s a single binary.
# On Master
curl -sfL https://get.k3s.io | sh -s - server \
--disable traefik \
--write-kubeconfig-mode 644
# On Worker
curl -sfL https://get.k3s.io | K3S_URL=https://10.42.0.199:6443 K3S_TOKEN=... sh -
Within 30 minutes, I had a functional cluster. kubectl get nodes showed two Ready nodes. I felt like a cloud commander.
The First Challenge: Database Migration
My first goal was to migrate OpenWebUI from a local SQLite file (which locks during concurrent writes) to a proper PostgreSQL database running in the cluster.
I wrote my first Deployment manifest. I learned about PersistentVolumeClaims (PVCs) the hard way — by restarting a pod and watching my data vanish.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: postgres-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
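For context, here is roughly how the Postgres Deployment consumes that claim. This is a minimal sketch rather than my actual manifest; the image tag, labels, and the postgres-secret name are placeholders.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres
spec:
  replicas: 1
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
        - name: postgres
          image: postgres:16           # pin whatever version you actually run
          ports:
            - containerPort: 5432
          envFrom:
            - secretRef:
                name: postgres-secret  # supplies POSTGRES_USER / POSTGRES_PASSWORD / POSTGRES_DB
          volumeMounts:
            - name: data
              mountPath: /var/lib/postgresql/data
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: postgres-pvc    # binds to the PVC above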
Once I bound the storage, I updated OpenWebUI’s environment variable:
DATABASE_URL: postgresql://user:pass@postgres-service:5432/openwebui
It worked. Suddenly, multiple users could chat with the AI model simultaneously without locks.
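The postgres-service hostname in that connection string isn't magic; it's a plain ClusterIP Service in front of the Postgres pod, resolved by cluster DNS. A minimal sketch, assuming the Deployment keeps an app: postgres label:

apiVersion: v1
kind: Service
metadata:
  name: postgres-service
spec:
  selector:
    app: postgres        # must match the Deployment's pod labels
  ports:
    - port: 5432
      targetPort: 5432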
The Networking Trap
Then came the URL drama.
OpenWebUI needs to talk to Ollama (my LLM runner).
- Attempt 1: http://localhost:11434 -> Failed. (Containers have their own localhost.)
- Attempt 2: http://10.42.0.100:11434 -> 404 Error. (Why?)
- Attempt 3: http://10.42.0.100:11434/api -> Success!
I spent 2 hours debugging a missing /api suffix. This taught me that in Kubernetes, networking is 90% of the battle.
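Looking back, the sturdier pattern is to put a Service in front of Ollama and let cluster DNS resolve it, instead of hard-coding an IP that can change out from under you. A sketch, assuming the Ollama pods are labeled app: ollama (the names here are illustrative, not my exact manifests):

apiVersion: v1
kind: Service
metadata:
  name: ollama-service
spec:
  selector:
    app: ollama          # must match the Ollama pod labels
  ports:
    - port: 11434
      targetPort: 11434
# OpenWebUI would then point at http://ollama-service:11434/api (same /api suffix, stable hostname)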
The Memory Leak (Foreshadowing)
Everything ran great for a month. But looking back at my logs, I see the warning signs.
Arcturus-Prime (the worker with 8GB RAM) was sitting at 85% memory usage.
K3s is lightweight, but the workloads aren’t. Java apps, databases, and AI inference engines eat RAM for breakfast.
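What I wasn't doing yet was setting memory requests and limits, so the scheduler had no idea how tight things were. This is the kind of block I should have added under each Deployment's container spec; the name, image, and values are illustrative, not what I actually ran:

# fragment of a pod template, under spec.template.spec
containers:
  - name: openwebui
    image: ghcr.io/open-webui/open-webui:main
    resources:
      requests:
        memory: "512Mi"   # what the scheduler reserves on the node
        cpu: "250m"
      limits:
        memory: "2Gi"     # the container gets OOM-killed if it grows past this
        cpu: "1"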
This experiment set the stage for the massive OOM Crisis of Jan 2026 (which deserves its own post), but in late 2023, I was blissfully happy with my new cluster.
Lessons Learned
- Latency Matters: Running a cluster on WiFi is a bad idea. (I moved to Ethernet immediately).
- Storage is Hard: Local path provisioning ties a pod to a specific node. If that node dies, the data is trapped. (See the PV excerpt after this list.)
- YAML is Verbose: I wrote 400 lines of config to replace one docker run command. But the declarative nature is worth it.
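That storage point is worth seeing concretely. The local-path provisioner that ships with K3s backs each PVC with a PersistentVolume pinned to a single node via nodeAffinity, roughly like this (an illustrative excerpt, not pulled from my cluster; the names are placeholders):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pvc-example
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /var/lib/rancher/k3s/storage/pvc-example
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - arcturus-prime   # the volume, and any pod using it, is stuck on this node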
Kubernetes is a beast, even the “lightweight” version. But it’s a beast worth taming.