# The Dashboard That Lied To Me
**Date:** 2026-01-30
**Issue:** Pipeline monitor showing fake data via `Math.random()`
**Root Cause:** Docker layer caching + placeholder code that never got replaced
**Lesson:** Never trust metrics that look "too dynamic"
## The Setup
I’ve been building out a NOC-style monitoring dashboard for the build swarm. Real-time log aggregation from Loki. Pipeline flow visualization. Storage metrics. Network throughput. The whole works.
Today’s task: add a homelab infrastructure overview showing all 14 systems across both networks. Standard stuff.
Then I noticed something odd about the storage panel.
## The Discovery
Primary orchestrator: 141 GB used, 6012 production binaries.

*Refresh.*

Primary orchestrator: 50 GB used, 5285 production binaries.

*Refresh.*

Primary orchestrator: 380 GB used, 14892 production binaries.
The numbers were completely different every single time. Storage doesn’t work like that. Disk usage doesn’t fluctuate by 300 GB between page loads.
I opened the source and found this gem:
```javascript
function renderStorage(data) {
    const total = 500; // GB
    const used = Math.random() * 400; // 💀
    const staged = Math.floor(Math.random() * 5000); // 💀
    const production = Math.floor(Math.random() * 15000); // 💀
    // ...
}
```
The entire storage panel was `Math.random()`. Every metric. Every number. Pure fiction.
And it had been like this for… I don’t even want to know how long.
## The Network Stats Were Also Fake
Of course they were:
```javascript
function renderNetworkStats() {
    const txRate = (Math.random() * 100).toFixed(2); // 💀
    const rxRate = (Math.random() * 100).toFixed(2); // 💀
    // ...
}
```
My “real-time network monitoring” was generating random numbers between 0 and 100 MB/s. Completely meaningless. The dashboard looked great though. Very convincing random data.
## "But I Already Fixed That"
Here’s where it gets good.
I immediately wrote proper API endpoints: `/api/storage` queries the orchestrators for real disk usage, and `/api/network` queries Prometheus for actual node_exporter metrics. Beautiful.
Deployed. Rebuilt the Docker image. Restarted the container. Refreshed the browser.
Numbers still random.
Rebuilt again. Restarted again. Cleared browser cache. Hard refresh.
Still random.
Took a screenshot. Refreshed. Took another screenshot.
Different random numbers in each screenshot. `Math.random()` was definitely still running.
```shell
# What I was running:
docker build -t build-monitor:latest .
docker stop build-monitor && docker rm build-monitor
docker run -d --name build-monitor ...
```
The problem? Docker layer caching. The template file was in a cached layer from a previous build. My changes weren’t actually in the image, even though I “rebuilt” it.
## The Actual Fix
```shell
# The magic flag I forgot existed:
docker build --no-cache -t build-monitor:latest .
```
Then I verified that the changes had actually deployed:
```shell
# Check new function exists
docker exec build-monitor grep -c "renderStorageReal" /app/templates/advanced-monitor.html
# Output: 2 ✅

# Check Math.random is GONE
docker exec build-monitor grep -c "Math.random" /app/templates/advanced-monitor.html
# Output: 0 ✅
```
After a hard browser refresh (Ctrl+Shift+R), the metrics finally showed real data:
- Primary: 76.3% used (44.96 GB / 58.88 GB)
- Same numbers on every refresh
Revolutionary concept.
## The Real APIs
Now the dashboard actually queries real data sources:
**Storage Metrics** (`/api/storage`):
```python
def get_storage_metrics(self):
    import shutil
    stats = shutil.disk_usage(BINHOST_PATH)
    return {
        'total_gb': round(stats.total / (1024**3), 2),
        'used_gb': round(stats.used / (1024**3), 2),
        'free_gb': round(stats.free / (1024**3), 2),
        'binaries_production': count_packages(BINHOST_PATH),
        'binaries_staged': count_packages(STAGING_PATH)
    }
```
**Network Metrics** (`/api/network`):
```
# Queries Prometheus node_exporter:
# rate(node_network_transmit_bytes_total{instance="IP:9100"}[5m])
# rate(node_network_receive_bytes_total{instance="IP:9100"}[5m])
```
The frontend now fetches from these APIs every 5 seconds instead of generating random numbers.
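For context, the backend side of those PromQL queries could look roughly like this; the Prometheus address, function names, and use of the instant-query endpoint are all assumptions for illustration:

```python
import json
import urllib.parse
import urllib.request

PROMETHEUS_URL = "http://10.42.0.201:9090"  # assumed Prometheus address

def network_rate_query(instance, direction="transmit", window="5m"):
    """Build the PromQL rate() expression for one node_exporter instance."""
    return (f'rate(node_network_{direction}_bytes_total'
            f'{{instance="{instance}"}}[{window}])')

def query_prometheus(expr):
    """Run an instant query against Prometheus' HTTP API (/api/v1/query)."""
    url = (f"{PROMETHEUS_URL}/api/v1/query?"
           + urllib.parse.urlencode({"query": expr}))
    with urllib.request.urlopen(url, timeout=5) as resp:
        payload = json.load(resp)
    # payload["data"]["result"] is a list of {metric, value} samples
    return payload["data"]["result"]
```

A call like `query_prometheus(network_rate_query("10.42.0.201:9100"))` would return per-interface transmit rates in bytes per second, ready for the dashboard to format.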
## What Actually Got Built Today
Before the fake data discovery derailed everything, I did add some real features:
**Pipeline Flow Monitor:**
- Real-time log aggregation from Loki
- Severity classification (critical, error, warning)
- Color-coded entries with timestamps
- Auto-refresh every 10 seconds
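The severity classification above is essentially keyword matching on the log line; a minimal sketch, where the keyword lists and function name are assumptions rather than the actual implementation:

```python
def classify_severity(line):
    """Map a raw log line to a severity bucket by keyword (assumed lists)."""
    lowered = line.lower()
    if any(k in lowered for k in ("panic", "fatal", "critical")):
        return "critical"
    if any(k in lowered for k in ("error", "err:", "failed")):
        return "error"
    if any(k in lowered for k in ("warn", "deprecated")):
        return "warning"
    return "info"
```

Checking critical keywords first matters: a line like "FATAL: build failed" should land in the critical bucket, not the error one.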
**Homelab Dashboard:**
- 14 systems across Jove (10.42.0.x) and Kronos (192.168.20.x) networks
- System cards categorized by type (workstation, hypervisor, NAS, build-drone)
- Quick actions (SSH copy-to-clipboard, web UI links)
- Per-system metrics when available
## The Backend Work
Had to add `/api/v1/storage` to the swarm orchestrator too. That endpoint didn't exist; the frontend was always going to show fake data because there was nowhere to get real data from.
```shell
# Deploy to orchestrator
scp bin/swarm-orchestrator [email protected]:/opt/build-swarm/bin/
ssh [email protected] 'rc-service swarm-orchestrator restart'

# Verify it works
curl http://10.42.0.201:8080/api/v1/storage | jq
```
Output:
```json
{
  "total_gb": 58.88,
  "used_gb": 44.96,
  "free_gb": 11.19,
  "binaries_production": 0,
  "binaries_staged": 0
}
```
Binary count shows 0 because I'm searching for `.tbz2` but the binhost uses `.gpkg.tar`. That's tomorrow's problem.
## Lessons Learned
1. **Always use `--no-cache` when debugging Docker deployments.** Layer caching will absolutely serve you old files while claiming the build succeeded.
2. **Test your monitoring with known values.** If I had checked whether a single number was accurate instead of admiring the pretty graphs, I would have caught this immediately.
3. **Placeholder code is technical debt.** `Math.random()` was probably meant to be temporary while the backend was built. It wasn't temporary.
4. **Random data looks surprisingly convincing.** Numbers between 0 and 100 MB/s for network traffic? Completely plausible. 50 to 400 GB disk usage? Also plausible. The only tell was the refresh behavior.
## Current Monitoring Stack
| Service | Port | Status |
|---|---|---|
| Portal | :8092/ | ✅ Working |
| Dashboard | :8092/dashboard | ✅ Working |
| Control | :8092/control | ✅ Working |
| Pipeline | :8092/pipeline | ✅ Working |
| Logs API | :8092/api/logs | ✅ Real Loki data |
| Storage API | :8092/api/storage | ✅ Real disk data |
| Network API | :8092/api/network | ⚠️ Needs node_exporters |
Six hours of work. Most of it spent rebuilding Docker images that weren’t actually different. And the whole time, the dashboard was confidently displaying random numbers as if they meant something.
I've added a comment in the code: `// NO MATH.RANDOM - EVER - THESE ARE REAL METRICS`
Future me will appreciate that.