Public Status Page
Real-time infrastructure status dashboard with 24-hour, 7-day, and 30-day uptime views, per-service monitoring, and incident timeline
The /status route serves a comprehensive, real-time infrastructure health dashboard visible to all visitors. It pulls data from Uptime Kuma via proxied API endpoints and persists historical data in the browser’s localStorage for long-term tracking beyond what the API provides.
Architecture
Browser (status page)
│
├── /api/uptime-kuma/status-page/public → Uptime Kuma config + heartbeats
├── /api/uptime-kuma/status-page/heartbeat/public → Per-monitor heartbeat lists
├── /api/kuma-history/history/hourly?days=N → Server-side hourly aggregates
├── /api/kuma-history/history/services?days=N → Per-service daily snapshots
└── /api/status/ai-services → Ollama status check
All Uptime Kuma data is proxied through Cloudflare Workers or the Astro dev server (configured in astro.config.mjs). The browser never connects to Uptime Kuma directly.
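A minimal sketch of how the page fetches seeded history through the proxy. Only the endpoint path is taken from this page; the `hour`/`uptime` response fields and the helper names are assumptions for illustration.

```typescript
// Assumed response shape for /api/kuma-history/history/hourly
interface HourlyPoint {
  hour: string;   // assumed: ISO hour bucket
  uptime: number; // assumed: 0-100 percentage
}

// Same-origin URL: the proxy (Cloudflare Workers or the Astro dev server)
// forwards this to Uptime Kuma, so the browser never contacts Kuma directly.
function hourlyHistoryUrl(days: number): string {
  return `/api/kuma-history/history/hourly?days=${days}`;
}

async function fetchHourlyHistory(days: number): Promise<HourlyPoint[]> {
  const res = await fetch(hourlyHistoryUrl(days));
  if (!res.ok) throw new Error(`kuma-history request failed: ${res.status}`);
  return res.json() as Promise<HourlyPoint[]>;
}
```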
Three Time Views
The status page has three tabbed views, each handled by its own Astro component:
24-Hour View (default)
Component: ContributionGrid.astro
- 288 cells (5-minute intervals over 24 hours) displayed as a horizontal timeline bar
- Color levels: level-0 (gray/no data), level-1 (red/down), level-2 (amber/degraded), level-3 (light green/95%+), level-4 (bold green/99.5%+)
- Stats: average uptime, best streak, total incidents
- Seeds from `kuma-history/hourly?days=2` to fill gaps from when the browser was closed
- localStorage key: `Arcturus-Prime-uptime-5min-v3` (retains 31 days of 5-minute slots)
- Refreshes every 5 minutes
Also shown in the 24-hour view:
- AIServiceStatus — Ollama online/offline, model count, latency
- ServiceDashboard — All infrastructure monitors grouped by category with 24-hour uptime bars
- TimelineView — Recent incident timeline
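The 288-cell bucketing above can be sketched as plain slot arithmetic. This is a sketch of the math only (the real component's rendering and localStorage handling are not shown); the function names are assumptions.

```typescript
const SLOT_MS = 5 * 60 * 1000;       // one cell = 5 minutes
const SLOTS_PER_DAY = (24 * 60) / 5; // 288 cells over 24 hours

// Floor a timestamp to the start of its 5-minute slot.
function slotStart(ts: number): number {
  return Math.floor(ts / SLOT_MS) * SLOT_MS;
}

// Index of a timestamp's cell within a day beginning at dayStart.
function slotIndex(ts: number, dayStart: number): number {
  return Math.floor((slotStart(ts) - dayStart) / SLOT_MS);
}
```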
7-Day View
Component: WeekView.astro
- 7 rows (one per day), each with 24 hourly cells
- Aggregates 5-min localStorage data into hourly buckets per day
- Per-service weekly status table below the grid (daily uptime cells + 7-day average)
- Seeds from `kuma-history/hourly?days=8` and `kuma-history/services?days=8`
- Weekly service snapshots stored in localStorage key: `Arcturus-Prime-weekly-service-v1`
- Stats: 7-day average, best day, total incident hours
- Refreshes every 5 minutes
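The hourly aggregation step can be sketched as folding each day's 288 five-minute slots into 24 hourly averages. A sketch only, assuming slots are stored as uptime percentages with `null` marking missing data:

```typescript
// Fold 288 five-minute slot uptimes into 24 hourly averages for one day.
// Slots with no data (null) are skipped; an hour with no data stays null.
function hourlyBuckets(slots: (number | null)[]): (number | null)[] {
  const hours: (number | null)[] = [];
  for (let h = 0; h < 24; h++) {
    const chunk = slots
      .slice(h * 12, h * 12 + 12) // 12 five-minute slots per hour
      .filter((s): s is number => s !== null);
    hours.push(chunk.length ? chunk.reduce((a, b) => a + b, 0) / chunk.length : null);
  }
  return hours;
}
```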
30-Day View
Component: MonthView.astro
- 30 calendar-style cells (one per day) with date numbers and uptime percentages
- Aggregates all 5-min localStorage slots for each day into daily summaries
- Per-service monthly horizontal bar chart with uptime percentages
- Seeds from `kuma-history/hourly?days=31` to backfill data
- Today highlighted with a purple outline
- Stats: 30-day average, best streak (consecutive days at 99%+), incident days, days tracked
- Refreshes every 10 minutes
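The "best streak" stat above (consecutive days at 99%+) can be sketched as a single pass over the daily summaries. The data shape is an assumption: daily uptime percentages with `null` for untracked days.

```typescript
// Longest run of consecutive tracked days at or above 99% uptime.
function bestStreak(days: (number | null)[]): number {
  let best = 0;
  let run = 0;
  for (const d of days) {
    run = d !== null && d >= 99 ? run + 1 : 0; // any miss resets the run
    if (run > best) best = run;
  }
  return best;
}
```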
Data Persistence Strategy
The status page uses a layered data strategy:
1. Server-side seeding — On load, fetches hourly aggregates from the `kuma-history` API (up to 31 days). This fills gaps from periods when the browser wasn’t open.
2. Live heartbeat data — Fetches real-time 5-minute heartbeats from Uptime Kuma for the current window. This provides the highest granularity.
3. localStorage persistence — All data is merged into localStorage with 31-day retention. Newer data from live heartbeats takes precedence over server-side seeds for overlapping time periods.
4. Aggregate gap-filling — For 5-minute slots with no data, the 24-hour aggregate uptime percentage from Uptime Kuma is used to infer status, preventing misleading gray bars.
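The merge precedence in the strategy above can be sketched in a few lines, assuming slots are keyed by their start timestamp (the `SlotMap` shape is an assumption):

```typescript
// Slot start timestamp -> uptime percentage for that 5-minute slot.
type SlotMap = Record<number, number>;

// Server-side seeds fill in first; live heartbeats overwrite any
// overlapping slots, since spread order gives later sources precedence.
function mergeSlots(seeded: SlotMap, live: SlotMap): SlotMap {
  return { ...seeded, ...live };
}
```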
ServiceDashboard Component
Component: ServiceDashboard.astro
Shows all Uptime Kuma monitors grouped by category with real-time status indicators and 24-hour uptime bars.
Monitor Groups and Icons
| Group | Icon | Notes |
|---|---|---|
| Public Services | Globe | User-facing services |
| Internal Services | Wrench | Backend infrastructure |
| Hypervisors | Monitor | Proxmox hosts |
| NAS Storage | Disk | Storage arrays |
| Network Infrastructure | Globe | Routers, switches |
| Media Services | TV | Plex, Tautulli |
| Workstations | Laptop | Collapsed by default |
| Build Swarm - Drones | Robot | Collapsed by default |
| Build Swarm - Orchestrators | Target | Collapsed by default |
| Build Swarm - Gateway | Door | Collapsed by default |
Uptime Calculation
The overall uptime percentage (displayed in UptimeHero) excludes:
- Workstations (personal machines, not end-user-impacting)
- Build Swarm - Orchestrators (internal build infrastructure)
- Decommissioned monitors (NAS: Rigel, NAS: Mobius, etc.)
- Hidden monitors (Vault/Bitwarden — security-sensitive)
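The exclusion rules above amount to a simple filter. The group and name lists below are copied from this page (the decommissioned and hidden lists are partial, per the "etc." above); the `Monitor` shape is an assumption.

```typescript
const EXCLUDED_GROUPS = ["Workstations", "Build Swarm - Orchestrators"];
const DECOM_NAMES = ["NAS: Rigel", "NAS: Mobius"]; // partial list
const HIDDEN_NAMES = ["Vault", "Bitwarden"];       // partial list

interface Monitor {
  name: string;
  group: string;
}

// Whether a monitor counts toward the overall uptime % shown in UptimeHero.
function countsTowardUptime(m: Monitor): boolean {
  return (
    !EXCLUDED_GROUPS.includes(m.group) &&
    !DECOM_NAMES.includes(m.name) &&
    !HIDDEN_NAMES.includes(m.name)
  );
}
```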
Galactic Identity Sanitization
All monitor names are sanitized through nameOverrides maps that apply the Galactic Identity System (see CLAUDE.md). Real hostnames and internal naming are never exposed to the public.
Key mappings:
- `drone-Tau-Host` → `Drone: Tau`
- `Proxmox: Tarn-Host` → `Hypervisor: Tarn`
- `NAS: Meridian-Host` → `NAS: Meridian`
- `Workstation: Capella-Outpost` → `Workstation: Capella`
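Applied at render time, the override map is a straight lookup. The mappings are from this page; the fallback behavior (unmapped names pass through unchanged) is an assumption.

```typescript
// Galactic Identity sanitization: real hostnames -> public display names.
const nameOverrides: Record<string, string> = {
  "drone-Tau-Host": "Drone: Tau",
  "Proxmox: Tarn-Host": "Hypervisor: Tarn",
  "NAS: Meridian-Host": "NAS: Meridian",
  "Workstation: Capella-Outpost": "Workstation: Capella",
};

function displayName(raw: string): string {
  return nameOverrides[raw] ?? raw; // assumed: unmapped names shown as-is
}
```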
Collapsed Groups
Workstations and all Build Swarm groups are collapsed by default on the public page. They are still visible and expandable, but don’t clutter the initial view since they don’t impact end-user services.
Consistency Requirements
Three components share uptime filtering logic and must stay in sync:
| Constant | ContributionGrid | WeekView | ServiceDashboard |
|---|---|---|---|
| `EXCLUDED_GROUPS` | Workstations, Build Swarm - Orchestrators | Workstations, Build Swarm - Orchestrators | `uptimeExcludedGroups` (same) |
| `DECOM_NAMES` | Mobius-Silo, Mobius, NAS: Synology Milky Way, NAS: Milky Way, NAS: Rigel, NAS: Mobius | Same | `statusOverrides` (same effect) |
| `HIDDEN_NAMES` | Vault, Bitwarden, Vaultwarden, Password Manager | Same | `hiddenMonitors` (same) |
| `nameOverrides` | N/A | Full map | Full map |
When adding or decommissioning monitors, update all three components.
Color Thresholds
| Level | Color | Meaning | Threshold |
|---|---|---|---|
| 4 | Bold green | Fully operational | >= 99.5% |
| 3 | Light green | Minor issues | >= 95% |
| 2 | Amber | Degraded | > 0%, or degraded status |
| 1 | Red | Down | down status |
| 0 | Gray | No data | No data available |
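The threshold table maps directly to a small classifier. A sketch under assumed inputs (uptime as a percentage, with an optional status string; the exact signals the components use may differ):

```typescript
// Map a cell's uptime/status to its color level (see threshold table).
function cellLevel(uptime: number | null, status?: "down" | "degraded"): number {
  if (uptime === null) return 0;      // level-0: gray, no data
  if (status === "down") return 1;    // level-1: red, down
  if (uptime >= 99.5) return 4;       // level-4: bold green, fully operational
  if (uptime >= 95) return 3;         // level-3: light green, minor issues
  if (uptime > 0 || status === "degraded") return 2; // level-2: amber
  return 1;                           // 0% uptime with data: treated as down
}
```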
Key Files
| File | Purpose |
|---|---|
| `src/pages/status.astro` | Main status page with tab switching |
| `src/components/status/ContributionGrid.astro` | 24-hour 5-minute timeline bar |
| `src/components/status/WeekView.astro` | 7-day hourly grid + per-service weekly table |
| `src/components/status/MonthView.astro` | 30-day calendar grid + per-service monthly chart |
| `src/components/status/ServiceDashboard.astro` | All monitors grouped with uptime bars |
| `src/components/status/UptimeHero.astro` | Hero section with overall uptime % |
| `src/components/status/TimelineView.astro` | Recent incident timeline |
| `src/components/status/AIServiceStatus.astro` | Ollama status widget |
| `src/components/status/ResponseTimeChart.astro` | Response time visualization |
View Transitions
All components use astro:page-load (NOT DOMContentLoaded) for initialization, which is required for the Astro Client Router / View Transitions. Each component also registers cleanup via astro:before-swap to clear intervals when navigating away.
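The init/cleanup pattern can be sketched as below. The event names (`astro:page-load`, `astro:before-swap`) are the real Astro lifecycle events referenced above; the helper names and the refresh body are assumptions.

```typescript
let refreshTimer: ReturnType<typeof setInterval> | undefined;

function isRunning(): boolean {
  return refreshTimer !== undefined;
}

function startRefresh(intervalMs: number, tick: () => void): void {
  stopRefresh(); // avoid stacking timers when re-entering the page
  refreshTimer = setInterval(tick, intervalMs);
}

function stopRefresh(): void {
  if (refreshTimer !== undefined) {
    clearInterval(refreshTimer);
    refreshTimer = undefined;
  }
}

// Guarded so the sketch also loads outside a browser environment.
if (typeof document !== "undefined") {
  // Fires on initial load AND after every client-side navigation,
  // unlike DOMContentLoaded, which only fires once.
  document.addEventListener("astro:page-load", () => {
    startRefresh(5 * 60 * 1000, () => {
      /* re-fetch heartbeats and re-render cells (component-specific) */
    });
  });
  // Clear the interval before the view transition swaps the DOM out.
  document.addEventListener("astro:before-swap", stopRefresh);
}
```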