Public Status Page

Real-time infrastructure status dashboard with 24-hour, 7-day, and 30-day uptime views, per-service monitoring, and incident timeline

February 28, 2026

The /status route serves a comprehensive, real-time infrastructure health dashboard visible to all visitors. It pulls data from Uptime Kuma via proxied API endpoints and persists historical data in the browser’s localStorage for long-term tracking beyond what the API provides.

Architecture

Browser (status page)

  ├── /api/uptime-kuma/status-page/public          → Uptime Kuma config + heartbeats
  ├── /api/uptime-kuma/status-page/heartbeat/public → Per-monitor heartbeat lists
  ├── /api/kuma-history/history/hourly?days=N       → Server-side hourly aggregates
  ├── /api/kuma-history/history/services?days=N     → Per-service daily snapshots
  └── /api/status/ai-services                       → Ollama status check

All Uptime Kuma data is proxied through Cloudflare Workers or the Astro dev server (configured in astro.config.mjs). The browser never connects to Uptime Kuma directly.
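
As a sketch (the endpoint path comes from the diagram above; the helper names and response handling are illustrative assumptions, not the actual implementation), a browser-side call through the proxy might look like:

```typescript
// Hypothetical helper for the proxied history endpoint shown above.
// The browser only ever requests same-origin /api/* routes; the
// Cloudflare Worker (or Astro dev server) forwards them to Uptime Kuma.
function hourlyHistoryUrl(days: number): string {
  return `/api/kuma-history/history/hourly?days=${days}`;
}

async function fetchHourlyHistory(days: number): Promise<unknown> {
  const res = await fetch(hourlyHistoryUrl(days));
  if (!res.ok) throw new Error(`history request failed: ${res.status}`);
  return res.json(); // shape depends on the kuma-history service
}
```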

Three Time Views

The status page has three tabbed views, each handled by its own Astro component:

24-Hour View (default)

Component: ContributionGrid.astro

  • 288 cells (5-minute intervals over 24 hours) displayed as a horizontal timeline bar
  • Color levels: level-0 (gray/no data), level-1 (red/down), level-2 (amber/degraded), level-3 (light green/95%+), level-4 (bold green/99.5%+)
  • Stats: average uptime, best streak, total incidents
  • Seeds from kuma-history/hourly?days=2 to fill gaps from periods when the browser was closed
  • localStorage key: Arcturus-Prime-uptime-5min-v3 (retains 31 days of 5-min slots)
  • Refreshes every 5 minutes
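
The 288-cell layout implies simple slot math: 24 hours × 12 five-minute slots per hour. A minimal sketch (the function name is hypothetical):

```typescript
// 288 five-minute slots per day: slot 0 is 00:00-00:05 local time,
// slot 287 is 23:55-24:00.
const SLOT_MS = 5 * 60 * 1000;
const SLOTS_PER_DAY = 288;

// Map a timestamp to its 5-minute slot index within the day (0..287).
function slotIndex(date: Date): number {
  const msIntoDay =
    date.getHours() * 3_600_000 +
    date.getMinutes() * 60_000 +
    date.getSeconds() * 1000;
  return Math.floor(msIntoDay / SLOT_MS);
}
```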

Also shown in the 24-hour view:

  • AIServiceStatus — Ollama online/offline, model count, latency
  • ServiceDashboard — All infrastructure monitors grouped by category with 24-hour uptime bars
  • TimelineView — Recent incident timeline

7-Day View

Component: WeekView.astro

  • 7 rows (one per day), each with 24 hourly cells
  • Aggregates 5-min localStorage data into hourly buckets per day
  • Per-service weekly status table below the grid (daily uptime cells + 7-day average)
  • Seeds from kuma-history/hourly?days=8 and kuma-history/services?days=8
  • Weekly service snapshots stored in localStorage key: Arcturus-Prime-weekly-service-v1
  • Stats: 7-day average, best day, total incident hours
  • Refreshes every 5 minutes
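
The hourly aggregation described above can be sketched as follows (a minimal illustration, assuming each day's localStorage data is an array of 288 uptime percentages with null for missing slots):

```typescript
// Collapse one day of 5-minute uptime samples (percent, or null for
// "no data") into 24 hourly averages, as the 7-day grid might do.
function toHourlyBuckets(slots: (number | null)[]): (number | null)[] {
  const hours: (number | null)[] = [];
  for (let h = 0; h < 24; h++) {
    // 12 five-minute slots per hour; ignore empty slots in the average.
    const chunk = slots
      .slice(h * 12, h * 12 + 12)
      .filter((v): v is number => v !== null);
    hours.push(
      chunk.length ? chunk.reduce((a, b) => a + b, 0) / chunk.length : null
    );
  }
  return hours;
}
```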

30-Day View

Component: MonthView.astro

  • 30 calendar-style cells (one per day) with date numbers and uptime percentages
  • Aggregates all 5-min localStorage slots for each day into daily summaries
  • Per-service monthly horizontal bar chart with uptime percentages
  • Seeds from kuma-history/hourly?days=31 to backfill data
  • Today highlighted with purple outline
  • Stats: 30-day average, best streak (consecutive days at 99%+), incident days, days tracked
  • Refreshes every 10 minutes
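
The "best streak" stat (consecutive days at 99%+) reduces to a single pass over the daily summaries; a sketch under the assumption that missing days break the streak:

```typescript
// Longest run of consecutive days at or above 99% uptime.
// null means "no data for that day" and resets the run.
function bestStreak(dailyUptime: (number | null)[]): number {
  let best = 0;
  let run = 0;
  for (const day of dailyUptime) {
    run = day !== null && day >= 99 ? run + 1 : 0;
    best = Math.max(best, run);
  }
  return best;
}
```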

Data Persistence Strategy

The status page uses a layered data strategy:

  1. Server-side seeding — On load, fetches hourly aggregates from kuma-history API (up to 31 days). This fills gaps from periods when the browser wasn’t open.

  2. Live heartbeat data — Fetches real-time 5-minute heartbeats from Uptime Kuma for the current window. This provides the highest granularity.

  3. localStorage persistence — All data is merged into localStorage with 31-day retention. Newer data from live heartbeats takes precedence over server-side seeds for overlapping time periods.

  4. Aggregate gap-filling — For 5-minute slots with no data, the 24-hour aggregate uptime percentage from Uptime Kuma is used to infer status, preventing misleading gray bars.
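
Steps 3 and 4 above can be sketched as two small pure functions (names and the keyed-slot representation are assumptions for illustration):

```typescript
// Step 3: merge seeded and live data; spread order makes live entries
// overwrite server-side seeds for overlapping slots.
type SlotMap = Record<string, number>;

function mergeSlots(seeded: SlotMap, live: SlotMap): SlotMap {
  return { ...seeded, ...live };
}

// Step 4: fill slots that still have no data from the 24-hour aggregate
// uptime percentage, so gaps don't render as misleading gray cells.
function fillGaps(slots: (number | null)[], aggregateUptime: number): number[] {
  return slots.map((v) => v ?? aggregateUptime);
}
```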

ServiceDashboard Component

Component: ServiceDashboard.astro

Shows all Uptime Kuma monitors grouped by category with real-time status indicators and 24-hour uptime bars.

Monitor Groups and Icons

| Group | Icon | Notes |
|---|---|---|
| Public Services | Globe | User-facing services |
| Internal Services | Wrench | Backend infrastructure |
| Hypervisors | Monitor | Proxmox hosts |
| NAS Storage | Disk | Storage arrays |
| Network Infrastructure | Globe | Routers, switches |
| Media Services | TV | Plex, Tautulli |
| Workstations | Laptop | Collapsed by default |
| Build Swarm - Drones | Robot | Collapsed by default |
| Build Swarm - Orchestrators | Target | Collapsed by default |
| Build Swarm - Gateway | Door | Collapsed by default |

Uptime Calculation

The overall uptime percentage (displayed in UptimeHero) excludes:

  • Workstations (personal machines, not end-user-impacting)
  • Build Swarm - Orchestrators (internal build infrastructure)
  • Decommissioned monitors (NAS: Rigel, NAS: Mobius, etc.)
  • Hidden monitors (Vault/Bitwarden — security-sensitive)
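
The exclusion rules above amount to a filter before averaging. A minimal sketch (the Monitor shape and function name are assumptions; the constant names mirror the ones documented under Consistency Requirements, with the decommissioned list abbreviated):

```typescript
const EXCLUDED_GROUPS = ["Workstations", "Build Swarm - Orchestrators"];
const DECOM_NAMES = ["NAS: Rigel", "NAS: Mobius"]; // abbreviated here
const HIDDEN_NAMES = ["Vault", "Bitwarden", "Vaultwarden", "Password Manager"];

interface Monitor {
  name: string;
  group: string;
  uptime24h: number; // percent
}

// Average uptime across only the monitors that count toward the
// public figure; null when nothing qualifies.
function overallUptime(monitors: Monitor[]): number | null {
  const counted = monitors.filter(
    (m) =>
      !EXCLUDED_GROUPS.includes(m.group) &&
      !DECOM_NAMES.includes(m.name) &&
      !HIDDEN_NAMES.includes(m.name)
  );
  if (counted.length === 0) return null;
  return counted.reduce((sum, m) => sum + m.uptime24h, 0) / counted.length;
}
```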

Galactic Identity Sanitization

All monitor names are sanitized through nameOverrides maps that apply the Galactic Identity System (see CLAUDE.md). Real hostnames and internal naming are never exposed to the public.

Key mappings:

  • drone-Tau-Host → Drone: Tau
  • Proxmox: Tarn-Host → Hypervisor: Tarn
  • NAS: Meridian-Host → NAS: Meridian
  • Workstation: Capella-Outpost → Workstation: Capella
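
Applying the map is a plain lookup; a sketch using the mappings listed above (the fallback to the raw name is a simplification for illustration — the real components may hide unmapped monitors instead):

```typescript
// Galactic Identity sanitization: raw monitor name -> public name.
const nameOverrides: Record<string, string> = {
  "drone-Tau-Host": "Drone: Tau",
  "Proxmox: Tarn-Host": "Hypervisor: Tarn",
  "NAS: Meridian-Host": "NAS: Meridian",
  "Workstation: Capella-Outpost": "Workstation: Capella",
};

function publicName(rawName: string): string {
  return nameOverrides[rawName] ?? rawName;
}
```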

Collapsed Groups

Workstations and all Build Swarm groups are collapsed by default on the public page. They are still visible and expandable, but don’t clutter the initial view since they don’t impact end-user services.

Consistency Requirements

Three components share uptime filtering logic and must stay in sync:

| Constant | ContributionGrid | WeekView | ServiceDashboard |
|---|---|---|---|
| EXCLUDED_GROUPS | Workstations, Build Swarm - Orchestrators | Workstations, Build Swarm - Orchestrators | uptimeExcludedGroups (same) |
| DECOM_NAMES | Mobius-Silo, Mobius, NAS: Synology Milky Way, NAS: Milky Way, NAS: Rigel, NAS: Mobius | Same | statusOverrides (same effect) |
| HIDDEN_NAMES | Vault, Bitwarden, Vaultwarden, Password Manager | Same | hiddenMonitors (same) |
| nameOverrides | N/A | Full map | Full map |

When adding or decommissioning monitors, update all three components.

Color Thresholds

| Level | Color | Meaning | Threshold |
|---|---|---|---|
| 4 | Bold green | Fully operational | >= 99.5% |
| 3 | Light green | Minor issues | >= 95% |
| 2 | Amber | Degraded | > 0%, or degraded status |
| 1 | Red | Down | down status |
| 0 | Gray | No data | No data available |
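
The table translates directly into a threshold function; a sketch (the function and type names are hypothetical, and the mapping of a 0% sample to level 1 is an assumption consistent with the table):

```typescript
type Status = "up" | "down" | "degraded";

// Map an uptime percentage (null = no data) plus an optional explicit
// status onto the five color levels from the table above.
function colorLevel(uptime: number | null, status?: Status): 0 | 1 | 2 | 3 | 4 {
  if (status === "down") return 1;      // level 1: down status
  if (status === "degraded") return 2;  // level 2: degraded status
  if (uptime === null) return 0;        // level 0: no data
  if (uptime >= 99.5) return 4;         // level 4: fully operational
  if (uptime >= 95) return 3;           // level 3: minor issues
  if (uptime > 0) return 2;             // level 2: degraded
  return 1;                             // 0% uptime: treat as down
}
```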

Key Files

| File | Purpose |
|---|---|
| src/pages/status.astro | Main status page with tab switching |
| src/components/status/ContributionGrid.astro | 24-hour 5-min timeline bar |
| src/components/status/WeekView.astro | 7-day hourly grid + per-service weekly |
| src/components/status/MonthView.astro | 30-day calendar grid + per-service monthly |
| src/components/status/ServiceDashboard.astro | All monitors grouped with uptime bars |
| src/components/status/UptimeHero.astro | Hero section with overall uptime % |
| src/components/status/TimelineView.astro | Recent incident timeline |
| src/components/status/AIServiceStatus.astro | Ollama status widget |
| src/components/status/ResponseTimeChart.astro | Response time visualization |

View Transitions

All components initialize on astro:page-load (NOT DOMContentLoaded), which is required under the Astro Client Router / View Transitions: DOMContentLoaded fires only on a full page load, not on client-side navigations. Each component also registers cleanup via astro:before-swap to clear its refresh intervals when navigating away.
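
The init/cleanup pattern can be sketched as below. The event target is injectable here only to keep the sketch self-contained and testable; the real components would listen on `document`, and the function name is hypothetical:

```typescript
// Wire a status widget's lifecycle: start polling on astro:page-load,
// clear the interval on astro:before-swap so client-side navigation
// under the Astro Client Router doesn't leak timers.
function wireStatusLifecycle(
  target: EventTarget,
  refresh: () => void,
  intervalMs = 5 * 60 * 1000 // the 24-hour view's 5-minute refresh
): void {
  let timer: ReturnType<typeof setInterval> | undefined;

  target.addEventListener("astro:page-load", () => {
    refresh(); // paint immediately, then poll
    timer = setInterval(refresh, intervalMs);
  });

  target.addEventListener("astro:before-swap", () => {
    if (timer !== undefined) clearInterval(timer);
    timer = undefined;
  });
}
```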

Tags: status, uptime, monitoring, uptime-kuma, infrastructure