
Deployment Pipeline

How Arcturus-Prime code gets to production - CI/CD pipeline, build swarm deployment, secrets management, and fallback patterns

February 23, 2026

Arcturus-Prime has three separate deployment pipelines for three targets. The main site flows through Gitea to Cloudflare Pages. The command center deploys via rsync and Docker. The build swarm has its own orchestration layer. Each pipeline is designed so a single git push or ./deploy.sh gets code to production without manual intervention.

Main Site: Arcturus-Prime.com

The Pipeline

┌───────────┐     ┌───────────┐     ┌───────────┐     ┌─────────────┐     ┌────────────┐
│  Local    │     │  Gitea    │     │  GitHub   │     │ Cloudflare  │     │ Cloudflare │
│  Dev      │────▶│  (origin) │────▶│  (mirror) │────▶│ Pages Build │────▶│   CDN      │
│           │     │           │     │           │     │             │     │  (Live)    │
└───────────┘     └───────────┘     └───────────┘     └─────────────┘     └────────────┘
   git push        git.Arcturus-     GitHub            npm run build       Global edge
                   Prime.com/        Actions           Astro 5.17          deployment
                   KeyArgo/          webhook           + Workers
                   Arcturus-Prime

Step by Step

1. Local development

# Dev server with hot reload
npm run dev
# Accessible at http://localhost:4321

# Type checking
npx astro check

# Production build test
npm run build

2. Push to Gitea

git add -A
git commit -m "description of changes"
git push origin main

The primary remote origin points to the self-hosted Gitea instance:

origin  https://git.Arcturus-Prime.com/KeyArgo/Arcturus-Prime.git (fetch)
origin  https://git.Arcturus-Prime.com/KeyArgo/Arcturus-Prime.git (push)

3. Gitea mirrors to GitHub

Gitea has a push mirror configured to github.com/KeyArgo/Arcturus-Prime. On every push to Gitea, it automatically syncs the branch to GitHub. The mirror runs on a short interval (configurable in Gitea repo settings under “Mirror Settings”).
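The same mirror can be set up programmatically through Gitea's push-mirror API. The sketch below only builds the request body; treat the endpoint path, field names, interval format, and the token value as assumptions to check against your Gitea version's API reference.

```javascript
// Hypothetical payload for Gitea's push-mirror endpoint
// (POST /api/v1/repos/{owner}/{repo}/push_mirrors). Field names and the
// Go-duration interval syntax are assumptions based on recent Gitea releases.
function pushMirrorPayload(remoteAddress, username, token, interval = '8h0m0s') {
  return {
    remote_address: remoteAddress, // the GitHub clone URL to mirror to
    remote_username: username,     // GitHub username
    remote_password: token,        // a GitHub personal access token (placeholder)
    interval,                      // how often Gitea re-syncs on its own
    sync_on_commit: true,          // also push immediately on every commit
  };
}

// Example: mirror this repo to its GitHub copy
const mirrorBody = pushMirrorPayload(
  'https://github.com/KeyArgo/Arcturus-Prime.git', 'KeyArgo', 'ghp_...'
);
```

POSTing that body with a Gitea API token reproduces what the "Mirror Settings" UI configures.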

4. Cloudflare Pages auto-build

Cloudflare Pages is connected to the GitHub repo. When the main branch receives a push, Pages triggers an automatic build:

Setting                   Value
Build command             npm run build
Build output directory    dist/
Node.js version           20.x
Framework preset          Astro
Root directory            /

The build takes approximately 2-3 minutes. Cloudflare deploys to its global CDN automatically on success. Failed builds do not affect the live site.

5. Preview deployments

Non-main branches get preview URLs automatically:

Branch: feature/new-dashboard
Preview: https://feature-new-dashboard.Arcturus-Prime.pages.dev

Useful for testing admin features without affecting production.
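The preview hostname is derived from the branch name. A rough sketch of the normalization Cloudflare appears to apply (lowercasing, collapsing `/` and other non-alphanumerics into hyphens); the exact rules are Cloudflare's, so verify edge cases before relying on a computed URL:

```javascript
// Approximate the branch -> preview-subdomain mapping. The normalization here
// is an illustrative guess, not Cloudflare's documented algorithm.
function previewUrl(branch, project = 'Arcturus-Prime') {
  const slug = branch
    .toLowerCase()
    .replace(/[^a-z0-9]+/g, '-')  // '/', '_', spaces etc. become hyphens
    .replace(/^-+|-+$/g, '');     // trim stray leading/trailing hyphens
  return `https://${slug}.${project}.pages.dev`;
}
```

For the branch from the example above, `previewUrl('feature/new-dashboard')` yields the preview URL shown.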

Build Configuration

The Astro build configuration in astro.config.mjs:

export default defineConfig({
  output: 'hybrid',           // Static + SSR
  adapter: cloudflare({
    platformProxy: {
      enabled: true,          // Local dev emulates Workers
    },
  }),
  integrations: [
    mdx(),
    tailwind(),
    sitemap(),
  ],
  site: 'https://Arcturus-Prime.com',
});

The hybrid output mode means:

  • Pages are static by default (pre-rendered at build time)
  • Pages/routes marked with export const prerender = false are SSR (run as Workers)
  • API routes in src/pages/api/ are always SSR
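For illustration, here is a minimal SSR route handler in the shape Astro expects. In a real file under src/pages/api/ the flag and function would be `export`ed; the route behavior and the env variable read here are just examples, not code from the repo:

```javascript
// Sketch of an SSR API route handler. In an actual Astro route file this would
// be `export const prerender = false;` and `export async function GET(...)`.
const prerender = false; // opt this route out of static pre-rendering

async function GET({ locals }) {
  // Workers runtime env, as exposed by the Cloudflare adapter
  const env = locals?.runtime?.env ?? {};
  const body = JSON.stringify({ ok: true, admins: env.ADMIN_EMAILS ?? null });
  return new Response(body, {
    headers: { 'content-type': 'application/json' },
  });
}
```

Because the handler only touches `locals`, it can be exercised locally with a mock context before deploying.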

Command Center: status.Arcturus-Prime.com

The status monitor is a separate Flask application, not part of the Astro build.

Deployment Script

# From the status-monitor project directory
./deploy.sh

What deploy.sh does:

#!/bin/bash
# 1. Run tests
python -m pytest tests/ -q

# 2. Build Docker image
docker build -t Arcturus-Prime-status:latest .

# 3. Export image
docker save Arcturus-Prime-status:latest | gzip > Arcturus-Prime-status.tar.gz

# 4. Rsync to Altair-Link
rsync -avz Arcturus-Prime-status.tar.gz [email protected]:~/deployments/

# 5. SSH and restart
ssh [email protected] << 'EOF'
  cd ~/deployments
  docker load < Arcturus-Prime-status.tar.gz
  docker compose -f docker-compose.status.yml up -d --force-recreate
EOF

echo "Deployed to status.Arcturus-Prime.com"

The status app runs as a Docker container on Altair-Link (10.42.0.199), exposed through the Cloudflare Tunnel.

Build Swarm

The Gentoo build swarm has its own deployment and orchestration layer, separate from the website.

Architecture

┌───────────────────┐
│   Orchestrator    │
│   orch-Izar-Host  │
│   10.42.0.201     │
│ (VM on Izar-Host) │
└────────┬──────────┘
         │ distributes packages
    ┌────┴───────┬────────────┬──────────────┐
    │            │            │              │
┌───▼──────┐ ┌───▼─────┐ ┌────▼─────┐ ┌──────▼────────┐
│drone-    │ │drone-   │ │drone-    │ │drone-         │
│Izar-Host │ │Tau-Host │ │Tarn      │ │Meridian-Host  │
│10.42.0.  │ │10.42.0. │ │192.168.  │ │192.168.       │
│203       │ │175      │ │50.118    │ │50.110         │
│16 cores  │ │8 cores  │ │14 cores  │ │20 cores       │
└──────────┘ └─────────┘ └──────────┘ └───────────────┘
                                       (via Tailscale)
Component      Host                                    IP              Role
Orchestrator   orch-Izar-Host (VM on Izar-Host)        10.42.0.201     Package discovery, job distribution, state tracking
Gateway        Altair-Link                             10.42.0.199     Web UI, package repo serving, build triggers
Drone 1        drone-Izar-Host (VM on Izar-Host)       10.42.0.203     16-core build worker
Drone 2        drone-Tau-Host (Tau-Host)               10.42.0.175     8-core build worker
Drone 3        drone-Tarn (VM on Tarn-Host)            192.168.20.118  14-core build worker (Andromeda, via Tailscale)
Drone 4        drone-Meridian-Host (on Meridian-Host)  192.168.20.110  20-core build worker (Andromeda, via Tailscale)

Total build capacity: 58 cores across 4 drones spanning both networks.
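The capacity number falls straight out of the table; summing the per-drone core counts:

```javascript
// Per-drone core counts, taken from the swarm table above
const drones = [
  { name: 'drone-Izar-Host', cores: 16 },
  { name: 'drone-Tau-Host', cores: 8 },
  { name: 'drone-Tarn', cores: 14 },
  { name: 'drone-Meridian-Host', cores: 20 },
];

const totalCores = drones.reduce((sum, d) => sum + d.cores, 0);
console.log(totalCores); // 58
```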

Build Swarm Deployment

The build swarm components are deployed separately from the website:

# On the orchestrator (orch-Izar-Host)
cd /opt/build-swarm
git pull origin main
rc-service build-orchestrator restart

# On each drone
cd /opt/build-swarm
git pull origin main
rc-service build-drone restart

# On the gateway (Altair-Link)
cd /opt/build-swarm
git pull origin main
docker compose -f docker-compose.gateway.yml up -d --force-recreate

The build-swarm CLI tool handles most operational tasks. See the Build Swarm Handbook for details.

Secrets Management

Wrangler Secrets

Production secrets for Cloudflare Workers are managed through Wrangler:

# Set a secret
npx wrangler secret put SECRET_NAME

# List secrets (names only, values hidden)
npx wrangler secret list

# Delete a secret
npx wrangler secret delete SECRET_NAME

Secrets are encrypted at rest in Cloudflare’s infrastructure and injected into the Worker runtime as environment variables accessible via Astro.locals.runtime.env.

Environment Variables

The full list of environment variables used by the Arcturus-Prime Workers:

Authentication

Variable               Purpose                                     Example
ADMIN_EMAILS           Comma-separated admin email addresses       [email protected]
CF_ACCESS_TEAM_DOMAIN  Cloudflare Access team domain               Arcturus-Prime.cloudflareaccess.com
CF_ACCESS_AUD          Cloudflare Access application audience tag  abc123... (64-char hex)

AI Providers

Variable                      Purpose
OPENROUTER_API_KEY            OpenRouter API key (primary AI provider)
ANTHROPIC_API_KEY             Direct Anthropic/Claude API key
GOOGLE_GENERATIVE_AI_API_KEY  Google Gemini API key

Content Backend

Variable          Purpose
GITEA_API_URL     Gitea API base URL (https://git.Arcturus-Prime.com/api/v1)
GITEA_API_TOKEN   Gitea API token for content fetching
GITEA_REPO_OWNER  Repository owner (KeyArgo)
GITEA_REPO_NAME   Repository name (Arcturus-Prime)

Infrastructure

Variable           Purpose
PROXMOX_API_URL    Proxmox API endpoint
PROXMOX_API_TOKEN  Proxmox API token
DOCKER_API_URL     Docker API endpoint on Altair-Link
STATUS_API_URL     Status monitor API endpoint
SWARM_API_URL      Build swarm orchestrator API

Cloudflare KV Bindings

Binding     Purpose
AI_CHAT_KV  AI conversation history storage
CACHE_KV    API response cache (system stats, container status)
SESSION_KV  User session data

KV bindings are configured in wrangler.toml:

[[kv_namespaces]]
binding = "AI_CHAT_KV"
id = "abc123..."
preview_id = "def456..."

[[kv_namespaces]]
binding = "CACHE_KV"
id = "ghi789..."
preview_id = "jkl012..."
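As an illustration of how a binding like CACHE_KV is typically used, here is a minimal read-through cache sketch against the Workers KV get/put shape. The function name, key, and TTL are made up; any object with the same get/put signature works, which also makes the pattern easy to test locally:

```javascript
// Read-through cache over a KV-style binding. `kv` is anything with the
// Workers KV shape: get(key) -> string|null, put(key, value, { expirationTtl }).
async function cachedJson(kv, key, ttlSeconds, loader) {
  const hit = await kv.get(key);
  if (hit !== null) return { value: JSON.parse(hit), source: 'cache' };
  const value = await loader();                 // e.g. a Proxmox or Docker API call
  await kv.put(key, JSON.stringify(value), { expirationTtl: ttlSeconds });
  return { value, source: 'live' };
}
```

With CACHE_KV bound in the Worker, something like `cachedJson(env.CACHE_KV, 'system-stats', 60, fetchStats)` would serve repeat requests from KV for a minute instead of hitting the homelab APIs each time.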

Local Development

For local development, secrets and bindings are stored in .dev.vars (gitignored):

# .dev.vars (never committed)
ADMIN_EMAILS=[email protected]
CF_ACCESS_TEAM_DOMAIN=Arcturus-Prime.cloudflareaccess.com
CF_ACCESS_AUD=abc123...
OPENROUTER_API_KEY=sk-or-...
ANTHROPIC_API_KEY=sk-ant-...
GITEA_API_URL=https://git.Arcturus-Prime.com/api/v1
GITEA_API_TOKEN=...

The Astro Cloudflare adapter’s platformProxy feature emulates the Workers runtime locally, including KV bindings.
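For reference, the file format is plain KEY=VALUE lines. The adapter parses it internally; the toy parser below is only to illustrate what the format carries (comments and blank lines are ignored, and everything after the first `=` belongs to the value):

```javascript
// Toy parser for the .dev.vars KEY=VALUE format, for illustration only.
function parseDevVars(text) {
  const env = {};
  for (const line of text.split('\n')) {
    const trimmed = line.trim();
    if (!trimmed || trimmed.startsWith('#')) continue; // skip blanks/comments
    const eq = trimmed.indexOf('=');
    if (eq === -1) continue;                           // skip malformed lines
    env[trimmed.slice(0, eq)] = trimmed.slice(eq + 1); // value may contain '='
  }
  return env;
}
```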

Static Data Fallback Pattern

One of the most important architectural patterns in Arcturus-Prime: when live APIs fail, the site does not break. It falls back to static data with a visual indicator.

How It Works

┌──────────────┐     ┌──────────────┐     ┌──────────────┐
│  API Request │────▶│  Live Data?  │──Y──▶│  Render with │
│  (Worker)    │     │              │      │  green dot   │
└──────────────┘     └──────┬───────┘     └──────────────┘
                            │ N

                     ┌──────────────┐     ┌──────────────┐
                     │  Static      │────▶│  Render with │
                     │  Fallback    │     │  amber dot   │
                     │  (JSON file) │     │  + timestamp │
                     └──────────────┘     └──────────────┘

Implementation

Each admin dashboard component follows this pattern:

async function fetchSystemData() {
  try {
    // Try live API first
    const response = await fetch('/api/admin/systems/status');
    if (!response.ok) throw new Error(`HTTP ${response.status}`);
    const data = await response.json();
    return { data, source: 'live', timestamp: new Date().toISOString() };
  } catch (err) {
    // Fall back to static data (the JSON module's content is on `default`)
    const { default: fallback } = await import('../data/systems-fallback.json');
    return { data: fallback, source: 'static', timestamp: fallback.generated };
  }
}

The UI reflects the data source:

<!-- Data source indicator -->
<span class="inline-flex items-center gap-1.5 text-xs">
  <span class="h-2 w-2 rounded-full"
        :class="source === 'live' ? 'bg-green-400' : 'bg-amber-400'">
  </span>
  <span class="text-slate-400">
    {{ source === 'live' ? 'Live' : `Static (${timestamp})` }}
  </span>
</span>
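The markup above reflects a small bit of pure logic that can be factored out. The class names come from the snippet; the function name is mine:

```javascript
// Map the data source to the indicator dot class and label used in the markup.
function sourceIndicator(source, timestamp) {
  return source === 'live'
    ? { dot: 'bg-green-400', label: 'Live' }
    : { dot: 'bg-amber-400', label: `Static (${timestamp})` };
}
```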

Static Fallback Data

Fallback JSON files live in src/data/ and are regenerated periodically:

src/data/
├── systems-fallback.json      # Proxmox VM/LXC status
├── containers-fallback.json   # Docker container list
├── storage-fallback.json      # NAS utilization numbers
├── swarm-fallback.json        # Build swarm state
└── network-fallback.json      # Network topology snapshot

These files are updated by a scheduled task (cron on Altair-Link) that snapshots the live API responses and commits them to the repo. When the live APIs are unreachable (homelab down, Tailscale disconnected, tunnel issues), the dashboards still render with the most recent snapshot.
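The snapshot step amounts to stamping the live payload before writing it to src/data/. A sketch: the `generated` field name matches what the fallback-reading code displays; the wrapper function itself is assumed, not taken from the repo:

```javascript
// Wrap a live API payload with the `generated` timestamp the dashboards show
// for stale data. The cron job would write the result to
// src/data/<name>-fallback.json and commit it.
function toFallbackSnapshot(payload, now = new Date()) {
  return { ...payload, generated: now.toISOString() };
}
```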

Why This Matters

The homelab goes down. Power outages, kernel panics, bad emerge updates, tunnel daemon crashes — these happen. The fallback pattern means:

  • The public site is never affected (static pages, Cloudflare CDN)
  • Admin dashboards show something instead of a blank screen or error
  • The amber indicator makes it obvious the data is stale
  • No 500 errors, no loading spinners that never resolve
  • When services come back, dashboards automatically switch to live data

This is the same pattern used for the command center, storage manager, Docker dashboard, and build swarm monitor. Every admin component that depends on a backend API has a static fallback.

Deployment Checklist

Quick reference for deploying changes:

Main Site

# 1. Test locally
npm run dev              # Manual smoke test
npm run build            # Verify build succeeds
npx astro check          # Type checking

# 2. Deploy
git add -A
git commit -m "what changed"
git push origin main     # Triggers auto-deploy

# 3. Verify
# Check Cloudflare Pages dashboard for build status
# Visit https://Arcturus-Prime.com to confirm

Status Monitor

# From status-monitor project
./deploy.sh              # Tests, builds, rsyncs, restarts
# Verify: https://status.Arcturus-Prime.com

Environment Changes

# Add/update a secret
npx wrangler secret put NEW_SECRET_NAME
# Redeploy to pick up the change
git commit --allow-empty -m "trigger redeploy for new secret"
git push origin main

Rollback

Cloudflare Pages keeps previous deployments. To roll back:

  1. Go to Cloudflare Dashboard > Pages > Arcturus-Prime
  2. Find the last working deployment
  3. Click “Rollback to this deployment”

The rollback is instant — it just changes which deployment the domain points to.

Tags: deployment, ci-cd, cloudflare, gitea, build-swarm, wrangler