Workers & KV Store
Cloudflare Workers runtime and KV namespace configuration for API routes, caching, and state management
All server-side logic in Arcturus-Prime runs as Cloudflare Workers via Pages Functions. Every API route, every SSR page render, and every authenticated admin request executes in the Workers runtime at Cloudflare’s edge. State that needs to persist across requests lives in Workers KV — a globally distributed key-value store.
Workers Runtime
How Pages Functions Work
Any `.ts` or `.astro` file in `src/pages/` that does not export `const prerender = true` becomes a Cloudflare Pages Function. At build time, the @astrojs/cloudflare adapter bundles these into a single `_worker.js` file that Cloudflare deploys to every edge location.
The Workers runtime is V8-based (not Node.js). It supports most Web APIs natively but has specific constraints:
- No filesystem access (`fs`, `path` are not available)
- No native modules (`better-sqlite3`, etc.)
- 1 MB compressed bundle size limit
- 10 ms CPU time per request (unbound for paid plans)
- 128 MB memory limit
Compatibility Flags
The project uses these Workers compatibility flags:
```toml
compatibility_flags = ["nodejs_compat", "disable_nodejs_process_v2"]
```
| Flag | Purpose |
|---|---|
| `nodejs_compat` | Enables Node.js built-in module polyfills (`Buffer`, `crypto`, `stream`, `util`, etc.) |
| `disable_nodejs_process_v2` | Prevents the newer `process` global from being injected, which can conflict with some libraries |
These are set in the Cloudflare Pages dashboard under Settings > Functions > Compatibility flags.
Bundle Exclusions
Two packages are explicitly excluded from the SSR bundle to avoid Workers runtime errors:
```js
// astro.config.mjs (vite config section)
vite: {
  ssr: {
    external: ['better-sqlite3', '@argonaut/core'],
  },
},
```
- better-sqlite3 — Native SQLite3 binding used for local development. In production, all persistence goes through KV.
- @argonaut/core — Local workspace package with Node.js-specific dependencies. Its functionality is accessed via API proxy routes instead.
Accessing the Runtime
Every SSR page and API route can access the Cloudflare runtime through Astro’s locals:
```ts
// In any .astro page (API routes receive the same object via context.locals)
export const prerender = false;

const runtime = Astro.locals.runtime;
const env = runtime.env; // Environment variables + bindings
const ctx = runtime.ctx; // Execution context (waitUntil, etc.)
const cf = runtime.cf;   // CF-specific request properties
```
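The `ctx` object matters mostly for `ctx.waitUntil()`, which keeps the Worker alive until background work finishes without delaying the response. A sketch of the pattern, where `ExecutionCtx` is a minimal stand-in for Cloudflare's `ExecutionContext` type and `recordVisit` is an illustrative helper, not project code:

```ts
// Sketch: deferring work past the response with ctx.waitUntil.
type ExecutionCtx = { waitUntil(promise: Promise<unknown>): void };

function recordVisit(ctx: ExecutionCtx, auditLog: string[], path: string): void {
  // waitUntil tells the runtime not to cancel this promise when the
  // response is returned; the handler never awaits it.
  ctx.waitUntil(
    Promise.resolve().then(() => {
      auditLog.push(`visited ${path}`);
    })
  );
}
```

Without `waitUntil`, the runtime may cancel any promise still pending when the response is sent, so fire-and-forget work should always be registered this way.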
KV Namespace
Configuration
Arcturus-Prime uses a single KV namespace for all persistent data:
| Property | Value |
|---|---|
| Namespace name | ARGOBOX_CACHE |
| Binding name | ARGOBOX_CACHE |
| Namespace ID | 90c6787af17943968247ebd46744ee40 |
The binding is configured in the Cloudflare Pages dashboard under Settings > Functions > KV namespace bindings. The binding name ARGOBOX_CACHE is how the code references it:
```ts
const kv = env.ARGOBOX_CACHE;

// Write
await kv.put('some-key', JSON.stringify(data));

// Read
const raw = await kv.get('some-key');
const value = raw ? JSON.parse(raw) : null;

// Delete
await kv.delete('some-key');

// List keys with prefix
const list = await kv.list({ prefix: 'data:' });
```
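Since every value round-trips through `JSON.stringify`/`JSON.parse`, a small typed wrapper keeps call sites tidy. A sketch against a minimal KV-like interface; `KVLike`, `getJSON`, and `putJSON` are illustrative names, not part of the project or of Cloudflare's API:

```ts
// Sketch: typed JSON helpers over the KV get/put API.
// KVLike is a minimal structural subset of a KV namespace binding.
interface KVLike {
  get(key: string): Promise<string | null>;
  put(key: string, value: string): Promise<void>;
}

async function getJSON<T>(kv: KVLike, key: string): Promise<T | null> {
  const raw = await kv.get(key);
  return raw === null ? null : (JSON.parse(raw) as T);
}

async function putJSON<T>(kv: KVLike, key: string, value: T): Promise<void> {
  await kv.put(key, JSON.stringify(value));
}
```

Because the interface is structural, the real `env.ARGOBOX_CACHE` binding satisfies it directly, and tests can substitute an in-memory `Map`.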
Key Schema
KV keys follow a namespaced convention using colons as separators:
| Key Pattern | Type | Description |
|---|---|---|
| `data:user-roles` | JSON | User role assignments, permissions, dashboard profiles |
| `cache:status:*` | JSON | Cached infrastructure status responses |
| `cache:services:*` | JSON | Cached service health data |
| `session:*` | JSON | AI conversation sessions |
| `ai:conversation:*` | JSON | Persisted AI chat history |
| `rate:*` | Number | Rate-limiting counters |
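Keys built by hand invite typos, so the convention above is easy to centralize in a one-line helper. A sketch; `kvKey` is an illustrative name, not from the project:

```ts
// Sketch: building keys that follow the colon-separated convention.
const kvKey = (...parts: string[]): string => parts.join(':');

// kvKey('cache', 'status', 'docker') produces 'cache:status:docker',
// which also matches the prefix queries used by kv.list({ prefix: ... }).
```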
The User Roles Key
The most critical KV entry is data:user-roles. This single JSON document contains the entire user authorization database:
```json
{
  "users": {
    "[email protected]": {
      "email": "[email protected]",
      "role": "admin",
      "displayName": "Commander",
      "services": ["*"],
      "features": ["*"],
      "sites": ["*"],
      "dashboardProfiles": [
        {
          "id": "default",
          "name": "Command Center",
          "widgets": ["system-health", "docker-status", "storage", "network"]
        }
      ]
    }
  },
  "roles": {
    "admin": {
      "permissions": ["*"],
      "description": "Full access to all features and services"
    },
    "member": {
      "permissions": ["view:dashboard", "use:chat", "view:status"],
      "description": "Limited access to specific features"
    },
    "demo": {
      "permissions": ["view:dashboard", "view:status"],
      "description": "Read-only tour of the platform"
    }
  }
}
```
This key is read on every authenticated request to resolve the user’s role and permissions. It is managed through the admin API at /api/auth/roles.
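Resolving a permission from that document is a pure lookup: find the user, find their role, then check the role's permission list, treating `*` as a wildcard. A minimal sketch assuming the shape shown above; the `RolesDoc` type and `hasPermission` name are illustrative:

```ts
// Sketch: permission resolution against the data:user-roles document.
type RolesDoc = {
  users: Record<string, { role: string }>;
  roles: Record<string, { permissions: string[] }>;
};

function hasPermission(doc: RolesDoc, email: string, permission: string): boolean {
  const user = doc.users[email];
  if (!user) return false; // Unknown users get no access
  const perms = doc.roles[user.role]?.permissions ?? [];
  return perms.includes('*') || perms.includes(permission);
}
```

Keeping the check pure means the expensive part (the KV read of `data:user-roles`) can be cached independently of the authorization logic.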
Caching Strategy
Stale-While-Revalidate
Arcturus-Prime uses a stale-while-revalidate caching pattern for expensive data fetches. The flow:
```text
Request comes in
      │
      ▼
Check in-memory cache (Map)
      │
      ├── HIT + FRESH (< 60s) ──→ Return cached data immediately
      │
      ├── HIT + STALE (> 60s) ──→ Return cached data immediately
      │                            + trigger background revalidation
      │
      └── MISS ──→ Fetch from source
                   Store in memory cache (60s TTL)
                   Store in KV (longer TTL)
                   Return fresh data
```
The in-memory cache lives in the Worker’s global scope and persists across requests within the same isolate. It uses a 60-second TTL. KV serves as the second-tier cache with longer TTLs (5-15 minutes depending on the data type).
```ts
// Simplified cache implementation
const memoryCache = new Map<string, { data: any; timestamp: number }>();
const MEMORY_TTL = 60_000; // 60 seconds
const KV_TTL = 300; // seconds -- the second-tier KV cache lives longer

async function getCached(key: string, fetcher: () => Promise<any>, env: Env) {
  const now = Date.now();
  const cached = memoryCache.get(key);

  if (cached && now - cached.timestamp < MEMORY_TTL) {
    return cached.data; // Fresh hit
  }

  if (cached) {
    // Stale -- return immediately but revalidate in background.
    // In a real handler this call is wrapped in ctx.waitUntil() so the
    // runtime does not cancel the promise once the response is sent.
    revalidate(key, fetcher, env);
    return cached.data;
  }

  // Miss -- fetch, cache, return
  const data = await fetcher();
  memoryCache.set(key, { data, timestamp: now });
  await env.ARGOBOX_CACHE.put(`cache:${key}`, JSON.stringify(data), {
    expirationTtl: KV_TTL,
  });
  return data;
}

async function revalidate(key: string, fetcher: () => Promise<any>, env: Env) {
  const data = await fetcher();
  memoryCache.set(key, { data, timestamp: Date.now() });
  await env.ARGOBOX_CACHE.put(`cache:${key}`, JSON.stringify(data), {
    expirationTtl: KV_TTL,
  });
}
```
Cache Warmup
A dedicated endpoint pre-populates caches for critical data so the first visitor after a deployment does not hit cold caches:
```http
POST /api/cache/warmup
Authorization: Bearer <CACHE_WARMUP_SECRET>
```
This endpoint is designed to be called by a cron trigger (Cloudflare Cron Triggers or an external scheduler). It fetches and caches:
- Infrastructure status from all service endpoints
- User roles from KV
- Service health checks
- Dashboard widget data
The endpoint is protected by a shared secret (CACHE_WARMUP_SECRET environment variable) to prevent abuse.
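The check itself is a bearer-header comparison. A sketch of what that guard might look like; the function name is illustrative, and a production version should prefer a constant-time comparison:

```ts
// Sketch: validating the warmup bearer secret from the Authorization header.
function isWarmupAuthorized(header: string | null, secret: string): boolean {
  if (!header || !header.startsWith('Bearer ')) return false;
  const presented = header.slice('Bearer '.length);
  // Note: a strict equality check can leak information via timing;
  // a hardened handler would use a constant-time comparison instead.
  return presented === secret;
}
```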
Cache Status
The cache status endpoint provides visibility into what is currently cached:
```http
GET /api/cache/status
```
Returns:
```json
{
  "memoryCache": {
    "keys": 12,
    "oldestEntry": "2026-02-23T10:00:00Z",
    "newestEntry": "2026-02-23T10:05:30Z"
  },
  "kvKeys": [
    "cache:status:services",
    "cache:status:docker",
    "data:user-roles"
  ]
}
```
Rate Limiting
Public-facing API endpoints implement rate limiting to prevent abuse. Rate state is stored in KV with short TTLs:
```ts
const RATE_LIMIT = 10;  // requests
const RATE_WINDOW = 60; // seconds

async function checkRateLimit(ip: string, env: Env): Promise<boolean> {
  const key = `rate:${ip}`;
  const current = await env.ARGOBOX_CACHE.get(key);
  const count = current ? parseInt(current, 10) : 0;

  if (count >= RATE_LIMIT) {
    return false; // Rate limited
  }

  // Note: KV reads and writes are not atomic, so concurrent requests
  // can slightly overshoot the limit. Acceptable for abuse prevention.
  await env.ARGOBOX_CACHE.put(key, String(count + 1), {
    expirationTtl: RATE_WINDOW, // Auto-expire after 60 seconds
  });
  return true;
}
```
Rate-limited endpoints return 429 Too Many Requests with a Retry-After header.
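Constructing that response uses only the Web `Response` API, which is a global in the Workers runtime. A sketch, with an illustrative function name:

```ts
// Sketch: the 429 response a rate-limited endpoint returns.
// Response is a Web API global in Workers (and in modern Node).
function rateLimitedResponse(retryAfterSeconds: number): Response {
  return new Response(JSON.stringify({ error: 'Too Many Requests' }), {
    status: 429,
    headers: {
      'Content-Type': 'application/json',
      'Retry-After': String(retryAfterSeconds),
    },
  });
}
```

Setting `Retry-After` to the remaining window lets well-behaved clients back off instead of retrying immediately.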
Rate-Limited Endpoints
| Endpoint | Limit | Window |
|---|---|---|
| `POST /api/contact` | 5 requests | 60 seconds |
| `POST /api/public/chat` | 10 requests | 60 seconds |
| `POST /api/auth/*` | 20 requests | 60 seconds |
Admin endpoints behind Cloudflare Access are not rate-limited since the user is already authenticated.
KV Limitations
Workers KV has specific characteristics to be aware of:
| Property | Value |
|---|---|
| Max value size | 25 MB |
| Max key size | 512 bytes |
| Consistency | Eventually consistent (global propagation in ~60s) |
| Read latency | < 10ms (from nearest edge) |
| Write latency | < 50ms (eventually propagated) |
| List operation | Returns up to 1000 keys per call |
The eventual consistency model means a write in one region may not be immediately visible in another. For Arcturus-Prime this is acceptable — the user roles data changes rarely, and cache entries are designed to tolerate stale reads.
Local Development
In local development (npm run dev), the Cloudflare runtime is simulated via @astrojs/cloudflare’s platformProxy feature, which spins up a local miniflare instance. KV operations work against a local .mf/ directory:
```sh
# Local KV data lives in:
.mf/kv/ARGOBOX_CACHE/

# Seed local KV for development:
npx wrangler kv:key put --binding=ARGOBOX_CACHE "data:user-roles" '{"users":{},"roles":{}}' --local
```
The local proxy provides the same env.ARGOBOX_CACHE API, so code that runs on Workers also runs locally without changes.