
OpenClaw Gateway

OpenClaw dashboard, configuration management, cron scheduling, skill marketplace, session inspector, RAG knowledge base browser, and gateway log viewer

February 23, 2026


The OpenClaw admin pages live under /admin/openclaw/* and provide management interfaces for the OpenClaw AI agent framework. OpenClaw is the secondary AI backend (alongside Argonaut) and focuses on tool use, skill execution, and scheduled automation. The admin section covers seven pages: a Dashboard landing page, Config, Cron, Skills, Sessions, RAG Browser, and Logs.

Config (/admin/openclaw/config)

The config page provides a viewer and editor for the OpenClaw runtime configuration. The configuration controls model selection, tool access, safety constraints, logging levels, and integration settings.

Configuration Retrieval

Configuration is loaded through the openclawGetConfig() client-side function, which calls the OpenClaw API and returns the full configuration object. The config object is a deeply nested JSON structure covering:

  • model — the default model and provider for OpenClaw interactions
  • tools — enabled tool list with per-tool permission flags
  • safety — content filtering rules, rate limits, and restricted operation lists
  • logging — log level, output destination, and retention settings
  • integrations — connections to external services (Gitea, Arcturus-Prime API, build swarm gateway)
  • memory — context window management, conversation history limits, and memory compression settings
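The exact schema is not published here, but the six documented keys suggest a shape roughly like the following sketch. All inner field names are hypothetical illustrations, only the top-level keys come from the list above:

```typescript
// Hypothetical sketch of the OpenClaw config shape. Only the six
// top-level keys are documented; every nested field is illustrative.
interface OpenClawConfig {
  model: { provider: string; name: string };
  tools: { name: string; enabled: boolean }[];
  safety: { rateLimitPerMinute: number; restrictedOps: string[] };
  logging: { level: "debug" | "info" | "warn" | "error"; retentionDays: number };
  integrations: Record<string, { url: string; enabled: boolean }>;
  memory: { maxHistoryMessages: number; compression: boolean };
}

const example: OpenClawConfig = {
  model: { provider: "anthropic", name: "claude-sonnet-4-5-20250929" },
  tools: [{ name: "web-search", enabled: true }],
  safety: { rateLimitPerMinute: 60, restrictedOps: ["rm"] },
  logging: { level: "info", retentionDays: 14 },
  integrations: { gitea: { url: "https://git.example", enabled: true } },
  memory: { maxHistoryMessages: 200, compression: true },
};
```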

Formatted View

The default view renders the configuration as a structured, human-readable layout with collapsible sections. Each top-level key gets its own section card with a header showing the key name and a chevron toggle. Expanding a section reveals the nested values in a formatted display: strings as text, booleans as toggles, numbers as editable inputs, and arrays as tag-style lists.

Editing is inline — clicking a value makes it editable. Changes are staged locally and a “Save Changes” button appears when there are pending modifications. Saving calls openclawUpdateConfig() which writes the updated configuration through the OpenClaw API.

Raw JSON View

A toggle switches between the formatted view and a raw JSON editor. The raw view renders the full configuration as syntax-highlighted JSON in a code editor. This is useful for bulk edits, copy-paste operations, and verifying the exact structure being sent to the API. The raw editor validates JSON syntax on save and rejects malformed input.
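The syntax check on save can be as small as attempting a parse and surfacing the parser's error message. A minimal sketch (the function name and return shape are illustrative, not the actual implementation):

```typescript
// Minimal JSON syntax check for the raw editor: returns the parsed
// value on success, or the parser's error message on failure.
function parseRawConfig(
  text: string,
): { ok: true; value: unknown } | { ok: false; error: string } {
  try {
    return { ok: true, value: JSON.parse(text) };
  } catch (e) {
    // Malformed input is rejected with the parser's own message.
    return { ok: false, error: e instanceof Error ? e.message : String(e) };
  }
}
```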

Configuration Validation

Both views validate configuration changes before saving. Validation checks include:

  • Required fields are present (model, tools, safety)
  • Model identifiers match known providers
  • Tool names match the installed skill registry
  • Rate limit values are positive integers
  • Log level is one of: debug, info, warn, error

Validation errors appear inline next to the offending field in formatted view, or as a summary panel below the editor in raw view.
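The checks above could be expressed as a single validator that returns per-field errors keyed for inline display. A sketch, assuming the config is a plain object (the error shape and field paths are illustrative; the model-provider check is omitted since the provider list is not given here):

```typescript
// Sketch of the validation pass. Field names follow the documented
// top-level keys; everything else is illustrative.
const LOG_LEVELS = ["debug", "info", "warn", "error"];

function validateConfig(
  cfg: any,
  installedSkillTools: string[],
): Record<string, string> {
  const errors: Record<string, string> = {};
  // Required fields must be present.
  for (const key of ["model", "tools", "safety"]) {
    if (cfg[key] == null) errors[key] = "required field is missing";
  }
  // Tool names must match the installed skill registry.
  for (const t of cfg.tools ?? []) {
    if (!installedSkillTools.includes(t.name))
      errors[`tools.${t.name}`] = "unknown tool";
  }
  // Rate limits must be positive integers.
  const limit = cfg.safety?.rateLimitPerMinute;
  if (limit != null && (!Number.isInteger(limit) || limit <= 0))
    errors["safety.rateLimitPerMinute"] = "must be a positive integer";
  // Log level must be one of the known names.
  const level = cfg.logging?.level;
  if (level != null && !LOG_LEVELS.includes(level))
    errors["logging.level"] = `must be one of: ${LOG_LEVELS.join(", ")}`;
  return errors;
}
```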

Cron (/admin/openclaw/cron)

The cron page manages scheduled jobs that OpenClaw executes automatically. Cron jobs enable recurring automation: scheduled content checks, periodic security scans, regular RAG index rebuilds, and timed notifications.

Job List

The main view displays all configured cron jobs in a table with columns for:

  • Name — descriptive job identifier
  • Schedule — cron expression (e.g., 0 */6 * * * for every 6 hours)
  • Skill — the OpenClaw skill that executes when the job triggers
  • Last Run — timestamp of the most recent execution
  • Status — last run result: success, failed, or skipped
  • Next Run — computed next execution time based on the cron expression
  • Actions — enable/disable toggle and delete button
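The Next Run column can be derived by scanning forward from the current time for the first minute that matches the expression. A minimal matcher covering only `*`, `*/N`, and comma lists (standard cron also supports ranges and other forms, which this sketch omits):

```typescript
// Sketch: does a given Date match a five-field cron expression?
// Supports "*", "*/N", and comma lists only; ranges etc. are omitted.
function cronMatches(expr: string, d: Date): boolean {
  // Field order: minute, hour, day-of-month, month, day-of-week.
  const fields = expr.trim().split(/\s+/);
  const values = [d.getMinutes(), d.getHours(), d.getDate(), d.getMonth() + 1, d.getDay()];
  return fields.every((field, i) => {
    if (field === "*") return true;
    if (field.startsWith("*/")) return values[i] % Number(field.slice(2)) === 0;
    return field.split(",").some((part) => Number(part) === values[i]);
  });
}
```

Stepping minute by minute from the current time until `cronMatches` returns true yields the Next Run timestamp.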

Jobs are managed through three client-side functions:

  • openclawCronList() — fetches all configured cron jobs with their current status and execution history
  • openclawCronAdd() — creates a new cron job with the specified name, schedule, skill, and parameters
  • openclawCronRemove() — deletes a cron job by its identifier

Add Job Form

A collapsible form at the top of the page (collapsed by default) allows creating new cron jobs. The form fields are:

  • Job name — a unique identifier for the job (alphanumeric and dashes)
  • Cron expression — standard five-field cron syntax. A helper displays a human-readable description of the expression as you type (e.g., “At minute 0 past every 6th hour” for 0 */6 * * *). Common presets are available: every hour, every 6 hours, daily at midnight, weekly on Monday, monthly on the 1st.
  • Skill — dropdown populated from the installed skills list. Selecting a skill reveals its parameter schema.
  • Parameters — dynamic form fields generated from the selected skill’s parameter schema. Each parameter shows its type, description, and default value.
  • Enabled — toggle to create the job in enabled or disabled state

Execution History

Clicking a job row expands an execution history panel showing the last 20 runs. Each run entry shows the start time, duration, exit status, and a truncated output log. Clicking “View Full Log” opens the complete execution output in a modal.

Skills (/admin/openclaw/skills)

The skills page manages the OpenClaw skill ecosystem. Skills are modular capabilities that extend what OpenClaw can do — each skill defines a set of tools, parameters, and execution logic. The page is organized into three tabs.

Tab 1: Installed Skills

The installed tab lists all skills currently available in the OpenClaw instance. Each skill card displays:

  • Name — the skill identifier
  • Version — currently installed version
  • Description — what the skill does
  • Tools provided — list of tools this skill makes available
  • Author — who created the skill
  • Status — active, disabled, or errored

Each card has an “Uninstall” button that removes the skill after confirmation. Uninstalling a skill that is referenced by active cron jobs shows a warning listing the affected jobs. Disabled skills remain installed but their tools are not available to the agent.

Tab 2: ClawHub

The ClawHub tab provides a search interface for the ClawHub skill registry — a public repository of community-created OpenClaw skills. The search supports:

  • Text search — full-text search across skill names, descriptions, and tags
  • Category filter — filter by skill category: content, security, infrastructure, automation, integration, utility
  • Sort — sort results by relevance, popularity (install count), rating, or recency

Search results display as cards with the skill name, description, author, install count, rating, and an “Install” button. Clicking a card expands it to show the full skill details: README, parameter schema, tool list, required permissions, and version history.

Installing a skill from ClawHub downloads the skill package, validates its integrity, registers it with the OpenClaw runtime, and makes its tools immediately available. The installed skills tab updates to reflect the new addition.

Tab 3: Create Skill

The create tab provides an AI-assisted skill creation workflow. This lets administrators build custom skills without writing code from scratch.

Skill Definition

The creation form collects:

  • Skill name — unique identifier following the naming convention (lowercase, dashes)
  • Description — what the skill should do
  • Tools — define the tools this skill provides, including parameter schemas and return types
  • Permissions — what the skill needs access to (filesystem, network, API endpoints)
  • Trigger — how the skill is invoked (manual, cron, event, tool-call)

AI Generation

After filling in the definition, the “Generate Skill” button sends the specification to the AI which produces the full skill implementation: execution logic, parameter validation, error handling, and output formatting. The generated code appears in a code editor for review and modification before saving.

Security Scanning

Before a generated skill can be installed, it passes through an automated security scan. The scan checks for:

  • Dangerous operations — file deletion, system commands, network requests to unexpected hosts
  • Permission escalation — attempts to access resources beyond the declared permissions
  • Data exfiltration — patterns that could leak sensitive data (credentials, tokens, internal IPs)
  • Code injection — eval(), exec(), or other dynamic code execution patterns
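A static scan along these lines can be pictured as pattern matching over the generated source. The rule table below is purely illustrative (real scanners also do AST and data-flow analysis, and the actual rule set is not documented here):

```typescript
// Illustrative static-scan sketch: each rule maps a regex to a finding
// category and severity. Patterns and severities are assumptions.
type Severity = "low" | "medium" | "high" | "critical";
interface Finding { category: string; severity: Severity; line: number }

const RULES: { pattern: RegExp; category: string; severity: Severity }[] = [
  { pattern: /\beval\s*\(|\bexec\s*\(/, category: "code-injection", severity: "critical" },
  { pattern: /rm\s+-rf|unlinkSync/, category: "dangerous-operation", severity: "high" },
  { pattern: /AKIA[0-9A-Z]{16}|-----BEGIN (RSA )?PRIVATE KEY-----/, category: "data-exfiltration", severity: "high" },
];

function scanSkillSource(source: string): Finding[] {
  const findings: Finding[] = [];
  source.split("\n").forEach((lineText, i) => {
    for (const rule of RULES) {
      if (rule.pattern.test(lineText))
        findings.push({ category: rule.category, severity: rule.severity, line: i + 1 });
    }
  });
  return findings;
}
```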

Risk Assessment

The scan produces a risk assessment with a rating: low, medium, high, or critical. Each finding includes a description, the offending code location, and a severity rating. Skills rated medium or below can be installed directly. Skills rated high require explicit admin confirmation. Skills rated critical cannot be installed until the flagged issues are resolved.

The risk assessment is displayed as a color-coded summary card with an expandable findings list. Administrators can review each finding, mark it as accepted (false positive), or edit the skill code to address the concern before re-scanning.
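The install gate described above reduces to a small decision over the overall rating. A sketch (the function and result names are illustrative):

```typescript
// Sketch of the install gate: medium or below installs directly,
// high requires explicit admin confirmation, critical is blocked.
type Risk = "low" | "medium" | "high" | "critical";

function installDecision(
  rating: Risk,
  adminConfirmed: boolean,
): "install" | "confirm" | "block" {
  if (rating === "critical") return "block";
  if (rating === "high") return adminConfirmed ? "install" : "confirm";
  return "install";
}
```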

Sessions (/admin/openclaw/sessions)

The sessions page provides a live view of active OpenClaw sessions and configured agents. It surfaces data from the existing sessions_list and agents_list management API actions — no additional backend is needed.

Configured Agents

The top section displays all agents registered in the OpenClaw instance. Each agent card shows:

  • Status dot — green indicator showing the agent is registered
  • Name — the agent’s display name (e.g., “Deep Research”, “Cron Runner”)
  • Model — the primary model assigned to the agent (e.g., claude-sonnet-4-5-20250929)
  • ID badge — the agent’s internal identifier

Agent data is fetched via openclawAgentsList() from the manage client. If the OpenClaw backend is unreachable, the section shows an offline message instead.

Active Sessions

The lower section lists currently active sessions. Each session card includes:

  • Session ID — displayed in a purple monospace badge for easy identification
  • Agent name — which agent is handling the session
  • Started — when the session began, shown as relative time (e.g., “2h ago”)
  • Last activity — most recent interaction timestamp with color-coded age badges:
    • Green (fresh) — activity within the last 5 minutes
    • Amber (stale) — activity within the last hour
    • Red (old) — no activity for over an hour
  • Extra fields — any additional session metadata (model overrides, tool counts, etc.) rendered as tag chips
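The color-coded age badge maps minutes since last activity onto the three buckets above. A sketch (the function name is illustrative; the 5-minute and 1-hour thresholds come from the list):

```typescript
// Sketch: map minutes since last activity to the badge color.
// Thresholds from the text: 5 minutes (fresh), 60 minutes (stale).
function activityBadge(
  lastActivity: Date,
  now: Date = new Date(),
): "green" | "amber" | "red" {
  const minutes = (now.getTime() - lastActivity.getTime()) / 60_000;
  if (minutes <= 5) return "green";
  if (minutes <= 60) return "amber";
  return "red";
}
```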

Refresh

The header includes a manual refresh button that re-fetches both the agents and sessions lists simultaneously. There is no auto-polling — refresh is on-demand to avoid unnecessary API load.

RAG Browser (/admin/openclaw/rag)

The RAG Browser provides a search interface and statistics dashboard for the Arcturus-Prime knowledge base. The knowledge base contains 166,000+ text chunks indexed from Obsidian vaults, blog posts, documentation, and configuration files. The page proxies all requests through /api/admin/rag to the local RAG API running on port 8101.

Connection Status

The header shows a live connection indicator:

  • Green dot + “Online” — RAG API is reachable (checked via /health endpoint)
  • Red dot + “Offline” — API is unreachable or returned an error

Stats Cards

Four summary cards display at the top when the API is online:

  • Stores — number of vector stores (typically 2: main and blog)
  • Total Chunks — aggregate chunk count across all stores and collections
  • Collections — number of distinct collections (source groupings like “journal”, “docs”, “infrastructure”)
  • Uptime — how long the RAG API has been running

Stats are fetched via the rag_stats action which proxies to GET /stats on the RAG API with bearer authentication.

Search Sandbox

The search card provides a query interface with four controls:

  • Query input — free-text search query sent as a semantic similarity search against the vector embeddings
  • Store selector — choose between main (private, full knowledge base) and blog (sanitized, public-safe content)
  • Top K — number of results to return (1–50, default 10)
  • Collection filter — dropdown populated from the stats response, allowing searches restricted to a specific collection (e.g., only “journal” entries or only “infrastructure” docs)

Search results are rendered as ranked cards, each showing:

  • Rank — position in the result set (cyan badge)
  • Title — document or chunk title
  • Source — file path or URL the chunk was indexed from
  • Collection — which collection the chunk belongs to
  • Score — cosine similarity score (amber badge, higher is better)
  • Text snippet — the first ~300 characters of the matched chunk content

Search is performed via the rag_search action which proxies POST /search to the RAG API with the query, store, top_k, and optional collection parameters.
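The body forwarded to POST /search can be pictured as follows. The parameter names mirror those listed above; the exact wire format and the builder function are assumptions:

```typescript
// Sketch of the payload built from the sandbox controls. Parameter
// names (query, store, top_k, collection) follow the text; the wire
// format itself is an assumption.
interface RagSearchRequest {
  query: string;
  store: "main" | "blog";
  top_k: number;
  collection?: string;
}

function buildSearchRequest(
  query: string,
  store: "main" | "blog",
  topK = 10,
  collection?: string,
): RagSearchRequest {
  // Clamp Top K to the documented 1-50 range (default 10).
  const topKClamped = Math.min(50, Math.max(1, Math.floor(topK)));
  const body: RagSearchRequest = { query, store, top_k: topKClamped };
  if (collection) body.collection = collection; // omit to search all collections
  return body;
}
```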

Collection Table

Below the search area, a sortable table lists all collections with:

  • Collection name — clickable to filter search results to that collection
  • Store — which store the collection belongs to
  • Documents — number of source documents in the collection
  • Chunks — total chunk count for that collection

Column headers are clickable to sort ascending/descending. The table provides a quick overview of knowledge base composition and helps identify which collections have the most indexed content.

API Architecture

All RAG requests route through the server-side /api/admin/rag endpoint, which handles three actions:

  • rag_health — proxies GET /health (no authentication required)
  • rag_stats — proxies GET /stats with RAG_API_SECRET bearer token
  • rag_search — proxies POST /search with bearer token and search parameters

The RAG_API_SECRET environment variable is read server-side via getEnv() and never exposed to the browser. The proxy adds a 15-second timeout to all upstream requests.
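The three actions differ only in method, path, and whether the bearer token is attached, so the proxy can be sketched as a small dispatch table. The base URL, header layout, and function name below are assumptions; only the actions, the bearer token, and the 15-second timeout come from the text:

```typescript
// Sketch of the server-side dispatch: action name -> upstream request
// shape. Base URL (local RAG API on port 8101) and names are assumed.
const RAG_BASE = "http://127.0.0.1:8101";

interface UpstreamRequest {
  url: string;
  method: "GET" | "POST";
  headers: Record<string, string>;
}

function ragRequestFor(action: string, secret: string): UpstreamRequest {
  const auth = { Authorization: `Bearer ${secret}` };
  switch (action) {
    case "rag_health": // health probe needs no credentials
      return { url: `${RAG_BASE}/health`, method: "GET", headers: {} };
    case "rag_stats":
      return { url: `${RAG_BASE}/stats`, method: "GET", headers: auth };
    case "rag_search":
      return {
        url: `${RAG_BASE}/search`,
        method: "POST",
        headers: { ...auth, "Content-Type": "application/json" },
      };
    default:
      throw new Error(`unknown RAG action: ${action}`);
  }
}
```

Each request would then be issued under the 15-second cap, e.g. with `fetch(url, { method, headers, signal: AbortSignal.timeout(15_000) })`.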

Logs (/admin/openclaw/logs)

The logs page is a dedicated full-screen viewer for OpenClaw gateway logs. While the dashboard includes a compact log preview, this page provides more space and filtering tools for debugging sessions.

Controls

The control bar at the top provides:

  • Filter input — free-text filter passed to the get-logs API action. Matches are server-side, so filtering happens on the raw log data before it reaches the browser. Useful for narrowing to a specific agent name, error pattern, or cron job.
  • Line count selector — choose how many log lines to fetch: 50, 100, 200, or 500. Higher counts are useful for tracing multi-step agent workflows but take longer to load.
  • Refresh button — manually re-fetches logs with the current filter and line count.
  • Auto-refresh toggle — enables a 30-second polling interval. When active, the button turns green. The timer is automatically cleaned up when navigating away via View Transitions (astro:before-swap event).

Level Filtering

Four toggle buttons control which log levels are visible:

  • ERROR (red dot) — exceptions, failed API calls, skill execution failures
  • WARN (amber dot) — 404 responses, missing endpoints, rate limit warnings
  • INFO (blue dot) — normal operational messages, session starts, tool invocations
  • DEBUG (gray dot) — verbose diagnostic output, request/response payloads

Each button toggles its level on or off. Toggling is client-side — all log lines are fetched once, then shown or hidden based on the active level set. This allows quick switching between views without re-fetching.

Log Classification

Each log line is classified into a level using a two-pass approach:

  1. JSON parsing — if the line parses as JSON, the _meta.logLevelName field determines the level. JSON entries are reformatted as [HH:MM:SS] [subsystem] message for readability.
  2. Regex fallback — for plain-text lines, pattern matching identifies the level: /error|ERR/i → error, /warn|WARN|404/i → warn, /debug|DEBUG/i → debug, everything else → info.

Lines are rendered as <span> elements with data-level attributes for CSS coloring and JavaScript filtering.
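The two-pass classification can be sketched directly from the description above. The `_meta.logLevelName` field and the fallback regexes come from the text; the function itself is illustrative:

```typescript
// Sketch of the two-pass level classifier: JSON first, regex fallback.
type Level = "error" | "warn" | "info" | "debug";

function classifyLine(line: string): Level {
  // Pass 1: structured JSON entries carry their own level name.
  try {
    const entry = JSON.parse(line);
    const name = String(entry?._meta?.logLevelName ?? "").toLowerCase();
    if (name === "error" || name === "warn" || name === "info" || name === "debug")
      return name;
  } catch {
    // plain-text line; fall through to the regex pass
  }
  // Pass 2: regex fallback for plain-text lines (patterns from the text).
  if (/error|ERR/i.test(line)) return "error";
  if (/warn|WARN|404/i.test(line)) return "warn";
  if (/debug|DEBUG/i.test(line)) return "debug";
  return "info";
}
```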

Stats Bar

A summary bar below the log output displays:

  • Visible / total lines — how many lines are shown after level filtering vs. how many were fetched
  • Error count — shown in red if any errors are present
  • Warning count — shown in amber if any warnings are present

API

Logs are fetched via openclawGetLogs(lines, filter) from the manage client, which calls the get-logs action on /api/admin/openclaw-manage. The function returns { logs: string, lineCount: number, ok: boolean }. If the gateway is unreachable, the page shows a graceful offline message rather than an error.

Tags: openclaw, claw, config, cron, skills, clawhub, ai-skills, gateway, sessions, rag, agents, logs