# Argo Knowledge RAG

*Local semantic search for your knowledge base. Because grep doesn't understand what you mean.*
## The Problem
3,000+ markdown files across multiple Obsidian vaults: conversations, debugging sessions, technical documentation, legal notes. Finding anything meaningful with keyword search is nearly impossible when you're looking for "that conversation about Docker networking where I fixed the bridge issue" and all you remember is that it was sometime last summer.
## Quick Start
## How It Works
1. **Ingest**: reads markdown files, extracts frontmatter, respects folder structure.
2. **Chunk**: smart splitting that respects paragraph and code block boundaries.
3. **Embed**: GPU-accelerated sentence-transformers generate vector embeddings.
4. **Search**: ChromaDB cosine similarity finds the most semantically relevant content.
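ChromaDB handles the vector math in the search step, but the underlying idea is plain cosine similarity: the chunk whose embedding points in the direction closest to the query embedding ranks first. A minimal sketch with toy 3-dimensional vectors (real embeddings have hundreds of dimensions, and the filenames here are made up):

```python
import math

def cosine_similarity(a, b):
    # cos(theta) = (a . b) / (|a| * |b|)
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def top_k(query_vec, chunks, k=3):
    # chunks: list of (chunk_id, embedding) pairs
    scored = [(cosine_similarity(query_vec, vec), cid) for cid, vec in chunks]
    scored.sort(reverse=True)
    return [cid for _, cid in scored[:k]]

# Toy "embeddings" for illustration only.
chunks = [
    ("docker-networking.md#3", [0.9, 0.1, 0.0]),
    ("legal-notes.md#12",      [0.0, 0.2, 0.9]),
    ("bridge-fix.md#1",        [0.7, 0.4, 0.2]),
]
query = [0.85, 0.2, 0.05]
print(top_k(query, chunks, k=2))
# -> ['docker-networking.md#3', 'bridge-fix.md#1']
```

ChromaDB performs this ranking over an indexed collection, so the per-query cost stays well under the latencies listed below even for thousands of chunks.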
## Features
### 100% Local

All processing happens on your machine. Your data never leaves it: no cloud APIs, no subscriptions.
### GPU Accelerated

~35 minutes to index 3,000 files on an RTX 4070 Ti. Search results in under 100 ms.
### Obsidian Native

Extracts frontmatter, preserves folder structure, understands vault organization.
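The real ingester likely uses a YAML parser, but for the common Obsidian case (flat `key: value` pairs fenced by `---` lines) frontmatter extraction can be sketched in a few lines. The `split_frontmatter` helper and sample document below are illustrative, not the project's actual code:

```python
def split_frontmatter(text):
    """Split a markdown document into (metadata, body).

    Handles the common Obsidian layout: a leading block fenced by
    '---' lines containing flat key: value pairs. Nested YAML would
    need a real parser (e.g. PyYAML); this sketch keeps it minimal.
    """
    lines = text.splitlines()
    if not lines or lines[0].strip() != "---":
        return {}, text
    try:
        end = lines.index("---", 1)
    except ValueError:
        return {}, text  # unterminated frontmatter: treat it all as body
    meta = {}
    for line in lines[1:end]:
        if ":" in line:
            key, _, value = line.partition(":")
            meta[key.strip()] = value.strip()
    body = "\n".join(lines[end + 1:])
    return meta, body

doc = """---
title: Docker bridge fix
tags: networking, debugging
---
The bridge interface was missing a route..."""
meta, body = split_frontmatter(doc)
```

The extracted metadata can then be attached to each chunk so that search results carry their source note's title, tags, and vault location.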
### Docker Ready

One-command deployment. Docker Compose is included, with GPU passthrough support.
### Multiple Interfaces

CLI for quick searches, a web UI for browsing, and a REST API for integration.
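As a sketch of what REST integration looks like, the snippet below builds a JSON search request using only the Python standard library. The `/search` path and the `query`/`top_k` parameter names are hypothetical; check the actual API documentation for the real contract:

```python
import json
from urllib import request

def build_search_request(query, k=5, base_url="http://localhost:8000"):
    # Endpoint path and parameter names are illustrative, not the
    # project's documented API; only the pattern (POST a JSON query
    # to a local service) is the point.
    payload = json.dumps({"query": query, "top_k": k}).encode()
    return request.Request(
        f"{base_url}/search",
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_search_request("docker bridge networking fix", k=3)
# Once the service is running, send it with urllib.request.urlopen(req)
# and json-decode the response body.
```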
### Smart Chunking

Respects paragraph and code block boundaries. No mid-sentence splits.
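A chunker along these lines can be sketched in pure Python: split on blank lines, but never inside a fenced code block, then pack whole blocks into size-limited chunks. This illustrates the idea, not the project's implementation, which may also respect headings, token counts, and overlap:

```python
FENCE = "`" * 3  # a literal triple backtick, built this way so the
                 # snippet renders safely inside documentation

def chunk_markdown(text, max_chars=800):
    """Split markdown into chunks on paragraph boundaries,
    never breaking inside a fenced code block."""
    blocks, current, in_code = [], [], False
    for line in text.splitlines():
        if line.strip().startswith(FENCE):
            in_code = not in_code
            current.append(line)
        elif line.strip() == "" and not in_code:
            if current:  # a blank line outside code ends a block
                blocks.append("\n".join(current))
                current = []
        else:
            current.append(line)
    if current:
        blocks.append("\n".join(current))

    # Greedily pack whole blocks into chunks under the size limit.
    chunks, buf = [], ""
    for block in blocks:
        if buf and len(buf) + len(block) + 2 > max_chars:
            chunks.append(buf)
            buf = block
        else:
            buf = f"{buf}\n\n{block}" if buf else block
    if buf:
        chunks.append(buf)
    return chunks

sample = "\n".join([
    "Intro paragraph.",
    "",
    FENCE + "python",
    "x = 1",
    "",          # blank line inside the fence must not split the block
    "y = 2",
    FENCE,
    "",
    "Closing note.",
])
```

Because a code block is never split, an embedded snippet stays searchable as one coherent unit instead of two meaningless halves.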
## Performance

| Hardware | Index Time (3,000 files) | Search Latency |
|---|---|---|
| RTX 4070 Ti | ~35 minutes | <100 ms |
| CPU only | ~2 hours | <150 ms |
## Why This Exists
### The Alternatives
- OpenWebUI's knowledge feature chokes on large file counts
- Obsidian search is keyword-only — useless for semantic queries
- Cloud RAG services require sending your data externally
- ChatGPT/Claude can't index your local files continuously
### Argo Knowledge RAG
- Runs entirely locally, on your own GPU
- Handles thousands of files without choking
- Natural language queries, not just keywords
- Re-index on demand as your vault grows