The Dev Workflow: How AI Conversations Become Blog Posts

I have a problem. I work on things at 11 PM, solve interesting problems, have long conversations with AI tools about architecture decisions, and by morning I’ve forgotten half of what I learned. The solutions are buried in terminal history, scattered across conversation logs, and locked in my head where they’re approximately useless to anyone else.

Sound familiar?

Over the past year, I’ve built a system that captures all of that automatically. AI conversations get archived. Session notes get written to project vaults. Technical decisions get indexed into a RAG-searchable knowledge base. And when it’s time to write a blog post, I’m not starting from a blank page — I’m starting from months of accumulated context that an AI already understands.

This is the story of that system.


The Series at a Glance

| Part | What It Covers | Key Theme |
| --- | --- | --- |
| Part 1: The Content Pipeline (this page) | How AI conversations become publishable content | Capture everything, curate later |
| Part 2: The Resource Watchdog | Preventing OOM crashes from runaway AI processes | Systems that protect themselves |
| Part 3: The AI Context System | 38 projects, 166K RAG-indexed chunks, tiered routing | Making AI remember across sessions |
| Part 4: Hooks & Automation | Claude Code hooks, git integration, automatic guardrails | Let the machines enforce the rules |
| Part 5: The Vault System | Obsidian + markdown + ai-context as a knowledge platform | Organizing 10,000+ files without losing your mind |

The Problem: Knowledge Decay

Every developer has this experience. You spend 4 hours debugging a bizarre issue. You finally figure it out. You think “I should write this down.” You don’t. Six months later, you hit the same issue and spend another 4 hours.

For me, this was amplified by working across half a dozen projects simultaneously. ArgoBox. Argo OS. Build Swarm. Colorado Legal RAG. EdgeMail. ArgoHarvest. Each one has its own codebase, its own deployment pipeline, its own set of hard-won knowledge about what works and what doesn’t.

The AI sessions made it worse, paradoxically. I was having incredibly productive conversations with Claude — solving complex problems, making architecture decisions, debugging obscure issues — but those conversations disappeared when the session ended. The output (committed code) survived, but the reasoning (why I made those choices) was gone.

Phase 1: Zero-Friction Capture

The first rule of documentation: if it requires effort, it won’t happen. I tried keeping a dev journal. I tried writing session notes after each coding session. I tried Notion, Logseq, paper notebooks. None of it stuck because there was always friction between “finishing the work” and “documenting the work.”

The solution was automation. Capture everything by default, organize later.

Conversation archiving — Every Claude session gets automatically backed up to /mnt/homes/galileo/argo/Vaults/conversation-archive/. Transcripts, timestamps, which model was used, what files were modified. This happens without me doing anything.

Terminal session logging — Key sessions get captured with timestamps. When I’m in a debugging flow, I can reconstruct what happened after the fact.

Git commit history — Every commit is a breadcrumb. The diff tells you what changed. Combined with the conversation archive, you can figure out why.

Session notes — At the end of a significant work session, I (or the AI) write session notes to the project vault. These are structured: what was done, what worked, what didn’t, what’s next.

The key insight: capture is cheap. Curation is expensive. So capture everything and spend your energy deciding what’s worth surfacing later.
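To make "capture is cheap" concrete, here is a minimal sketch of a post-session archiving hook. Only the archive path comes from the setup described above; the function name, metadata fields, and hook wiring are my assumptions, not the actual implementation.

```python
import json
from datetime import datetime, timezone
from pathlib import Path

# Archive location from the pipeline described above; adjust to your layout.
ARCHIVE_DIR = Path("/mnt/homes/galileo/argo/Vaults/conversation-archive")

def archive_session(transcript: str, model: str, files_modified: list[str],
                    archive_dir: Path = ARCHIVE_DIR) -> Path:
    """Write one session's transcript plus metadata to the archive.

    Intended to run from a post-session hook, so capture costs nothing.
    """
    stamp = datetime.now(timezone.utc).strftime("%Y-%m-%dT%H%M%SZ")
    session_dir = archive_dir / stamp
    session_dir.mkdir(parents=True, exist_ok=True)
    (session_dir / "transcript.md").write_text(transcript, encoding="utf-8")
    (session_dir / "meta.json").write_text(json.dumps({
        "timestamp": stamp,
        "model": model,
        "files_modified": files_modified,
    }, indent=2), encoding="utf-8")
    return session_dir
```

Everything lands in a timestamped directory, so later curation is just reading files in order.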

Phase 2: The Obsidian Layer

All this captured content lives in Obsidian vaults — markdown files organized by project. Each vault has a consistent structure:

```
project-vault/
├── sessions/          # Daily work logs
│   └── 2026-03-08/
├── technical/         # Architecture docs
├── knowledge-base/    # Patterns, decisions, gotchas
├── procedures/        # Step-by-step runbooks
├── Blog/              # Draft blog content
└── journal/           # Freeform notes
```

Obsidian gives me graph visualization (which is pretty but honestly not that useful), full-text search (which is useful), and a familiar markdown editing experience. But the real value is the file system structure. Every file has a predictable path. Every project follows the same pattern. When I need to find something, I know where to look.
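Because every vault follows the same pattern, spinning up a new project vault can be a one-liner. A minimal sketch (the `scaffold_vault` helper is hypothetical; the directory names come from the tree above):

```python
from pathlib import Path

# Standard vault layout, matching the directory tree above.
VAULT_DIRS = ["sessions", "technical", "knowledge-base",
              "procedures", "Blog", "journal"]

def scaffold_vault(root: Path) -> list[Path]:
    """Create the standard vault skeleton so every project looks the same."""
    created = []
    for name in VAULT_DIRS:
        d = root / name
        d.mkdir(parents=True, exist_ok=True)
        created.append(d)
    return created
```

The payoff is predictability: any tool (or AI) that understands one vault understands all of them.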

Phase 3: AI-Assisted Drafting

Here’s where it gets interesting. When I sit down to write a blog post, I don’t start from scratch. I start a conversation with Claude and say: “I want to write about [topic]. Here are the relevant session notes, technical docs, and conversation history.”

The AI already has the context. It’s read the session notes. It knows what decisions were made and why. It knows what went wrong and how I fixed it. Writing becomes editing — shaping raw context into a narrative instead of trying to recall details from memory.
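A crude version of that context-gathering step can be sketched as a plain full-text filter over the vault. The real system uses RAG indexing; `gather_context` and its signature are my invention for illustration:

```python
from pathlib import Path

def gather_context(vault: Path, topic: str,
                   subdirs=("sessions", "technical", "knowledge-base")) -> str:
    """Collect markdown notes mentioning `topic` into one prompt-ready block."""
    chunks = []
    for sub in subdirs:
        folder = vault / sub
        if not folder.is_dir():
            continue
        for path in sorted(folder.rglob("*.md")):
            text = path.read_text(encoding="utf-8")
            if topic.lower() in text.lower():
                # Label each chunk with its vault-relative path for provenance.
                chunks.append(f"## {path.relative_to(vault)}\n\n{text}")
    return "\n\n".join(chunks)
```

Paste the result into the drafting conversation and the AI starts with the same notes you would otherwise dig up by hand.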

The voice scoring system (MyVoice Studio) makes sure the output sounds like me, not like a corporate blog. It scores text against my writing profile — sentence length, vocabulary patterns, first-person usage, the ratio of technical content to personal commentary. If a draft sounds too formal or too generic, the score drops and I know to rewrite it.
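MyVoice Studio's internals aren't shown here, but the kind of metric it uses can be sketched in a few lines. Everything below (function name, profile fields, weighting) is illustrative, not the actual scoring code:

```python
import re

def voice_score(text: str, profile: dict) -> float:
    """Score text against a simple writing profile (0..1, higher = closer).

    Compares average sentence length and first-person usage to targets.
    """
    sentences = [s for s in re.split(r"[.!?]+\s*", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    if not sentences or not words:
        return 0.0
    avg_len = len(words) / len(sentences)
    first_person = sum(w.lower() in {"i", "me", "my", "we"} for w in words) / len(words)
    # Penalize relative distance from each target metric, clamped to [0, 1].
    len_score = max(0.0, 1.0 - abs(avg_len - profile["avg_sentence_len"])
                    / profile["avg_sentence_len"])
    fp_score = max(0.0, 1.0 - abs(first_person - profile["first_person_ratio"])
                   / max(profile["first_person_ratio"], 1e-9))
    return round(0.5 * len_score + 0.5 * fp_score, 3)
```

Long, passive, third-person prose scores low against a profile built from chatty first-person writing, which is exactly the "this sounds like a corporate blog" signal.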

What This Looks Like in Practice

Here’s a real example. I spent a session debugging a Build Swarm dependency resolution issue. During that session:

  1. Automatically captured: Full conversation transcript, terminal output, git diffs
  2. Session notes written: What the bug was, how I found it, how I fixed it, what it taught me about topological sorting
  3. Knowledge base updated: New entry about Kahn’s algorithm edge cases in Portage dependency graphs
  4. Blog draft created: A rough outline for a “Lessons in Distributed Build Systems” post, pre-populated with code samples and debugging narrative
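For reference, the topological sorting mentioned in step 2 can be sketched as textbook Kahn's algorithm. This is not the actual Build Swarm resolver; one classic edge case is a dependency cycle, which the plain algorithm silently truncates unless you check for it explicitly:

```python
from collections import deque

def topo_sort(deps: dict[str, set[str]]) -> list[str]:
    """Kahn's algorithm: order packages so dependencies build first.

    `deps` maps package -> set of packages it depends on.
    Raises ValueError if the graph contains a cycle.
    """
    # In-degree = number of unbuilt dependencies for each package.
    indegree = {pkg: len(d) for pkg, d in deps.items()}
    dependents: dict[str, list[str]] = {}
    for pkg, d in deps.items():
        for dep in d:
            dependents.setdefault(dep, []).append(pkg)
            indegree.setdefault(dep, 0)
    ready = deque(sorted(p for p, n in indegree.items() if n == 0))
    order = []
    while ready:
        pkg = ready.popleft()
        order.append(pkg)
        for dependent in dependents.get(pkg, []):
            indegree[dependent] -= 1
            if indegree[dependent] == 0:
                ready.append(dependent)
    if len(order) != len(indegree):
        # Leftover nodes all sit on a cycle: nothing can reach in-degree 0.
        raise ValueError("dependency cycle detected")
    return order
```

The cycle check is the part that's easy to forget: without it, a cyclic graph just produces a shorter list and the failure shows up far downstream.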

When I came back the next week to write the actual blog post, I had everything I needed. The post went from “idea” to “published” in about 2 hours instead of the usual 6-8.

That’s the pipeline. Capture → organize → draft → publish. Each step builds on the last. Nothing gets lost.


Next up: Part 2 — The Resource Watchdog — How a daemon prevents runaway AI processes from OOM-killing your workstation.