Blog Expansion Roadmap: Using Part 1 Journey + Q&A
Actionable plan for expanding thin blog posts using the Argo OS Part 1 journey document and ChatGPT Q&A answers
"Document Everything."
Dev Logs, Personal Ramblings, and the raw reality of the lab.
⚠️ Raw Output
$ tree -d journal
journal
├── 2026
│   ├── 02
│   └── 01
├── 2025
│   ├── 12
│   ├── 11
│   ├── 09
│   ├── 08
│   ├── 06
│   ├── 04
│   └── 03
├── 2024
│   ├── 11
│   └── 05
└── 2023
    ├── 12
    ├── 09
    └── 08
When port 3002 has Grafana instead of Homepage, and nobody remembers the password
I built a fancy NOC dashboard with storage metrics and network stats. Then I noticed the numbers changed every time I refreshed. Math.random() had been running production monitoring for who knows how long.
I built an auto-healing build swarm. Then it SSH'd into my development machine and ran 'reboot'. The container reported the wrong IP and the orchestrator executed its cleanup protocol. On my workstation. At 8:30 PM.
A 7-hour debugging session uncovered three separate rsync bugs: missing timeouts, an invalid SSH flag, and uploading 3GB instead of one package. Also built a CLI tool because I was tired of SSHing everywhere.
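For reference, and with a placeholder host and paths since the teaser doesn't show the real ones, the two timeout fixes look roughly like this:

$ rsync -a --timeout=60 -e "ssh -o ConnectTimeout=10" ./pkgdir/ drone:/var/cache/binpkgs/
# --timeout aborts a stalled transfer instead of hanging forever;
# ConnectTimeout bounds the SSH handshake. Syncing only the package
# directory is what keeps the upload at one package instead of 3GB.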
I SSH'd into my drone to debug why it was offline. Got a Zorin OS login prompt instead of Gentoo. Spent 20 minutes troubleshooting the wrong machine because two devices had the same IP address.
The printer was on the same physical network. CUPS could see it via mDNS. But packets weren't going anywhere. Turns out a power surge knocked it back to a static IP from an old network config — 192.168.0.104 on a 10.0.0.x network.
Power surge aftermath turns into a deep dive through WiFi networks, mystery TP-Links, and subnet shenanigans to resurrect mom's printer
Migrated session log from session-20260128-Sirius-Station-setup-and-fixes.md
When the orchestrator tried to reboot a misbehaving build drone, it accidentally rebooted the gateway instead. NAT masquerade, Tailscale routing, and distributed systems debugging at its finest.
The build swarm gateway kept rebooting randomly. Four times in one day. No crashes, no errors — clean shutdowns. Turns out a drone on a different network was reporting the gateway's IP as its own, and the orchestrator was helpfully 'fixing' it.
System hard-locked during a coding session. The culprit: a language server using 41% CPU and 3.3GB RAM while 'idle', with active connections to Google's cloud.
Plasma frozen. Three plasmashell processes. Three weather widgets. One evening of debugging that ended with a better reset script.
My websites were flipping between working and broken at random. Same URL, same moment, different results. Turns out I had two cloudflared instances fighting over the same tunnel — and Cloudflare was helpfully load-balancing between them.
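If you hit the same symptom, cloudflared can show how many connectors are registered against a tunnel (tunnel name is a placeholder here):

$ cloudflared tunnel info my-tunnel
# Two connectors listed for one tunnel means two processes are serving it,
# and Cloudflare will balance requests between them.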
Drones were deleting their packages before the orchestrator could validate them. Also: a Docker container crash-looping because it was looking for SSH keys that don't exist in the new architecture.
Plugged in a Thunderbolt ethernet adapter for faster NAS transfers. Lost DNS. Lost SSH. Spent hours finding three different root causes, including a shell script syntax error and asymmetric routing through Tailscale.
I plugged in a Thunderbolt ethernet adapter for faster NAS transfers. DNS died, SSH became a ghost, and I learned why you should never trust hot-pluggable networking on Unraid.
When you lose your Claude context mid-cleanup and discover your Unraid server has 3.5TB of audiobooks nested three folders deep with 3,582 empty placeholder folders for good measure.
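The empty-folder half of that cleanup is a one-liner with find (path is a placeholder):

$ find /mnt/user/audiobooks -type d -empty | wc -l
3582
$ find /mnt/user/audiobooks -type d -empty -delete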
Server rebooting every 1-3 minutes. Couldn't stay up long enough to investigate. Turned out K3s pods were crash-looping so hard they destabilized the kernel, and Ubuntu's default panic setting auto-rebooted before I could catch it.
When Alpha-Centauri started rebooting every 90 seconds, I was convinced my build swarm code had achieved sentience and was trying to escape. Spoiler: it was innocent.
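Worth knowing for the auto-reboot part: the panic reboot timer is just a sysctl, so you can check it and clear it before the next crash (the value shown is illustrative):

$ sysctl kernel.panic
kernel.panic = 10
$ sysctl -w kernel.panic=0   # 0 = stay halted at the panic so you can read it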
When someone clicks play on an audiobook and the server tries to transcode 2,272 hours of audio in one stream, you know something's wrong. Also discovered 3,584 empty folders and a deleted log file eating 112MB of RAM.
Four HGST drives. One dying Synology. USB docks that kept disconnecting. The week between Christmas and New Year's became a crash course in mdadm, LVM, Btrfs, and why you should never trust USB for data recovery.
Galaxy Buds cutting out every 30 seconds. Turns out Linux was using a Bluetooth mode that Samsung earbuds hate. One kernel parameter fixed it.
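The teaser doesn't name the parameter, so treat this as an assumption about which one: btusb's autosuspend toggle is a common culprit, and module parameters like it are set via modprobe.d:

$ cat /etc/modprobe.d/bluetooth.conf
options btusb enable_autosuspend=0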
Traditional Gentoo VM deployment: 6-10 hours. My workflow: 5 minutes to a bootable system. The secret is Btrfs snapshots, binary packages, and accepting that BIOS boot and Btrfs don't mix.
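A sketch of the snapshot step, with placeholder paths for the golden image and the new VM:

$ btrfs subvolume snapshot /mnt/golden/gentoo-base /mnt/vms/new-vm
# Copy-on-write: the "copy" is instant regardless of size.

From there, emerge --usepkgonly (-K) installs everything from prebuilt binary packages instead of compiling.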
Plex buffering every few seconds. Thought it was bandwidth. Thought it was transcoding. Turned out to be a one-second cache timeout buried in CIFS mount options. Changed three settings, streaming went from unwatchable to smooth.
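For context: CIFS caches file attributes for one second by default, so streaming re-fetches metadata constantly. The three exact settings aren't listed in the teaser; a hedged example of the kind of mount line that helps, with placeholder share and paths:

$ grep media /etc/fstab
//nas/media  /mnt/media  cifs  credentials=/etc/cifs-creds,actimeo=60,cache=loose,rsize=4194304  0  0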
Synology NAS RAID degraded. One drive failed. The replacement wouldn't integrate. 86 messages across two days to figure out why - and it wasn't the drive's fault.
Customizing Waybar for Hyprland. Modules, colors, spacing, hover effects - 244 messages to get a status bar that looks exactly right. Sometimes the details matter more than the function.
KDE Plasma crashed and wouldn't give me a terminal. After the reboot, I discovered why: half the services I needed weren't running because OpenRC doesn't auto-start things the way systemd does. Also, /run/ is empty after every reboot and nobody told me.
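Two facts that would have helped going in: /run is a tmpfs, so it's empty after every reboot by design, and OpenRC only starts what's been added to a runlevel. Service name below is illustrative:

$ rc-update add dbus default
$ rc-status default   # confirm what will actually start at boot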
Gentoo wouldn't boot to GUI. KDE Plasma broken. SDDM wouldn't start. 322 messages, multiple recovery attempts, and the realization that Ctrl+Alt+F2 is the most important shortcut in Linux.
I spent hours trying to migrate a VM between hypervisors. Kernel panics, graphics corruption, UEFI nightmares. Then I ran fdisk and discovered 812GB of unallocated space on my main drive. Sometimes the solution isn't fixing the problem — it's finding a better problem.
Deleted a corrupted GRUB. Now /etc/grub.d/ was empty. os-prober couldn't see Windows or CachyOS. NVIDIA parameters were wrong. Found the working config in a backup file I didn't know existed.
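The standard rebuild, for reference; the EFI directory varies by distro, and newer GRUB won't run os-prober unless you opt in:

$ grub-install --target=x86_64-efi --efi-directory=/boot/efi
$ grub-mkconfig -o /boot/grub/grub.cfg
# GRUB 2.06+: set GRUB_DISABLE_OS_PROBER=false in /etc/default/grub,
# or os-prober silently skips Windows and the other distros.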
LibreWolf through a VPN namespace. Worked perfectly — on the second launch. First try always failed. Turned out the fix that was supposed to help made everything worse.
88 reboots in 3 weeks. Every login was a coin flip. Turned out PCIe Gen4 and my aging motherboard were having a disagreement about timing. Fixed it, then immediately broke my right monitor.
Lost network connectivity. NAS mounts died. Network came back. Mounts didn't. Device busy, no such file, stale handles everywhere. Found duplicates in fstab and an ancient SMB version.
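Both culprits show up in one grep; share names and line numbers here are placeholders:

$ grep -n cifs /etc/fstab
12://nas/share  /mnt/nas  cifs  vers=1.0,credentials=/etc/cifs-creds  0  0
23://nas/share  /mnt/nas  cifs  vers=1.0,credentials=/etc/cifs-creds  0  0
# Two entries for the same mountpoint, plus SMB 1.0:
# delete one line and move to vers=3.1.1.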
My daughter's phone was redirecting speedtest.net to bbump-me-push.com. Then to Etsy affiliate links. Antivirus found nothing. Play Protect found nothing. Turned out to be a game that modified the APN settings.
Installed Linux next to Windows. Now Windows thinks it's seven hours earlier. Every time. Turns out Windows and Linux disagree on what 'time' even means at the hardware level.
Dual-booted EndeavourOS next to Windows. Now my clock is wrong. Mountain Time, but off by an hour. Turns out Windows and Linux disagree about what time the hardware clock should store.
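Concretely: Linux assumes the hardware clock stores UTC, Windows assumes local time. One side has to give in:

$ timedatectl set-local-rtc 1
# Tells Linux to store local time in the RTC; systemd will warn that this
# breaks DST handling. The cleaner fix is the RealTimeIsUniversal registry
# value on the Windows side, so the RTC stays in UTC.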
ASUS board with a 4790K. Wireless keyboard. Four monitors connected to a 4070 Ti. I was mashing F2 and Delete for ten minutes. Turns out I was probably getting into BIOS the whole time.
Work phone with MDM. Wanted to see what it was sending home. Set up a quarantine VLAN on the MikroTik, plugged in a WAP. Phone kept getting the wrong IP. Turned out I was connecting to the wrong SSID.
Astro build failing on Cloudflare Pages with 'panic: html: bad parser state: originalIM was set twice'. Spent an hour debugging SVG components. The real issue? Using 'latest' for dependencies.
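The fix is boring: pin versions so local and CI resolve the same dependency tree. The range below is illustrative:

$ cat package.json
{
  "dependencies": {
    "astro": "^4.0.0"
  }
}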
Obsidian running in a K3s pod via XPRA. Works internally. 502 Bad Gateway externally. The container was alive, the process was running, but something between XPRA and Cloudflare wasn't speaking the same language.
cattle-system and cert-manager stuck in 'Terminating' for 15 days. Force deletes did nothing. JSON patches did nothing. Turns out you can't delete a namespace when the API server still thinks a stale custom resource exists.
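The standard escape hatch, for the record (jq assumed installed): strip the namespace's finalizers through the finalize subresource:

$ kubectl get ns cattle-system -o json \
    | jq '.spec.finalizers = []' \
    | kubectl replace --raw /api/v1/namespaces/cattle-system/finalize -f -
# If it snaps right back, some stale custom resource still carries its own
# finalizer and has to be cleaned up first - exactly the trap above.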
DSM update. SMB reinstall. Now 9 services won't start. 'Storage abnormalities detected.' Even Storage Manager itself was broken. 116TB of data sitting there, accessible but unmanageable.
Fresh Proxmox install over an old one. 'Failed to start Import ZFS pool' on every boot. No pools listed. But there was a pool - it just wouldn't admit it.
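For reference, pool name being a placeholder:

$ zpool import         # bare: scans devices, lists pools available to import
$ zpool import -f tank # -f because the pool was last used by the old install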
Plex on one machine. Media on the NAS. Same network. But the library was empty. The files existed. The shares were mounted. Plex just... couldn't see them.
Installing tmux on a Synology NAS. Should be simple. Except DSM isn't standard Linux, and package managers don't exist. Enter Entware and 34 messages of troubleshooting.
Daily notes should exist whether I'm at the computer or not. A bash script, a cron job, and the obsidian:// URI scheme. Now the vault maintains itself.
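A sketch of that setup, with vault name, paths, and schedule all placeholders:

$ crontab -l
0 5 * * * $HOME/bin/daily-note.sh

$ cat ~/bin/daily-note.sh
#!/usr/bin/env bash
# Create today's note from the template if it doesn't exist yet.
note="$HOME/vault/journal/$(date +%F).md"
[ -f "$note" ] || cp "$HOME/vault/templates/daily.md" "$note"
# Only needed if you also want it opened on a machine running Obsidian:
# xdg-open "obsidian://open?vault=vault&file=journal/$(date +%F)"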
52 messages to write one Dataview query. Pulling text from specific subheadings, across dated folders, displaying in chronological order. When the query finally worked, it felt like magic.
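The finished query isn't reproduced in this teaser, so here's a rough DQL-shaped sketch with assumed folder and field names; the chronological part is the easy bit:

TABLE summary AS "Entry", file.day AS "Date"
FROM "journal"
WHERE file.day
SORT file.day ASC

Pulling text out of specific subheadings goes past plain DQL into DataviewJS, which is presumably where most of those 52 messages went.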
Four days. 463 messages. Docker, Traefik, Cloudflare, pfSense. Everything that could break did break. The network already exists. The image won't pull. The firewall blocks everything. Then around message 450, it all started working.
I spent 4 days and 463 ChatGPT messages trying to get Docker and Traefik working. Day 3 alone was 238 messages. But when that curl finally returned 200 OK at 11 PM on day 4, I may have scared the neighbors.
Work notes, personal journal, letters to my daughter, technical documentation - all in one Obsidian vault. Time to create structure without losing connections.
Setting up Obsidian journaling templates. 176 messages to get daily notes, templater, and dataview working together. The result: a second brain that actually thinks.
My employer wanted me to pentest a client from my home IP. Without a signed scope of work. This conversation might have saved my career.
126 messages to get VNC working on Debian. Residual configs from failed attempts, conflicting packages, systemd units that wouldn't die. Sometimes you have to burn it all down and start fresh.
Researching honeypot options for the home lab. Kippo, Cowrie, Dionaea, Honeyd - each one a different trap for a different kind of attacker. The question: which one catches the most interesting flies?
August 2023. I wanted to access my Synology from work. The question: VPN or expose it to the internet? The answer involved pfSense firewall rules, port restrictions, and learning why 'just forward port 445' is a terrible idea.
ArgoBox didn't start in 2023. It started around 2011 as a seedbox - ruTorrent, Plex, bare metal scripts. Then ESXi. Then distributed. Then unified. August 2023 was just when I started documenting the journey.