The Logs

"Document Everything."
Dev Logs, Personal Ramblings, and the raw reality of the lab.

journal_tree.exe
$ pstree -p journal
journal
├── 2026
│   ├── 02
│   └── 01
├── 2025
│   ├── 12
│   ├── 11
│   ├── 09
│   ├── 08
│   ├── 06
│   ├── 04
│   └── 03
├── 2024
│   ├── 11
│   └── 05
└── 2023
    ├── 12
    ├── 09
    └── 08

The Dashboard That Lied To Me

I built a fancy NOC dashboard with storage metrics and network stats. Then I noticed the numbers changed every time I refreshed. Math.random() had been running production monitoring for who knows how long.

The Orchestrator That Rebooted My Workstation

I built an auto-healing build swarm. Then it SSH'd into my development machine and ran 'reboot'. The container reported the wrong IP and the orchestrator executed its cleanup protocol. On my workstation. At 8:30 PM.

Three Rsync Bugs In One Day

A 7-hour debugging session uncovered three separate rsync bugs: missing timeouts, an invalid SSH flag, and uploading 3GB instead of one package. Also built a CLI tool because I was tired of SSHing everywhere.
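For the curious, the timeout half of that fix looks roughly like this; the host, path, and package name are placeholders, not the real swarm layout:

# --timeout guards against a transfer that stalls forever; ConnectTimeout covers
# the SSH handshake. And sync the one package, not the whole packages/ tree.
$ rsync -av --timeout=30 -e "ssh -o ConnectTimeout=10" \
      packages/app-1.2.3.tbz2 builder@drone-01:/var/cache/binpkgs/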

The Drone That Wasn't A Drone

I SSH'd into my drone to debug why it was offline. Got a Zorin OS login prompt instead of Gentoo. Spent 20 minutes troubleshooting the wrong machine because two devices had the same IP address.

The Printer That Forgot Its Subnet

The printer was on the same physical network. CUPS could see it via mDNS. But packets weren't going anywhere. Turns out a power surge knocked it back to a static IP from an old network config — 192.168.0.104 on a 10.0.0.x network.
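The fastest way to see that kind of mismatch is to ask the kernel how it would reach the printer's address (the IP is the one from the story; the rest of the network is assumed):

$ ip route get 192.168.0.104
# If the answer says "via <your default gateway>" instead of an on-link route,
# the printer's static address isn't on this subnet at all.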

The Drone That Rebooted the Wrong Server

When the orchestrator tried to reboot a misbehaving build drone, it accidentally rebooted the gateway instead. NAT masquerade, Tailscale routing, and distributed systems debugging at its finest.

The Gateway That Rebooted Itself

The build swarm gateway kept rebooting randomly. Four times in one day. No crashes, no errors — clean shutdowns. Turns out a drone on a different network was reporting the gateway's IP as its own, and the orchestrator was helpfully 'fixing' it.

The Language Server That Froze Everything

System hard-locked during a coding session. The culprit: a language server using 41% CPU and 3.3GB RAM while 'idle', with active connections to Google's cloud.

The Taskbar That Stopped Responding

Plasma frozen. Three plasmashell processes. Three weather widgets. One evening of debugging that ended with a better reset script.

The Tunnel With Two Heads

My websites were flipping between working and broken at random. Same URL, same moment, different results. Turns out I had two cloudflared instances fighting over the same tunnel — and Cloudflare was helpfully load-balancing between them.
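If you ever suspect the same thing, the check is quick; a sketch, assuming cloudflared is running both as a systemd unit and as a forgotten manual process:

$ pgrep -af cloudflared            # two PIDs on the same tunnel = two connectors registered
$ systemctl status cloudflared --no-pager
# Kill the stray copy and Cloudflare stops spreading requests across both.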

The Race Condition That Ate My Binaries

Drones were deleting their packages before the orchestrator could validate them. Also: a Docker container crash-looping because it was looking for SSH keys that don't exist in the new architecture.

The Thunderbolt That Killed DNS

Plugged in a Thunderbolt ethernet adapter for faster NAS transfers. Lost DNS. Lost SSH. Spent hours finding three different root causes, including a shell script syntax error and asymmetric routing through Tailscale.

The AudioBooks Folder That Ate Itself Three Times

When you lose your Claude context mid-cleanup and discover your Unraid server has 3.5TB of audiobooks nested three folders deep with 3,582 empty placeholder folders for good measure.
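Counting (and then clearing) the placeholders is the easy part; a sketch, with the Unraid share path assumed:

$ find /mnt/user/audiobooks -type d -empty | wc -l    # how many placeholders are left
$ find /mnt/user/audiobooks -type d -empty -delete    # removes only directories that are empty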

The Kernel That Panicked Every Three Minutes

Server rebooting every 1-3 minutes. Couldn't stay up long enough to investigate. Turned out K3s pods were crash-looping so hard they destabilized the kernel, and Ubuntu's default panic setting auto-rebooted before I could catch it.
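The knob in question is the kernel.panic sysctl: a nonzero value is how many seconds the kernel waits after a panic before rebooting, and 0 leaves the machine frozen at the panic so you can actually read it. A sketch:

$ sysctl kernel.panic                    # current reboot-after-panic delay, in seconds
$ sudo sysctl -w kernel.panic=0          # stay at the panic screen instead of rebooting
$ echo 'kernel.panic = 0' | sudo tee /etc/sysctl.d/99-keep-panics.conf   # persist it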

The Reboot Loop That Blamed the Wrong Code

When Alpha-Centauri started rebooting every 90 seconds, I was convinced my build swarm code had achieved sentience and was trying to escape. Spoiler: it was innocent.

The 1.36 Million Segment Stream

When someone clicks play on an audiobook and the server tries to transcode 2,272 hours of audio in one stream, you know something's wrong. Also discovered 3,584 empty folders and a deleted log file eating 112MB of RAM.

The RAID That Almost Ate Christmas

Four HGST drives. One dying Synology. USB docks that kept disconnecting. The week between Christmas and New Year's became a crash course in mdadm, LVM, Btrfs, and why you should never trust USB for data recovery.

The Bluetooth Stutter That Wasn't My Fault

Galaxy Buds cutting out every 30 seconds. Turns out Linux was using a Bluetooth mode that Samsung earbuds hate. One kernel parameter fixed it.

From 40 Minutes to 5

Traditional Gentoo VM deployment: 6-10 hours. My workflow: 5 minutes to a bootable system. The secret is Btrfs snapshots, binary packages, and accepting that BIOS boot and Btrfs don't mix.

The Plex Stutter That Ruined Movie Night

Plex buffering every few seconds. Thought it was bandwidth. Thought it was transcoding. Turned out to be a one-second cache timeout buried in CIFS mount options. Changed three settings, streaming went from unwatchable to smooth.
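For context, CIFS caches file attributes for only one second by default (actimeo=1), which matches the timeout described above. The three settings that were actually changed aren't named here, so treat this mount line as illustrative rather than the real fix; the share name and values are invented:

$ sudo mount -t cifs //nas/media /mnt/media \
      -o credentials=/etc/samba/creds,vers=3.0,actimeo=60,rsize=4194304
# Longer attribute caching and a bigger read size for sequential streaming.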

The RAID That Refused to Rebuild

Synology NAS RAID degraded. One drive failed. The replacement wouldn't integrate. 86 messages across two days to figure out why - and it wasn't the drive's fault.

The 244-Message Waybar

Customizing Waybar for Hyprland. Modules, colors, spacing, hover effects - 244 messages to get a status bar that looks exactly right. Sometimes the details matter more than the function.

The Day I Learned OpenRC Isn't Systemd

KDE Plasma crashed and wouldn't give me a terminal. After the reboot, I discovered why: half the services I needed weren't running because OpenRC doesn't auto-start things the way systemd does. Also, /run/ is empty after every reboot and nobody told me.
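For anyone hitting the same wall: OpenRC's version of "enable this at boot" is rc-update, and /run really is recreated empty every boot, so services have to rebuild whatever they need there when they start. A sketch, with sshd standing in for whichever service went missing:

$ rc-update add sshd default     # start it in the default runlevel from now on
$ rc-service sshd start          # and bring it up right now
$ rc-update show default         # what actually launches at boot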

The TTY That Saved Everything

Gentoo wouldn't boot to GUI. KDE Plasma broken. SDDM wouldn't start. 322 messages, multiple recovery attempts, and the realization that Ctrl+Alt+F2 is the most important shortcut in Linux.

The 812GB Hiding in Plain Sight

I spent hours trying to migrate a VM between hypervisors. Kernel panics, graphics corruption, UEFI nightmares. Then I ran fdisk and discovered 812GB of unallocated space on my main drive. Sometimes the solution isn't fixing the problem — it's finding a better problem.

The GRUB That Forgot Everything

Deleted a corrupted GRUB. Now /etc/grub.d/ was empty. os-prober couldn't see Windows or CachyOS. NVIDIA parameters were wrong. Found the working config in a backup file I didn't know existed.

The VPN That Only Worked the Second Time

LibreWolf through a VPN namespace. Worked perfectly — on the second launch. First try always failed. Turned out the fix that was supposed to help made everything worse.

The 88 Reboots Mystery

88 reboots in 3 weeks. Every login was a coin flip. Turned out PCIe Gen4 and my aging motherboard were having a disagreement about timing. Fixed it, then immediately broke my right monitor.

The Mounts That Wouldn't Come Back

Lost network connectivity. NAS mounts died. Network came back. Mounts didn't. Device busy, no such file, stale handles everywhere. Found duplicates in fstab and an ancient SMB version.

The Phone That Kept Redirecting

My daughter's phone was redirecting speedtest.net to bbump-me-push.com. Then to Etsy affiliate links. Antivirus found nothing. Play Protect found nothing. Turned out to be a game that modified the APN settings.

The Clock That Forgot Its Timezone

Installed Linux next to Windows. Now Windows thinks it's seven hours earlier. Every time. Turns out Windows and Linux disagree on what 'time' even means at the hardware level.
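Specifically: Linux assumes the hardware RTC stores UTC, Windows assumes it stores local time. Either side can be told to agree with the other; this is the Linux-side version (the Windows-side fix is the RealTimeIsUniversal registry value):

$ timedatectl                                              # check the "RTC in local TZ" line
$ sudo timedatectl set-local-rtc 1 --adjust-system-clock   # tell Linux the RTC holds local time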

The Hour That Kept Shifting

Dual-booted EndeavourOS next to Windows. Now my clock is wrong. Mountain Time, but off by an hour. Turns out Windows and Linux disagree about what time the hardware clock should store.

The BIOS That Wouldn't Show

ASUS board with a 4790K. Wireless keyboard. Four monitors connected to a 4070 Ti. I was mashing F2 and Delete for ten minutes. Turns out I was probably getting into BIOS the whole time.

The VLAN for the Surveillance Phone

Work phone with MDM. Wanted to see what it was sending home. Set up a quarantine VLAN on the MikroTik, plugged in a WAP. Phone kept getting the wrong IP. Turned out I was connecting to the wrong SSID.

The Build That Panicked

Astro build failing on Cloudflare Pages with 'panic: html: bad parser state: originalIM was set twice'. Spent an hour debugging SVG components. The real issue? Using 'latest' for dependencies.

The Obsidian Container That Wouldn't Connect

Obsidian running in a K3s pod via XPRA. Works internally. 502 Bad Gateway externally. The container was alive, the process was running, but something between XPRA and Cloudflare wasn't speaking the same language.

The Namespace That Wouldn't Die

cattle-system and cert-manager stuck in 'Terminating' for 15 days. Force deletes did nothing. JSON patches did nothing. Turns out you can't delete a namespace when the API server still thinks a stale custom resource exists.
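When force deletes and patches go nowhere, the usual levers are finding the aggregated API that no longer answers and, as a last resort, clearing the namespace's finalizers through the finalize subresource. A sketch for cattle-system (needs jq); per the story above, the real fix was the stale resource, not the finalizer hammer:

$ kubectl get apiservice | grep False      # aggregated APIs whose backing service is gone
$ kubectl get namespace cattle-system -o json \
      | jq '.spec.finalizers = []' \
      | kubectl replace --raw /api/v1/namespaces/cattle-system/finalize -f -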

The Update That Broke Storage Manager

DSM update. SMB reinstall. Now 9 services won't start. 'Storage abnormalities detected.' Even Storage Manager itself was broken. 116TB of data sitting there, accessible but unmanageable.

The Pool That Refused to Import

Fresh Proxmox install over an old one. 'Failed to start Import ZFS pool' on every boot. No pools listed. But there was a pool - it just wouldn't admit it.

The Plex That Couldn't See

Plex on one machine. Media on the NAS. Same network. But the library was empty. The files existed. The shares were mounted. Plex just... couldn't see them.

The 34-Message tmux Install

Installing tmux on a Synology NAS. Should be simple. Except DSM isn't standard Linux, and package managers don't exist. Enter Entware and 34 messages of troubleshooting.

The Vault That Opens Itself

Daily notes should exist whether I'm at the computer or not. A bash script, a cron job, and the obsidian:// URI scheme. Now the vault maintains itself.
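A minimal sketch of that setup, with the vault name and paths invented: a script makes sure today's note exists, then hands it to Obsidian through the URI scheme, and cron fires it every morning.

#!/usr/bin/env bash
# daily-note.sh: create today's daily note if missing, then open it in Obsidian
set -euo pipefail
VAULT="Journal"                          # hypothetical vault name
DIR="$HOME/Vaults/$VAULT/Daily"          # hypothetical vault location
TODAY="$(date +%F)"

mkdir -p "$DIR"
[ -f "$DIR/$TODAY.md" ] || printf '# %s\n\n' "$TODAY" > "$DIR/$TODAY.md"

# Obsidian registers the obsidian:// handler, so xdg-open can pass the note along
xdg-open "obsidian://open?vault=$VAULT&file=Daily/$TODAY.md"

Wired up with a single crontab entry: 0 6 * * * $HOME/bin/daily-note.sh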

The Dataview That Pulled Everything

52 messages to write one Dataview query. Pulling text from specific subheadings, across dated folders, displaying in chronological order. When the query finally worked, it felt like magic.

The 463-Message Traefik Saga

Four days. 463 messages. Docker, Traefik, Cloudflare, pfSense. Everything that could break did break. The network already exists. The image won't pull. The firewall blocks everything. Then around message 450, it all started working.

The 463-Message Saga

I spent 4 days and 463 ChatGPT messages trying to get Docker and Traefik working. Day 3 alone was 238 messages. But when that curl finally returned 200 OK at 11 PM on day 4, I may have scared the neighbors.

The Vault That Needed Boundaries

Work notes, personal journal, letters to my daughter, technical documentation - all in one Obsidian vault. Time to create structure without losing connections.

The 176-Message Obsidian Setup

Setting up Obsidian journaling templates. 176 messages to get daily notes, templater, and dataview working together. The result: a second brain that actually thinks.

The Scope That Could Save You

My employer wanted me to pentest a client from my home IP. Without a signed scope of work. This conversation might have saved my career.

The VNC That Wouldn't Connect

126 messages to get VNC working on Debian. Residual configs from failed attempts, conflicting packages, systemd units that wouldn't die. Sometimes you have to burn it all down and start fresh.

The Honeypots That Lie in Wait

Researching honeypot options for the home lab. Kippo, Cowrie, Dionaea, Honeyd - each one a different trap for a different kind of attacker. The question: which one catches the most interesting flies?

The NAS That Needed a Fence

August 2023. I wanted to access my Synology from work. The question: VPN or expose it to the internet? The answer involved pfSense firewall rules, port restrictions, and learning why 'just forward port 445' is a terrible idea.

The Lab That Started It All

ArgoBox didn't start in 2023. It started around 2011 as a seedbox - ruTorrent, Plex, bare metal scripts. Then ESXi. Then distributed. Then unified. August 2023 was just when I started documenting the journey.