36 Services Running
~61GB Estimated RAM
75% Would Choose Again
🌐

Networking

4 services

Traefik

Would pick again

Replaces: Nginx reverse proxy

Host Altair-Link (Docker)
Resources 2 CPU / 512MB RAM / 2GB disk
Uptime 99.9% — it just works. Boring but reliable.
Pain points

Label syntax will haunt your dreams. One wrong backtick and your service is invisible. I still reference my own blog post every time I add a new route.

The verdict

The automatic Let's Encrypt certs alone are worth the initial pain. Once it's running, you forget it exists.
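For illustration, here's roughly what that label dance looks like for a hypothetical `whoami` container — the service name, domain, entrypoint, and certresolver names below are placeholders, not my real config:

```shell
# Hypothetical service routed through Traefik. Note the backticks in the
# Host() rule -- single-quote those labels or the shell will eat them.
docker run -d --name whoami --network proxy \
  --label 'traefik.enable=true' \
  --label 'traefik.http.routers.whoami.rule=Host(`whoami.example.com`)' \
  --label 'traefik.http.routers.whoami.entrypoints=websecure' \
  --label 'traefik.http.routers.whoami.tls.certresolver=letsencrypt' \
  --label 'traefik.http.services.whoami.loadbalancer.server.port=80' \
  traefik/whoami
```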

Tailscale

Would pick again

Replaces: OpenVPN

Host Every node (system service)
Resources Negligible — ~50MB RAM per node
Uptime 99.99% — I genuinely cannot remember the last time it failed.
Pain points

Subnet routing took me an afternoon to wrap my head around. The ACL syntax is its own language. But honestly? Minor complaints.

The verdict

38ms to dad's house, like it's on the same LAN. This replaced 16 months of port forwarding hell with one command.
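Subnet routing, once it finally clicked, boils down to two commands — the subnet below is a placeholder, and Linux nodes also need IP forwarding enabled plus route approval in the admin console:

```shell
# On the node that should expose its LAN to the tailnet:
sudo tailscale up --advertise-routes=192.168.1.0/24

# On any client that should use that route:
sudo tailscale up --accept-routes
```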

Pi-hole

Would pick again

Replaces: Router DNS

Host Izar-Host (Proxmox LXC)
Resources 1 CPU / 256MB RAM / 4GB disk
Uptime 99.5% — went down once when I forgot I was SSH'd into the wrong container and ran apt upgrade.
Pain points

The gravity database gets cranky after large list imports. And my daughter complains roughly once a month that 'the internet is broken' because Pi-hole blocked some Roblox telemetry domain.

The verdict

Blocking 30% of DNS queries before they leave the network is free performance. The dashboard is genuinely fun to watch.
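When the monthly "the internet is broken" report comes in, the fix is one allowlist command — the domain here is a placeholder, and note the CLI changed between versions:

```shell
# Pi-hole v5 and earlier:
pihole -w blocked-telemetry.example.com

# Pi-hole v6 renamed the subcommand:
pihole allow blocked-telemetry.example.com
```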

Cloudflare Tunnel

Would pick again

Replaces: Port forwarding

Host Altair-Link (Docker)
Resources 1 CPU / 128MB RAM / minimal disk
Uptime 99.8% — Cloudflare's problem, not mine. That's the whole point.
Pain points

The WARP client on mobile sometimes forgets it exists. Zero Trust policies can be a maze if you go deep. But for basic tunneling, it's shockingly simple.

The verdict

No open ports on my firewall. No dynamic DNS hacks. My public services just work through the tunnel. Should have done this years ago.
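The basic tunnel setup is short enough to show — the tunnel name and hostname are placeholders, and in practice the last step runs as a systemd service or container rather than in a foreground shell:

```shell
cloudflared tunnel login                    # authorize against your zone
cloudflared tunnel create homelab           # creates tunnel + credentials file
cloudflared tunnel route dns homelab app.example.com
cloudflared tunnel run homelab              # normally via systemd or Docker
```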

🎬

Media

5 services

Plex

Maybe

Replaces: Netflix for personal media

Host Altair-Link (Docker)
Resources 4 CPU / 4GB RAM / 50GB disk (metadata)
Uptime 99.2% — Docker updates have killed it twice. My fault for using :latest like a maniac.
Pain points

Transcoding hammers the CPU when someone streams from outside the network. Plex Pass is basically mandatory. The 'free' tier is a lie.

The verdict

It works and the apps are polished. But the company keeps adding features nobody asked for while ignoring bugs. Jellyfin is getting closer every year.

Sonarr

Would pick again

Replaces: Manual TV downloads

Host Altair-Link (Docker)
Resources 1 CPU / 512MB RAM / 5GB disk
Uptime 99.7% — rock solid once you stop messing with quality profiles.
Pain points

Quality profiles are an art form. Spent a full Saturday dialing in the right combination of preferred words and cutoff scores. The v4 migration was... an experience.

The verdict

Set it and forget it. New episodes just appear. My daughter thinks I pay for every streaming service. I let her believe that.

Radarr

Would pick again

Replaces: Manual movie downloads

Host Altair-Link (Docker)
Resources 1 CPU / 512MB RAM / 5GB disk
Uptime 99.7% — same engine as Sonarr, same reliability.
Pain points

The custom format system is powerful but the learning curve is steep. I still don't fully understand why it sometimes grabs a 40GB remux when I asked for 1080p.

The verdict

Same reason as Sonarr. Automation that actually works. Add a movie, walk away, it shows up in Plex.

Prowlarr

Would pick again

Replaces: Jackett

Host Altair-Link (Docker)
Resources 1 CPU / 256MB RAM / 2GB disk
Uptime 99.6% — occasional indexer timeouts but that's not Prowlarr's fault.
Pain points

Honestly not much. It syncs indexers to Sonarr/Radarr and gets out of the way. The biggest pain was migrating from Jackett and re-adding all my indexers.

The verdict

Single pane of glass for all indexers. The Servarr team consolidating this was the right call. Jackett was fine but this is cleaner.

Jellyfin

Would pick again

Replaces: Nothing — runs alongside Plex as backup

Host Altair-Link (Docker)
Resources 2 CPU / 2GB RAM / 30GB disk (metadata)
Uptime 98.5% — it's my playground instance so I break it more often testing new versions.
Pain points

Client apps are hit or miss. The web UI is fine but the TV app on Roku is rough. Hardware transcoding setup took longer than Plex. But it's getting better fast.

The verdict

Free, open source, no corporate nonsense. It's not quite Plex-polished yet but I want it to win. Running both means I'm ready to switch when it is.

📊

Monitoring

4 services

Prometheus

Would pick again

Replaces: Nagios

Host Izar-Host (Proxmox LXC)
Resources 2 CPU / 2GB RAM / 40GB disk (TSDB)
Uptime 99.8% — went down once when the TSDB filled the disk. Classic.
Pain points

PromQL has a learning curve that's more like a learning cliff. Writing good alerting rules requires understanding both the query language and your own infrastructure patterns. And 30 days of retention at 15s scrape intervals eats disk like nobody's business.

The verdict

Industry standard for a reason. The ecosystem of exporters means I can monitor literally anything. Once you learn PromQL it's incredibly powerful.
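The disk math is worth doing before the TSDB fills the disk again. A back-of-the-envelope sketch — the active-series count is an assumption for illustration, and the ~1-2 bytes/sample figure comes from Prometheus's own storage docs:

```shell
active_series=50000     # assumption: approximate active series count
scrape_interval=15      # seconds, as mentioned above
bytes_per_sample=2      # Prometheus docs cite ~1-2 bytes after compression
retention_days=30

samples_per_sec=$((active_series / scrape_interval))
bytes_per_day=$((samples_per_sec * bytes_per_sample * 86400))
total_gb=$((bytes_per_day * retention_days / 1024 / 1024 / 1024))
echo "~${total_gb}GB for ${retention_days}d of retention"
```

With those assumed numbers it works out to roughly 16GB, which is why a 40GB TSDB volume is less roomy than it sounds.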

Grafana

Would pick again

Replaces: Custom dashboards

Host Izar-Host (Proxmox LXC)
Resources 2 CPU / 1GB RAM / 5GB disk
Uptime 99.9% — the most reliable thing in my stack. Ironic for a visualization tool.
Pain points

Dashboard JSON sprawl is real. I have 15 dashboards and probably use 4 regularly. The temptation to add one more panel is constant. Alert fatigue is a you-problem, not a Grafana-problem, but Grafana makes it very easy to create too many alerts.

The verdict

Nothing else comes close for visualization. The dashboard is what I pull up when something feels wrong. It's my homelab's nervous system.

Uptime Kuma

Would pick again

Replaces: UptimeRobot free tier

Host Altair-Link (Docker)
Resources 1 CPU / 256MB RAM / 2GB disk
Uptime 99.9% — watching the watchers. It just works.
Pain points

The notification setup could be smoother. I spent way too long getting Discord webhooks formatted the way I wanted. And the status page customization is limited compared to something like Cachet.

The verdict

Self-hosted, beautiful UI, dead simple setup. The status page is what powers the footer LEDs on this very site. Replaced a paid service with something better.

Alertmanager

Replaces: Email scripts

Host Izar-Host (Proxmox LXC)
Resources 1 CPU / 128MB RAM / 1GB disk
Uptime 99.8% — when this goes down, nobody gets alerted that it went down. Think about that.
Pain points

The routing tree config is YAML nesting hell. Silencing alerts during maintenance requires remembering a specific API call. And grouping/deduplication logic takes a few tries to get right.

The verdict

It pairs naturally with Prometheus. But the config format is painful and I've looked at Grafana Alerting as a potential replacement. Not enough motivation to switch yet.
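For future me, the silence call I can never remember — the matcher, URL, and comment are examples; `amtool` ships alongside Alertmanager and wraps the same API:

```shell
# Silence everything from one host for two hours of maintenance:
amtool silence add instance="izar-host" \
  --alertmanager.url=http://localhost:9093 \
  --duration=2h \
  --comment="proxmox maintenance"

# List active silences the same way:
amtool silence query --alertmanager.url=http://localhost:9093
```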

💻

Development

4 services

Gitea

Maybe

Replaces: GitHub private repos

Host Izar-Host (Proxmox LXC)
Resources 2 CPU / 1GB RAM / 20GB disk
Uptime 99.7% — went down during a Proxmox migration and I didn't notice for two days. That tells you how often I push to it.
Pain points

The UI is fine but GitHub has spoiled me. CI integration is bolted on rather than native. Actions support exists now but it's still catching up. Mirror sync to GitHub occasionally gets confused.

The verdict

Does what I need for private repos and self-hosted Git. But Forgejo is gaining momentum and might be the better bet going forward. The Gitea governance drama was concerning.

ArgoCD

Would pick again

Replaces: Manual kubectl apply

Host Altair-Link (K3s cluster)
Resources 2 CPU / 1.5GB RAM / 10GB disk
Uptime 99.5% — the controller occasionally OOMs on large sync operations. Need to bump the memory limit.
Pain points

Resource hungry for what it does. The web UI is slick but slow on large application sets. Sync waves and hooks have a learning curve. And the RBAC model is its own doctoral thesis.

The verdict

GitOps is the way. Push to a repo, watch the cluster converge. No more 'did I apply that manifest?' anxiety. Worth every byte of RAM it consumes.

Replaces: GitHub Actions

Host Altair-Link (Docker)
Resources 2 CPU / 1GB RAM / 10GB disk
Uptime 98.5% — agents crash when they run out of disk space from cached Docker layers. I should automate that cleanup.
Pain points

Pipeline syntax is simpler than GitHub Actions but the plugin ecosystem is tiny by comparison. Some plugins just don't exist and you end up writing shell scripts. Multi-platform builds require more manual setup.

The verdict

Lightweight and gets the job done for my scale. But if I were starting over, I'd evaluate Forgejo Actions more seriously. The tight Gitea/Forgejo integration would simplify things.

Renovate Bot

Would pick again

Replaces: Manually checking for dependency updates

Host Capella-Outpost
Resources 1 CPU / 512MB RAM / 1GB disk
Uptime Runs on schedule — daily at 3 AM
Pain points

The initial config flood — it opens 50 PRs on day one. Automerge rules need careful tuning or you'll merge a breaking change at 3 AM. Dashboard is minimal.

The verdict

Every Docker image, npm package, and Helm chart version is tracked automatically. The PRs include changelogs. Saved me from running outdated software with known CVEs multiple times.

💾

Storage & Backup

3 services

Synology NAS

Replaces: External drives in a shoebox

Host Cassiel-Silo (Synology DS920+)
Resources 4 CPU / 4GB RAM / 32TB raw (RAID 5)
Uptime 99.5% — went offline during a power outage that outlasted the UPS. Added a bigger UPS after that.
Pain points

DSM updates sometimes break Docker. Synology's container implementation lags behind vanilla Docker. And the fan noise at 3 AM when a scrub runs is a thing. The local unit (Rigel-Silo) died, so now everything depends on the remote unit.

The verdict

Synology makes NAS easy and the app ecosystem is decent. But I'd look harder at building a custom NAS with TrueNAS if I were starting fresh. More control, less vendor lock-in.

Restic

Would pick again

Replaces: rsync scripts held together with cron and hope

Host Multiple hosts (CLI tool)
Resources Varies — spikes to 2GB RAM during large backup operations
Uptime N/A — it's a scheduled job, not a service. But backup success rate is ~98%.
Pain points

The prune operation is SLOW on large repos. Like 'go make coffee and come back' slow. And if you forget to run prune, your backup repo grows forever. Also, the restore workflow isn't as intuitive as backing up.

The verdict

Encrypted, deduplicated, incremental backups to any backend. I push to both local disk and B2 cloud. Restore has saved me twice. That's all you need to know.
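The routine that keeps the repo from growing forever — repo location, password file, and paths below are placeholders; `forget --prune` handles retention and pruning in one (slow) pass:

```shell
# Repo location and password file are placeholders for your setup.
export RESTIC_REPOSITORY=/mnt/backup/restic
export RESTIC_PASSWORD_FILE=/root/.restic-pass

restic backup /srv/docker /etc       # encrypted, deduplicated, incremental
restic forget --keep-daily 7 --keep-weekly 4 --keep-monthly 6 --prune
restic check                         # verify repo integrity occasionally
```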

Longhorn

Replaces: local-path-provisioner

Host Altair-Link (K3s cluster)
Resources 2 CPU / 1GB RAM per node / varies disk
Uptime 97% — this one has caused some late nights. Replica rebuilds after node reboots can be slow and block pod scheduling.
Pain points

Heavy for a homelab. The overhead of running distributed storage on 2-3 nodes is significant. Replica sync can saturate the network. UI is nice but the system is complex under the hood. Upgrades require reading every changelog line carefully.

The verdict

Distributed storage in Kubernetes is genuinely hard. Longhorn makes it approachable but it's still overkill for my scale. OpenEBS or even NFS-backed PVs might have been simpler. But the snapshot and backup features are legitimately useful.

🔒

Security

4 services

Vaultwarden

Would pick again

Replaces: LastPass

Host Izar-Host (Proxmox LXC)
Resources 1 CPU / 256MB RAM / 2GB disk
Uptime 99.9% — this is the one service I cannot afford to lose. It's backed up to three locations.
Pain points

Honestly? Almost none. The admin panel is basic. Emergency access setup requires careful thought. And the nagging fear of 'what if I lose access to my password manager' keeps me backing it up obsessively.

The verdict

Runs on nothing, compatible with all Bitwarden clients, and I control my own data. After the LastPass breach I will never trust a hosted password manager again.

Authelia

Would pick again

Replaces: Basic auth on everything

Host Altair-Link (Docker)
Resources 1 CPU / 256MB RAM / 2GB disk
Uptime 99.7% — once had a session storage issue that locked me out of everything behind it. That was a fun Saturday.
Pain points

The initial config file is intimidating. TOTP/WebAuthn setup per user requires reading the docs carefully. And if Authelia goes down, everything behind it goes down. That's the nature of a central auth gateway but it still stings.

The verdict

SSO for my homelab with MFA. Traefik middleware integration is clean once configured. No more typing credentials into 15 different services. The security posture improvement alone justifies the complexity.

Fail2ban

Would pick again

Replaces: Hope

Host Multiple hosts (system service)
Resources Negligible — ~30MB RAM per instance
Uptime 99.9% — silent guardian. I only notice it when I check the ban list and see hundreds of blocked IPs.
Pain points

Writing custom jail configs for non-standard services is tedious regex work. The filter syntax hasn't aged well. And log rotation can cause fail2ban to lose track of state if you're not careful.

The verdict

The internet is hostile. Even with Cloudflare Tunnel handling public traffic, internal services still need protection. Watching the ban list is a sobering reminder of how many bots are scanning everything, all the time.
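Two commands that take some of the tedium out of the regex work — the jail name and log path assume a stock Debian-style layout:

```shell
# See what a jail has caught (the sshd jail ships with the defaults):
fail2ban-client status sshd

# Dry-run a filter against a real log before trusting it:
fail2ban-regex /var/log/auth.log /etc/fail2ban/filter.d/sshd.conf
```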

CrowdSec

Would pick again

Replaces: Fail2ban (partially)

Host Izar-Host + Altair-Link
Resources 0.5 CPU / 200MB RAM per node
Uptime 99.9%
Pain points

The console dashboard requires a free cloud account which feels ironic for a self-hosted tool. Bouncer configuration per service requires understanding the middleware chain. Community blocklists occasionally false-positive on VPN exit nodes.

The verdict

The crowd intelligence is the killer feature. My server blocks IPs that attacked someone else's server 5 minutes ago. That's not possible with Fail2ban.

🤖

AI/ML

4 services

Ollama

Would pick again

Replaces: Cloud API calls for local tasks

Host Capella-Outpost (Bare metal, RTX 4070 Ti)
Resources 8 CPU / 16GB RAM / 80GB disk (models)
Uptime 95% — goes down when I reboot for kernel updates, which is more often than I'd like on Gentoo.
Pain points

Model management is manual. VRAM is always the bottleneck — running a 13B model means nothing else gets the GPU. Inference speed is great on the 4070 Ti but context windows are still limited by RAM. And every new model release means redownloading gigabytes.

The verdict

Local inference with zero API costs. Private queries stay private. The model ecosystem is exploding and Ollama makes swapping models trivially easy. This is the future of personal computing.
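The day-to-day interface is pleasantly boring — the model tag is an example, and the HTTP API on port 11434 is what everything else in the stack talks to:

```shell
ollama pull llama3:8b                      # fetch a model (gigabytes, as noted)
ollama run llama3:8b "Explain this error"  # one-shot or interactive CLI

# Same model over the local API, usable from any script:
curl http://localhost:11434/api/generate \
  -d '{"model": "llama3:8b", "prompt": "hello", "stream": false}'
```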

Open WebUI

Would pick again

Replaces: Cloud chat UIs for private queries

Host Capella-Outpost (Docker)
Resources 2 CPU / 1GB RAM / 5GB disk
Uptime 95% — tied to Ollama's uptime since it's just a frontend.
Pain points

Updates come fast and occasionally break things. The RAG pipeline setup took a few attempts to get right. Document ingestion is still rough around the edges. But the pace of improvement is wild.

The verdict

A beautiful chat interface for local models. Multi-model switching, conversation history, document upload. It turned Ollama from a CLI tool into something my whole house can use.

ComfyUI

Would pick again

Replaces: Midjourney subscription

Host Izar-Host (GPU)
Resources GPU + 4 CPU / 8GB VRAM / 12GB RAM / 50GB disk (models)
Uptime On-demand — only runs when I need it
Pain points

Model management is a mess. Every workflow needs different checkpoints and you end up with 200GB of models. The node-based UI is powerful but the learning curve is steep. Custom node dependencies conflict constantly.

The verdict

Once you have a workflow dialed in, the output quality matches commercial services. No per-image costs. The community shares workflows freely.

LocalAI

Maybe

Replaces: Cloud API calls for embeddings and TTS

Host Izar-Host (GPU)
Resources 2 CPU / 4GB RAM / 10GB disk
Uptime On-demand
Pain points

Model compatibility is hit-or-miss. Some GGUF models just don't work. The OpenAI-compatible API layer is good but not perfect — some edge cases break.

The verdict

Useful for specific tasks (embeddings, TTS) where you don't want cloud dependency. For chat, Ollama is better. For embeddings specifically, this fills a niche.

🏠

Home Automation

3 services

Home Assistant

Would pick again

Replaces: SmartThings hub

Host Izar-Host
Resources 2 CPU / 1GB RAM / 5GB disk
Uptime 99.5% — goes down when I update and forget to check the breaking changes page
Pain points

Every update breaks at least one integration. The YAML-to-UI migration is half-done so you end up editing both. Zigbee devices randomly decide they don't want to be automated today.

The verdict

Nothing else comes close for local-first home automation. The community is massive and the integrations are unmatched.

Mosquitto MQTT

Would pick again

Replaces: Cloud IoT hubs

Host Izar-Host
Resources 0.1 CPU / 15MB RAM / 100MB disk
Uptime 99.99% — literally never goes down
Pain points

ACL files are fiddly. Debugging topic routing when a sensor stops publishing requires patience and `mosquitto_sub -v -t '#'`.

The verdict

The simplest, most reliable piece of infrastructure in the entire stack. Runs on nothing.
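The entire debugging toolkit fits on two lines — broker on localhost assumed:

```shell
mosquitto_sub -h localhost -t '#' -v &             # watch every topic, verbose
mosquitto_pub -h localhost -t test/ping -m hello   # round-trip sanity check
```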

Zigbee2MQTT

Would pick again

Replaces: Proprietary Zigbee hubs (Hue, IKEA)

Host Izar-Host
Resources 0.5 CPU / 128MB RAM / 200MB disk
Uptime 99% — occasional coordinator firmware drama
Pain points

Coordinator firmware updates are scary. Device pairing sometimes requires the dance of 'hold button, pray, check logs'. Some devices have quirks that need device-specific converters.

The verdict

Local Zigbee without vendor lock-in. The device database has thousands of entries. Once paired, devices are rock solid.
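The pairing dance at least starts with a known incantation — broker on localhost assumed, and the exact payload shape has shifted between Zigbee2MQTT versions:

```shell
# Ask the coordinator to accept new devices for 254 seconds:
mosquitto_pub -h localhost -t zigbee2mqtt/bridge/request/permit_join \
  -m '{"value": true, "time": 254}'
```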

📄

Documents & Productivity

3 services

Paperless-ngx

Would pick again

Replaces: Filing cabinet + Google Drive for documents

Host Capella-Outpost
Resources 2 CPU / 1.5GB RAM / 20GB disk
Uptime 99% — Tika OCR container occasionally OOMs on large PDFs
Pain points

Initial document import and tagging is a week-long project. The ML auto-tagging is good but not magic — you'll still manually tag 30% of documents. Full-text search is incredible once set up though.

The verdict

Finding any document in seconds by searching its contents is life-changing. Worth the setup effort.

Replaces: Confluence, Notion (team wiki)

Host Capella-Outpost
Resources 1 CPU / 512MB RAM / 2GB disk
Uptime 99.5%
Pain points

Editor is functional but not as slick as Notion. Export options are limited. Search works but isn't instant.

The verdict

It's fine for internal documentation. If I were starting fresh, I'd look harder at Outline for the better editor and API.

Stirling-PDF

Would pick again

Replaces: Random sketchy PDF tools online

Host Capella-Outpost
Resources 1 CPU / 512MB RAM / 1GB disk
Uptime 99.9%
Pain points

None, honestly. It does one thing and does it well. The OCR feature using Tesseract is surprisingly good.

The verdict

Never uploading a PDF to a random website again. Merge, split, compress, OCR — all local.

📨

Communication & Notifications

2 services

Ntfy

Would pick again

Replaces: Pushover, email alerts for everything

Host Altair-Link
Resources 0.1 CPU / 30MB RAM / 500MB disk
Uptime 99.9%
Pain points

The Android app battery optimization dance. UnifiedPush setup with other apps requires reading docs carefully. No built-in dashboard for viewing notification history.

The verdict

curl-based push notifications to my phone. Every script, cron job, and alert now ends with `curl -d 'done' ntfy.example.com/alerts`. Replaced 4 different notification services.
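The bare curl scales up nicely when a script deserves a fancier ping — topic and server below are placeholders; the headers are standard ntfy publishing options:

```shell
curl -H "Title: Backup finished" \
     -H "Priority: high" \
     -H "Tags: floppy_disk" \
     -d "restic completed in 4m12s" \
     ntfy.example.com/alerts
```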

Matrix (Synapse)

Replaces: Discord (for private homelab chat)

Host Capella-Outpost
Resources 2 CPU / 1GB RAM / 10GB disk
Uptime 95% — Synapse is hungry and I've had to restart it more than I'd like
Pain points

Synapse (Python) is slow and RAM-hungry. Federation is cool in theory but adds complexity. The Element web client works but mobile push notifications are unreliable without a push gateway.

The verdict

The idea of self-hosted encrypted chat is great. The reality is that Synapse needs babysitting. I'd look at Conduit (Rust) if starting fresh.