Comparisons & Decision Tables
Side-by-side comparisons for homelab tools. No affiliate links, no sponsored picks -- just trade-offs laid out so you can make the call yourself.
Reverse Proxy Showdown
networking -- Which reverse proxy fits your homelab?
Traefik for Docker-heavy setups, Caddy for simplicity with automatic HTTPS, NPM if you want a GUI, HAProxy when raw TCP/UDP performance matters.
| Feature | Traefik | Caddy | NPM | HAProxy |
|---|---|---|---|---|
| Auto HTTPS | ACME built-in, DNS challenge support | Automatic by default -- zero config | GUI-based Let's Encrypt | Manual setup via haproxy-acme.sh or external tools |
| Docker Discovery | Native via container labels | Plugin (caddy-docker-proxy) | Built-in Docker socket listener | None -- static config, or templated via consul-template and similar tools |
| Config Style | Labels + YAML/TOML static config | Caddyfile (human-readable) or JSON API | Web GUI with SQLite backend | Single flat config file, custom syntax |
| Performance (RPS) | ~30k RPS, fine for homelab scale | ~35k RPS, Go-based with HTTP/3 | ~20k RPS, Node.js overhead | ~60k+ RPS, purpose-built for load balancing |
| Learning Curve | Moderate -- label syntax is fiddly at first | Low -- Caddyfile is nearly self-documenting | Very low -- point and click | High -- powerful but config is dense |
| Middleware/Plugins | Rich built-in middleware (rate limit, auth, headers) | Modular plugins, build with xcaddy | Basic -- headers, SSL, access lists | Advanced ACLs, stick tables, Lua scripting |
| Dashboard / Monitoring | Built-in dashboard, Prometheus metrics | Admin API, no built-in dashboard | Built-in web UI with status indicators | Stats page, Prometheus exporter via haproxy_exporter |
| WebSocket Support | Native -- auto-detected through entrypoints | Native -- transparent proxying with no extra config | Native -- built-in WebSocket support via GUI toggle | Native -- works via the HTTP Upgrade mechanism, tune `timeout tunnel` for long-lived connections |
| gRPC Support | Native with h2c backend support, gRPC-web middleware available | Native HTTP/2 and gRPC proxying, automatic h2c to backends | Not supported -- Node.js proxy layer doesn't handle gRPC natively | Full gRPC support via HTTP/2 backends, requires `proto h2` directive |
| Rate Limiting | Built-in rateLimit middleware with configurable average/burst per source IP | Not built-in -- the caddy-ratelimit plugin (compiled in with xcaddy) adds sliding-window limits per client | Not built-in -- relies on upstream or Cloudflare for rate limiting | Stick tables with `sc_http_req_rate` -- extremely granular, per-URL, per-IP, per-header |
| IP Whitelisting | ipAllowList middleware, CIDR ranges like 10.42.0.0/24 | remote_ip matcher in Caddyfile, supports CIDR notation | Access list feature in GUI, per-proxy-host IP restrictions | ACLs with `src` keyword, CIDR and individual IPs, map files for large lists |
| Custom Error Pages | errors middleware with custom HTML per status code | handle_errors directive with per-code custom responses | Built-in custom error page support per proxy host via GUI | errorfile directive per backend, supports per-status-code HTML files |
| Best For | Dynamic container environments with frequent deploys | Static sites, small stacks, WireGuard tunnels | Non-technical users, quick setup, visual management | High-traffic TCP/HTTP load balancing, multi-backend failover |
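To make the label-driven config style concrete, here's a minimal docker-compose sketch exposing a hypothetical `whoami` service through Traefik with automatic HTTPS (the service names, email, and domain are illustrative placeholders):

```yaml
services:
  traefik:
    image: traefik:v3
    command:
      - --providers.docker=true
      - --providers.docker.exposedbydefault=false
      - --entrypoints.websecure.address=:443
      - --certificatesresolvers.le.acme.email=admin@example.com
      - --certificatesresolvers.le.acme.tlschallenge=true
    ports: ["443:443"]
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro

  whoami:
    image: traefik/whoami
    labels:
      - traefik.enable=true
      - traefik.http.routers.whoami.rule=Host(`whoami.example.com`)
      - traefik.http.routers.whoami.entrypoints=websecure
      - traefik.http.routers.whoami.tls.certresolver=le
```

For comparison, the Caddyfile equivalent is roughly `whoami.example.com { reverse_proxy whoami:80 }` -- which is where the learning-curve row comes from.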
Storage Filesystem Matrix
storage -- Choosing a filesystem for your NAS or storage server.
ZFS if you have ECC RAM and matching drives. MergerFS+SnapRAID for mixed/shucked drives on a budget. Btrfs if you want snapshots without the RAM overhead and run a supported RAID level (1, 10). bcachefs if you want ZFS-grade features on a single-disk or small-pool setup without the RAM tax.
| Feature | ZFS | Btrfs | MergerFS + SnapRAID | bcachefs |
|---|---|---|---|---|
| Data Integrity | Checksums on all data + metadata, self-healing with redundancy | Checksums on metadata, optional data checksums, self-healing with RAID1/10 | SnapRAID parity checks on schedule, no real-time checksumming | Full data + metadata checksums, self-healing with replication, per-inode checksum granularity |
| RAM Requirements | 1 GB per TB of storage is the rule of thumb, ARC cache is hungry | Minimal -- works fine with 2-4 GB total | Negligible -- both are userspace tools, no kernel memory pressure | Low -- no ARC equivalent, uses kernel page cache like ext4/btrfs |
| Mixed Drive Sizes | Possible but wasteful -- vdevs should match, smallest drive in mirror wins | Flexible -- can mix sizes in RAID1, not recommended for RAID5/6 | Built for this -- each drive is independent, pool is the union of all drives | Supports heterogeneous devices in a single filesystem, tiered storage (SSD + HDD) built-in |
| Expandability | Can add new vdevs any time; single-drive raidz expansion only landed in OpenZFS 2.3, so older pools must be planned up front | Can add devices to existing arrays, online resize supported | Add a drive any time, run snapraid sync -- done in minutes | Online device add and remove, grow and shrink supported |
| Snapshots | Near-instant, COW-based, send/recv for replication | COW snapshots, send/receive supported, subvolume-based | SnapRAID snapshots are parity-based, not instant rollback | COW snapshots with reflink support, snapshot-based send/receive |
| Scrubbing | zpool scrub -- verifies every block against checksums | btrfs scrub -- checks checksums, auto-repairs with redundancy | snapraid scrub -- verifies parity data against file checksums | bcachefs data scrub -- verifies checksums, repairs from replicas if available |
| COW Overhead | Fragmentation on random writes, recordsize tuning helps, zvols mitigate for VMs | Fragmentation is notorious on Btrfs, autodefrag mount option helps but doesn't eliminate | No COW -- underlying ext4/xfs writes in place, zero fragmentation concern | Lower fragmentation than Btrfs due to improved allocator, but still COW-inherent on random writes |
| Native Encryption | Yes -- dataset-level encryption (aes-256-gcm), key per dataset, raw send preserves encryption | No native encryption -- use dm-crypt/LUKS underneath, adds a layer | No -- use LUKS on underlying drives, SnapRAID operates on cleartext | Yes -- per-file and per-directory encryption built into the filesystem, ChaCha20/Poly1305 authenticated encryption |
| Write Performance | Excellent with proper SLOG/ZIL, sync writes can bottleneck without | Good general performance, RAID5/6 has known write-hole issues | Native filesystem speed (ext4/xfs), SnapRAID is offline so no write penalty | Strong write performance with copygc and tiered writeback, SSD journal tier accelerates HDD pools |
| Gotchas | No shrinking pools, dedup eats RAM (don't enable it), license is CDDL (not GPL) | RAID5/6 is still marked unstable -- use RAID1/10 only in production | No real-time protection -- data written between syncs is unprotected, manual cron setup | Mainlined in kernel 6.7 but still marked experimental for multi-device RAID, single-device is stable for daily use |
| Recommended Use Case | Bulk NAS storage at 10.42.0.x serving Proxmox VMs, Immich photo libraries, media collections where bit-rot protection is non-negotiable | Root filesystem for Linux desktops/servers, Synology-style snapshots, Timeshift system rollbacks on a homelab workstation | Shucked-drive NAS on Unraid or standalone Debian box, media libraries where drives get swapped out regularly and parity rebuilds are tolerable | Single-disk root filesystem with snapshots, SSD+HDD tiered pool for a compact homelab server, or as a ZFS alternative when RAM is limited |
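To ground the MergerFS + SnapRAID column, a minimal `/etc/snapraid.conf` for two data drives plus one parity drive might look like this (mount points and the exclude pattern are illustrative):

```ini
# mergerfs pools the data drives separately: /mnt/disk* -> /mnt/storage (fstab, fuse.mergerfs)
parity /mnt/parity1/snapraid.parity
content /var/snapraid/snapraid.content
content /mnt/disk1/snapraid.content
data d1 /mnt/disk1/
data d2 /mnt/disk2/
exclude *.tmp
```

Pair it with a nightly `snapraid sync` and a periodic `snapraid scrub` from cron -- and remember the gotcha row: anything written between syncs is unprotected.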
K8s Distribution Picker
kubernetes -- Lightweight Kubernetes distributions for homelab and edge.
K3s is the default pick for homelabs -- lightweight, well-documented, huge community. Talos for immutable infrastructure purists. MicroK8s if you want snap-based simplicity.
| Feature | K3s | MicroK8s | K0s | Talos |
|---|---|---|---|---|
| Resource Footprint | ~512 MB RAM minimum, single 70 MB binary | ~540 MB RAM, runs as a snap package | ~512 MB RAM, single binary, no host deps | ~512 MB RAM, entire OS is the cluster -- nothing else runs |
| HA Support | Embedded SQLite (single-node default), embedded etcd, or external DB (MySQL, Postgres, etcd) | 3-node HA with dqlite (distributed SQLite) | Embedded etcd or external etcd | Built-in etcd-based HA, designed for multi-node from day one |
| Ease of Install | One curl command, runs in 30 seconds | snap install microk8s, enable addons via microk8s enable | Single binary, k0sctl for multi-node automation | Write Talos image to disk, configure via API -- no SSH, no shell |
| Default CNI | Flannel (can swap to Calico, Cilium) | Calico (via addon), can use Flannel/Cilium | Kube-router (can swap to Calico, Cilium) | Flannel by default, Cilium supported |
| Built-in Components | Traefik ingress, ServiceLB, local-path storage, Helm controller | Addons system: dns, storage, ingress, metallb, gpu, istio | Minimal -- bring your own everything | Minimal -- designed to be declarative, no bundled extras |
| Upgrade Path | In-place binary swap or system upgrade controller | snap refresh microk8s --channel=1.29 | k0s update via k0sctl, rolling upgrades supported | API-driven rolling upgrades, no SSH needed, fully automated |
| ARM Support | First-class ARM64/ARMv7 support, popular on Raspberry Pi | ARM64 supported, ARMv7 limited | ARM64 supported | ARM64 supported, SBC images available |
| Best For | Homelab, edge, Raspberry Pi clusters, CI/CD environments | Developers who want quick local K8s, Ubuntu-centric shops | Air-gapped environments, minimal-dependency deployments | Production-grade immutable infrastructure, GitOps-native setups |
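The "one curl command" claim for K3s looks like this in practice -- `--cluster-init` starts an embedded-etcd, HA-ready control plane (the token and the 10.42.0.10 address are placeholders):

```shell
# First server node: embedded etcd, HA-ready
curl -sfL https://get.k3s.io | sh -s - server --cluster-init

# Additional server nodes join the same cluster
curl -sfL https://get.k3s.io | K3S_TOKEN=<node-token> sh -s - server \
  --server https://10.42.0.10:6443
```

Agent (worker) nodes use the same installer with `K3S_URL` pointing at any server node.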
Backup Tool Shootout
storage -- Deduplicated backup tools for homelab data protection.
Restic for broad cloud backend support and simplicity. Borg for maximum compression and proven track record. Kopia if you want a modern UI with snapshots. Duplicati if non-technical users need to manage backups.
| Feature | Restic | Borg | Kopia | Duplicati |
|---|---|---|---|---|
| Cloud Backends | S3, B2, Azure, GCS, SFTP, rclone (30+ providers) | SFTP, sshfs -- no native cloud (use rclone mount as workaround) | S3, B2, Azure, GCS, SFTP, rclone, local, WebDAV | S3, B2, Azure, GCS, FTP, SFTP, WebDAV, OneDrive, Google Drive |
| Deduplication | Content-defined chunking, global dedup per repo | Chunk-based, global dedup, variable-length chunks | Content-defined chunking, global dedup, configurable chunk size | Block-level dedup with rolling hash |
| Encryption | AES-256 in CTR mode + Poly1305 MAC, always on | AES-256-CTR + HMAC-SHA256, optional but recommended | AES-256-GCM or ChaCha20-Poly1305, configurable | AES-256, always on, password-based key derivation |
| Speed | Fast backup and restore, parallel chunk processing | Fastest compression, slightly slower on initial backup due to indexing | Very fast -- Go-based, parallel uploads, aggressive caching | Slower -- C#/.NET overhead, especially on large changesets |
| GUI | CLI only (Backrest or resticprofile for web UI wrappers) | CLI only (Vorta for desktop GUI wrapper) | Built-in web UI on localhost:51515, plus CLI | Full web GUI built-in, runs as a system service |
| Large Repo Handling | Handles multi-TB repos well, prune can be slow on very large repos | Excellent -- compaction and check are well-optimized | Good -- parallel maintenance operations, snapshot pinning | Struggles above 1-2 TB -- database bloat, slow verification |
| Restore Granularity | Mount repo as FUSE filesystem, restore individual files or full snapshots | FUSE mount or extract, file-level granularity | FUSE mount, file-level restore, snapshot browsing via UI | Web UI restore picker, file-level, download as zip |
| Concurrent Backups | Lock files coordinate access -- concurrent backups to one repo are fine, but prune and check need an exclusive lock | Repo-level locking prevents concurrent access, one backup at a time per repo | Built-in support for concurrent snapshots to the same repo, lock-free architecture | One backup per destination at a time, queue additional jobs in the scheduler |
| Bandwidth Limiting | --limit-upload and --limit-download flags, per-connection throttle in KiB/s | --upload-ratelimit (formerly --remote-ratelimit) throttles uploads; no download limit -- use trickle or tc for that | --max-upload-speed and --max-download-speed flags, per-session throttle | Built-in bandwidth throttle in web UI and CLI, per-operation limit |
| Exclude Patterns | --exclude with glob patterns, --exclude-file for lists, --iexclude for case-insensitive | --exclude with fnmatch patterns, --exclude-from for file-based lists | --add-ignore with gitignore-style patterns, .kopiaignore file support per directory | Filter groups in GUI or CLI, regex and glob, per-backup-set exclude lists |
| S3-Compatible Backends | Native S3 support (AWS, MinIO, Wasabi, Backblaze B2 via S3 API), rclone for everything else | No native S3 -- mount via rclone FUSE or use borg serve over SSH to a cloud VM | Native S3, B2, GCS, Azure, and Wasabi support, also rclone backend for anything else | Native S3, B2, Azure, GCS, WebDAV, built-in for all major providers |
| Snapshot Browsing | restic mount exposes snapshots as FUSE filesystem, browse with any file manager | borg mount exposes archives as FUSE, borg list for CLI browsing | Built-in web UI at localhost:51515 with snapshot browser, file-level restore, diff between snapshots -- the GUI is a genuine feature, not an afterthought | Web UI with point-and-click restore, file browser per backup version, download as zip |
| Scheduling | External (cron, systemd timers), or use wrappers like autorestic | External scheduling only (cron, systemd) | Built-in scheduling via CLI or web UI policies | Built-in scheduler with retention policies and email notifications |
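A typical restic round-trip against an S3-compatible backend covers most of the rows above (the endpoint, bucket, and paths are illustrative):

```shell
export RESTIC_REPOSITORY=s3:https://s3.example.com/homelab-backups
export RESTIC_PASSWORD_FILE=/root/.restic-pass

restic init                                   # one-time: creates the encrypted repo
restic backup /srv/data \
  --exclude-file=/etc/restic-excludes \
  --limit-upload 5120                         # throttle in KiB/s
restic mount /mnt/restic                      # browse snapshots as a FUSE filesystem
restic forget --keep-daily 7 --keep-weekly 4 --prune
```

Wrap the backup and forget steps in a systemd timer or cron job, since restic brings no scheduler of its own.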
DNS / Ad-Block Solutions
networking -- Network-wide DNS filtering and ad blocking.
AdGuard Home for encrypted DNS and a cleaner UI. Pi-hole for the largest community and ecosystem. Technitium if you need a full authoritative DNS server alongside blocking.
| Feature | Pi-hole | AdGuard Home | Technitium |
|---|---|---|---|
| Web UI | Dashboard-focused, query log, per-client stats, group management | Modern UI, per-client settings, dark mode, built-in query log | Full DNS server UI -- zones, records, DNSSEC, plus blocking dashboard |
| Encrypted DNS | Not built-in -- needs cloudflared or unbound sidecar for DoH/DoT upstream | Native DoH, DoT, DoQ, DNSCrypt -- both as client and server | Native DoH, DoT -- serves encrypted DNS directly to clients |
| Blocklist Management | Gravity-based, supports multiple lists, regex filtering, group assignment | Built-in lists, custom filtering rules with adblock syntax, per-client overrides | Built-in block lists, custom zones, regex, supports ABP filter syntax |
| DHCP Server | Built-in DHCP server as alternative to router DHCP | Built-in DHCP server with static leases | Built-in DHCP server with automatic DNS registration of leases |
| Resource Usage | ~80 MB RAM, runs on Pi Zero W, SQLite-backed FTL engine | ~50 MB RAM, single Go binary, no database dependency | ~120 MB RAM, .NET-based, heavier but full-featured |
| API / Automation | Full REST API, teleporter for backup/restore, Ansible roles available | REST API, YAML config file, easy to containerize and replicate | REST API, DNS zone import/export, config backup/restore |
| Upstream Resolution | Unbound for recursive, or forward to any upstream (Cloudflare, Quad9) | Built-in upstream options, DNS rewrites, parallel queries to multiple upstreams | Built-in recursive resolver (no forwarding needed), conditional forwarding |
| Multi-Instance Sync | Gravity Sync or Orbital Sync for multi-node setups | Native config sync not built-in -- use file sync or container orchestration | Zone transfer (AXFR/IXFR) between instances -- standard DNS replication |
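For the encrypted-DNS row: in AdGuard Home, encrypted upstreams are a two-line config change (a sketch of the relevant `AdGuardHome.yaml` section, using common public resolvers), whereas Pi-hole needs a separate cloudflared or unbound service to achieve the same thing:

```yaml
dns:
  upstream_dns:
    - https://dns.quad9.net/dns-query   # DoH upstream
    - tls://one.one.one.one             # DoT upstream
  bootstrap_dns:
    - 9.9.9.9
    - 1.1.1.1
```

The bootstrap servers resolve the upstream hostnames themselves, avoiding a chicken-and-egg problem at startup.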
Hypervisor Comparison
infrastructure -- Virtualization platforms for homelab and small business.
Proxmox for most homelabs -- free, full-featured, active community. Unraid if you primarily want NAS + some VMs. XCP-ng for a fully open-source vSphere alternative.
| Feature | Proxmox VE | ESXi (Free) | XCP-ng | Unraid |
|---|---|---|---|---|
| Licensing | Free and open source (AGPL), optional paid support subscription | Free tier discontinued for new installs as of 2024, requires vSphere licensing | Fully open source (GPLv2), optional paid support from Vates | Paid license required -- tiered pricing (Starter / Unleashed / Lifetime since the 2024 repricing), no free tier |
| Clustering / HA | Built-in clustering with Corosync, live migration, HA with fencing | No clustering in free tier -- requires vCenter ($$) | Pool-based clustering, live migration, HA with built-in tooling | No clustering -- single node only, no live migration |
| Container Support | LXC containers as first-class citizens alongside VMs | No native container support -- VMs only | No native container support -- VMs only | Docker via Community Apps plugin, Dockerman built-in |
| GPU Passthrough | Full IOMMU passthrough, mediated (vGPU) with supported cards | Full passthrough, vGPU with NVIDIA GRID/vGPU licensed drivers | GPU passthrough supported, less community documentation | NVIDIA and AMD passthrough via VFIO, well-documented for Plex/Jellyfin |
| Storage Backends | ZFS, LVM, LVM-thin, Ceph, NFS, iSCSI, GlusterFS | VMFS, NFS, vSAN (licensed), iSCSI | Local LVM, NFS, iSCSI, GlusterFS, XOSAN | Btrfs/XFS for array and cache pools, native ZFS since Unraid 6.12, unRAID parity |
| Backup / Restore | Proxmox Backup Server (free, dedup, incremental), vzdump built-in | vSphere Data Protection or third-party (Veeam, Nakivo), ghettoVCB script | Xen Orchestra backup with delta/incremental, continuous replication | Community Apps: Appdata Backup, CA Backup plugin, manual rsync |
| Web UI | Full management UI, console, task viewer, resource graphs | DCUI (console) + vSphere Client (web) -- limited in free tier | Xen Orchestra Lite (free) or full XOA (built from source or paid appliance) | Dashboard with Docker, VMs, shares, array management all in one |
| Community / Ecosystem | Massive community, forums, subreddit, extensive wiki | Enterprise-focused, declining homelab relevance since free tier removal | Growing community, Vates-backed development, XCP-ng forums | Large community, active forums, huge Community Apps ecosystem |
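As a sketch of what the GPU passthrough row involves on a Proxmox host (Intel shown; AMD uses `amd_iommu=on`, and the PCI address and VM ID are placeholders), IOMMU must be enabled and the VFIO modules loaded before a card can be handed to a VM:

```shell
# /etc/default/grub -- enable IOMMU on the host kernel command line
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt"

# /etc/modules -- load VFIO at boot
vfio
vfio_iommu_type1
vfio_pci

# then apply and attach the card to a VM:
#   update-grub && reboot
#   qm set <vmid> -hostpci0 01:00.0
```

Verify with `dmesg | grep -i iommu` after reboot before touching any VM config.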
Container Runtime Comparison
kubernetes -- Container runtimes for development and production.
Docker for development and standalone homelab stacks. Podman for rootless security and systemd integration. containerd if you only need a K8s runtime. CRI-O for minimal, K8s-only clusters.
| Feature | Docker | Podman | containerd | CRI-O |
|---|---|---|---|---|
| Architecture | Client-server daemon (dockerd), always running | Daemonless, fork-exec model, no persistent process | Daemon-based but minimal, gRPC API, used as K8s backend | Daemon-based, implements CRI spec directly, OCI-compliant |
| Rootless Mode | Supported since 20.10 but requires additional setup (slirp4netns, uidmap) | Rootless by default, first-class support, subuid/subgid mapping built-in | Rootless supported via rootlesskit, less documented | Rootless supported but not a primary use case |
| Compose / Stacks | Docker Compose (V2 built-in as plugin), Swarm mode for clustering | podman-compose or podman play kube for K8s YAML, Quadlet for systemd | No compose equivalent -- designed as a runtime, not a user tool | No compose -- purely a Kubernetes CRI implementation |
| K8s Integration | Dockershim removed in K8s 1.24 -- no longer a built-in runtime, still works via the cri-dockerd shim | Generate K8s YAML from running pods, play kube for local testing | Default runtime for most K8s distributions (K3s, GKE, EKS, AKS) | Default runtime for OpenShift, optimized for K8s and nothing else |
| Image Building | docker build with BuildKit backend, multi-stage, layer caching | Buildah integration (podman build uses Buildah under the hood), OCI native | Not a build tool -- use BuildKit, kaniko, or nerdctl for builds | Not a build tool -- use Buildah, kaniko, or any OCI builder |
| Networking | Built-in bridge, overlay, macvlan, host networking, Docker DNS | CNI-based networking, slirp4netns for rootless, pasta for newer setups | CNI-based, no built-in DNS, relies on K8s networking layer | CNI-based, designed to use K8s networking (Flannel, Calico, Cilium) |
| Systemd Integration | Runs as a systemd service, containers are children of dockerd | Quadlet generates systemd units from containers, auto-update support | Runs as systemd service, minimal cgroup integration | Runs as systemd service, integrates with K8s kubelet lifecycle |
| Best For | Local development, CI/CD pipelines, standalone container hosting | Rootless workloads, systemd-native servers, security-conscious setups | K8s clusters where you need a reliable, minimal runtime | OpenShift / minimal K8s-only environments, compliance-heavy setups |
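The Quadlet row deserves a concrete example: a `.container` unit file that Podman (4.4+) translates into a systemd service at daemon-reload time (the image and unit name are illustrative):

```ini
# ~/.config/containers/systemd/whoami.container
[Container]
Image=docker.io/traefik/whoami:latest
PublishPort=8080:80
AutoUpdate=registry

[Service]
Restart=always

[Install]
WantedBy=default.target
```

After `systemctl --user daemon-reload`, the container starts and stops like any other unit (`systemctl --user start whoami`), and `AutoUpdate=registry` lets `podman auto-update` refresh it in place.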
Monitoring Stack Showdown
infrastructure -- Metrics collection, alerting, and observability platforms for homelabs.
Prometheus + Grafana for Kubernetes and containers. Netdata for real-time per-second visibility with zero config. Zabbix if you have 100+ hosts and need enterprise features. InfluxDB if you're writing custom metrics from IoT sensors.
| Feature | Prometheus | InfluxDB | Zabbix | Netdata |
|---|---|---|---|---|
| Data Model | Time series with labels (multi-dimensional), metric types: counter, gauge, histogram, summary | Time series with tags and fields, supports events and annotations natively | Items, triggers, and events -- more of a monitoring model than a pure metrics store | Per-second metrics with automatic dimensioning, streams raw data in real time |
| Query Language | PromQL -- powerful, steep learning curve, designed for time series aggregation and alerting | InfluxQL (SQL-like) or Flux (functional scripting) -- Flux is more powerful but harder to learn | Zabbix trigger expressions with macro-based thresholds, calculated items for derived metrics | None -- dashboards are pre-built and auto-generated, API access for programmatic queries |
| Storage Engine | Custom TSDB on local disk, 2-hour blocks compacted over time, WAL for crash recovery | Custom TSM engine (Time-Structured Merge Tree), configurable retention policies per database | PostgreSQL, MySQL, or Oracle backend -- not a native TSDB, relies on partitioning for performance | Custom ring buffer in RAM with optional disk spillover via dbengine, configurable retention tiers |
| Pull vs Push | Pull-based -- Prometheus scrapes /metrics endpoints on a schedule | Push-based -- agents (Telegraf) write data to InfluxDB over HTTP or UDP | Both -- Zabbix agent pushes to server, server can also poll SNMP/IPMI/JMX targets | Pull-based with streaming -- Netdata agent collects locally, streams to a parent for centralized view |
| Alerting Built-in | Alertmanager as a separate component -- routes, silences, grouping, dedup, webhook/email/PagerDuty | Built-in alerting in InfluxDB (checks and notification rules), Kapacitor for advanced pipelines | Full alerting engine with escalation, maintenance windows, dependencies, media types (email/SMS/Slack) | Built-in alarm system with health checks, email and webhook notifications, no external component needed |
| Dashboard | No built-in dashboard -- Grafana is the standard pairing, Prometheus UI is for ad-hoc queries only | Built-in UI for data exploration, Grafana integration for dashboards, Chronograf as legacy option | Built-in dashboard with maps, graphs, topology, screens -- fully self-contained | Built-in real-time dashboard with per-second resolution, zero-config auto-generated charts for every metric |
| Agent Required | Exporters per target -- node_exporter for hosts, cAdvisor for containers, thousands of exporters available | Telegraf agent with 300+ input plugins, or any client library that speaks InfluxDB line protocol | Zabbix agent (active or passive), also agentless via SNMP, IPMI, SSH, JMX, HTTP checks | Netdata agent on each host, auto-discovers everything (disks, network, containers, services) on install |
| Resource Usage | ~200-500 MB RAM depending on cardinality, CPU spikes during compaction, disk is proportional to series count | ~500 MB - 1 GB RAM, heavier on writes, TSM compaction is I/O-intensive on large datasets | ~500 MB - 2 GB RAM for server, PostgreSQL backend adds its own overhead, scales with host count | ~100-300 MB RAM per agent, parent node uses more with streaming, designed to be lightweight per host |
| Plugin Ecosystem | Massive -- thousands of exporters, client libraries in every language, Helm charts for K8s | Telegraf plugin ecosystem (300+ inputs, 40+ outputs), client libraries for Go, Python, Java, JS | Templates for 1000+ devices (Cisco, Dell, HP, Linux, Windows), custom item prototypes, LLD rules | 200+ built-in collectors, auto-detection, community plugins, Netdata Cloud for fleet management |
| Best For | Kubernetes clusters, container environments, microservices -- anywhere you need label-based dimensional metrics | IoT sensor data, custom application metrics, time series workloads where push is easier than pull | Enterprise/SMB monitoring with 100+ hosts, network gear (SNMP), traditional infra with escalation workflows | Quick per-host visibility, real-time debugging, homelabs where you want instant dashboards with no setup |
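The pull model in the Prometheus column reduces to a scrape config like this `prometheus.yml` fragment (the 10.42.0.x targets are illustrative homelab hosts running node_exporter and cAdvisor):

```yaml
global:
  scrape_interval: 15s

scrape_configs:
  - job_name: node
    static_configs:
      - targets:
          - 10.42.0.10:9100   # node_exporter on each host
          - 10.42.0.11:9100
  - job_name: cadvisor
    static_configs:
      - targets: ["10.42.0.10:8080"]
```

Prometheus scrapes each target's `/metrics` endpoint on that interval; adding a host is one more line, and Grafana sits on top for dashboards.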
VPN / Mesh Networking
networking -- Connecting homelabs across sites with encrypted tunnels.
Tailscale for the fastest path from zero to connected. Bare WireGuard if you want full control and minimal dependencies. Headscale if you want Tailscale's UX without the SaaS. Netbird if you need SSO integration with your identity provider.
| Feature | Tailscale | WireGuard (bare) | ZeroTier | Netbird |
|---|---|---|---|---|
| Architecture | Hosted coordination server (control plane), direct P2P data plane, DERP relay fallback when NAT is tricky | Self-managed -- you configure keys and endpoints on each peer, no coordination server | Hosted root servers + self-hostable controllers (ZeroTier Central or ztncui), P2P with relay fallback | Self-hosted control plane (Management + Signal + TURN), P2P data plane with WireGuard underneath |
| Setup Complexity | One command per device, OAuth login, done -- the bar is on the floor | Generate keys, exchange public keys, write config files per peer, manage endpoints manually | Install agent, join network by ID, authorize in web console -- moderate | Deploy 3 containers (management, signal, TURN), create account, install agent -- moderate-to-high initial setup |
| NAT Traversal | Excellent -- STUN, DERP relays, hole-punching, works behind CGNAT and most firewalls | None built-in -- you manage port forwarding or use a relay peer with AllowedIPs routing | Good -- ZeroTier handles NAT traversal with root servers, fallback relay when direct fails | Good -- built-in STUN/TURN, WireGuard-based hole-punching, relay fallback for stubborn NATs |
| Speed / Overhead | WireGuard underneath, near-native speed when P2P connects, ~5-10% overhead through DERP relay | Kernel-level, near-wire speed, ~3-5% overhead, the fastest option by design | Userspace networking, ~10-15% overhead vs bare WireGuard, adequate for homelab traffic | WireGuard kernel module underneath, similar to Tailscale performance, relay adds ~5-10% overhead |
| Access Control | ACL policies in JSON/YAML via admin console or GitOps, tag-based rules, group policies | Manual iptables/nftables on each peer, or use AllowedIPs to restrict routing per peer | Flow rules in ZeroTier Central, L2/L3 rules, capability-based access per member | Policy engine with rules, groups, and network routes, managed via web UI or API |
| Mobile Support | iOS and Android apps, one-tap connect, MDM profiles supported | Official WireGuard apps for iOS and Android, manual config import via QR code | iOS and Android apps via ZeroTier One, join network by ID | iOS and Android apps, SSO login on mobile, auto-connect profiles |
| Self-Hosted Option | Headscale -- open-source coordination server, full Tailscale client compatibility, growing fast | Native -- WireGuard is self-hosted by nature, you own every piece | ZeroTier One is open-source, ztncui or ZeroUI for self-hosted controller web UI | Fully self-hosted from day one -- management server, signal server, TURN relay, all open-source |
| MagicDNS | Built-in -- devices get hostname.tailnet-name.ts.net, split DNS for internal domains | No DNS -- you manage /etc/hosts, CoreDNS, or your own resolver | No built-in DNS -- use ZeroNSD (community) or external DNS pointed at ZeroTier IPs | Built-in DNS for peer resolution, custom nameservers configurable per network |
| Subnet Routing | Advertise local subnets from any node, --advertise-routes=10.42.0.0/24, approve in admin console | AllowedIPs + ip forwarding + masquerade -- works but you configure every piece manually | Managed routes in ZeroTier Central, bridge mode for L2 subnet sharing | Network routes configured in management UI, any peer can act as a router for its local subnet |
| SSO Integration | OAuth with Google, Microsoft, GitHub, Okta, OneLogin, custom OIDC -- built-in | None -- WireGuard has no concept of identity beyond public keys | No native SSO -- API tokens and manual member authorization | OIDC integration with Authentik, Keycloak, Okta, Azure AD, Google -- first-class SSO support |
| Pricing | Free for up to 100 devices (personal), paid plans for teams with user management | Free and open-source, zero cost, zero SaaS dependency | Free for up to 25 devices, paid plans for more nodes and priority support | Free and open-source self-hosted, no device limits, optional paid SaaS offering |
| Best For | Connecting homelab sites with minimum effort, remote access to 10.42.0.0/24 from anywhere | Full control freaks, minimal-dependency setups, site-to-site tunnels where both sides have static IPs | L2 networking use cases, gaming over WAN, bridging remote subnets as if they were local | Self-hosted mesh with SSO -- teams that want Tailscale-level UX with Authentik/Keycloak integration |
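What "you configure every piece manually" means for bare WireGuard: a site-to-site `wg0.conf` on one end looks roughly like this (keys, hostname, and subnets are placeholders; the other site needs the mirror image, plus IP forwarding enabled):

```ini
[Interface]
Address = 10.100.0.1/24
ListenPort = 51820
PrivateKey = <site-a-private-key>

[Peer]
PublicKey = <site-b-public-key>
Endpoint = site-b.example.com:51820
AllowedIPs = 10.100.0.2/32, 10.42.0.0/24   # peer tunnel IP + its LAN
PersistentKeepalive = 25
```

The Tailscale equivalent of that routing setup is a single `tailscale up --advertise-routes=10.42.0.0/24` plus an approval click in the admin console -- which is the whole trade-off in one line.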
CI/CD for Homelabs
infrastructure -- Continuous integration and delivery platforms that run on your own hardware.
Woodpecker CI if you run Gitea/Forgejo and want lightweight pipelines. Gitea Actions if you want GitHub Actions compatibility without GitHub. Jenkins if you need 1,000 plugins and don't mind the RAM. Self-hosted GitHub runners if your code already lives on GitHub.
| Feature | Woodpecker CI | Gitea Actions | GitHub Actions (self-hosted) | Drone | Jenkins |
|---|---|---|---|---|---|
| Config Format | YAML pipeline with steps, services, and matrix definitions | GitHub Actions YAML -- reuses existing workflow syntax and marketplace actions | GitHub Actions YAML -- identical syntax, runs on your runner instead of GitHub-hosted | YAML pipeline or Starlark (Python-like) for dynamic pipelines, Jsonnet also supported | Groovy-based Jenkinsfile (declarative or scripted), plus classic freestyle job UI |
| Container-Native | Every step runs in a container by default, Docker or Kubernetes backend | Each job runs in a container, supports Docker and LXC backends via act_runner | Runs on the host by default, container jobs require Docker on the runner | Every step is a container, built for Docker-first workflows from the start | Containers via Docker/Kubernetes plugin, but not container-native -- agents run on bare hosts by default |
| Resource Usage | ~50-80 MB RAM for the server, agents are lightweight Go binaries | Bundled with Gitea -- no separate service, adds ~30-50 MB to Gitea's footprint | Runner agent is ~100 MB RAM, but jobs can consume whatever the host allows | ~50-80 MB RAM for server, agents are lightweight, similar footprint to Woodpecker | ~512 MB - 1 GB+ RAM idle, Java-based, grows with plugins and build history, notoriously hungry |
| Forge Integration | Native integration with Gitea, Forgejo, GitHub, GitLab, Bitbucket via OAuth | Native to Gitea -- repositories, PRs, and status checks are first-class, zero config | GitHub only -- triggers from GitHub webhooks, status checks posted back to GitHub | Gitea, GitHub, GitLab, Bitbucket, Gogs -- broad forge support via OAuth | Any Git source via plugins (Git, GitHub, GitLab, Bitbucket), webhook-triggered or polled |
| Secrets Management | Per-repo and global secrets in UI, encrypted at rest, masked in logs | Repository and organization secrets in Gitea UI, compatible with GitHub Actions secrets syntax | GitHub-managed secrets (repo/org/environment level), encrypted, passed to runner at job time | Per-repo and global secrets, encrypted in DB, plugins for Vault and AWS Secrets Manager | Credentials plugin, HashiCorp Vault integration, folder-level credentials, extensive but complex |
| Matrix Builds | Built-in matrix with YAML syntax, fan-out across variable combinations | Full matrix strategy support -- identical syntax to GitHub Actions | Full matrix strategy support -- native GitHub Actions feature | Matrix via Starlark or Jsonnet for dynamic pipeline generation, YAML does not support native matrix | Matrix builds via Declarative Pipeline axis/axes directive, or scripted loop in Groovy |
| Caching | Volume-based caching between steps, S3-compatible cache plugin available | actions/cache compatible, configurable cache storage backend in act_runner | actions/cache works natively, caches stored on the runner host or in GitHub's cache service | Volume mounts for caching, S3 cache plugin, tmpfs for ephemeral caching | Stash/unstash for artifacts, workspace caching via plugins, or shared NFS mounts across agents |
| Plugin Ecosystem | Growing library of container-based plugins (Slack, S3, Docker, Helm), compatible with Drone plugins | Full GitHub Actions marketplace compatibility -- thousands of existing actions work out of the box | Full GitHub Actions marketplace -- every existing action and reusable workflow works | Large plugin library (200+), plugins are just Docker images with entrypoint conventions | 1,800+ plugins in the Jenkins plugin index, covers every tool and service imaginable, quality varies wildly |
| Self-Hosted Ease | Single binary or Docker container, SQLite or Postgres backend, 5-minute setup | Comes with Gitea -- enable in app.ini, register a runner, done | Install the runner binary on your host, register with a token, configure as a systemd service | Single binary or Docker, SQLite or Postgres, straightforward but Drone Cloud is discontinued | Docker or WAR file, requires Java, plugin installation through web UI, initial setup takes 30+ minutes |
| Best For | Gitea/Forgejo homelabs wanting lightweight, container-native CI with minimal overhead | Gitea users who want GitHub Actions compatibility without leaving the Gitea ecosystem | Teams with code on GitHub who want to run CI on their own hardware for speed or cost savings | Existing Drone users or those wanting a minimal container-native CI with broad forge support | Complex enterprise pipelines, shops that need 1,000+ plugins, legacy Java projects with Groovy expertise |
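Every matrix implementation in the table reduces to the same operation: take the axis definitions and fan them out into one job per combination. A minimal sketch of that expansion -- the axis names and values here are hypothetical, not taken from any particular pipeline:

```python
from itertools import product

def expand_matrix(axes: dict[str, list[str]]) -> list[dict[str, str]]:
    """Fan a matrix definition out into one job per combination,
    the way Woodpecker / GitHub Actions matrix strategies do."""
    names = list(axes)
    return [dict(zip(names, combo)) for combo in product(*axes.values())]

# Hypothetical axes, mirroring a typical two-axis CI matrix block
jobs = expand_matrix({
    "go": ["1.21", "1.22"],
    "platform": ["linux/amd64", "linux/arm64"],
})
for job in jobs:
    print(job)
```

Two values on each of two axes fan out to four jobs -- the same cartesian product whether it's declared in YAML (Woodpecker, Gitea Actions), generated from Starlark (Drone), or written as an axes directive in a Jenkinsfile.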
Database for Self-Hosted Apps
storage -- Choosing the right database backend for your homelab services.
PostgreSQL for anything that matters. SQLite for single-user apps where simplicity wins. MariaDB if the app requires MySQL compatibility and nothing else.
| Feature | PostgreSQL | MariaDB | SQLite |
|---|---|---|---|
| Concurrent Writers | MVCC-based, handles hundreds of concurrent writers without locking, row-level locks | InnoDB uses row-level locking, solid for multi-user workloads, table-level locks on MyISAM only | Single writer at a time -- WAL mode allows concurrent reads with one writer, fine for low-traffic self-hosted apps |
| JSON Support | Native JSONB type with GIN indexing, path queries, partial updates -- treat Postgres as a document store when needed | JSON type with JSON_TABLE and path extraction functions, less mature than Postgres JSONB but usable | json_extract() and json_each() functions, no indexing on JSON paths, adequate for config blobs |
| Full-Text Search | Built-in tsvector/tsquery with ranking, language-aware stemming, GIN/GiST indexes, no extension needed | Built-in FULLTEXT indexes on InnoDB (MariaDB 10.0.5+), boolean and natural language modes | FTS5 extension built into most distributions, fast for small-to-medium datasets, no stemming config |
| Replication | Streaming replication (async/sync), logical replication for selective table sync, pglogical for cross-version | MariaDB replication (async, semi-sync, Galera for multi-primary), binlog-based, mature and well-documented | No built-in replication -- the database is a single file, replicate by copying the file or using Litestream for WAL shipping |
| Backup Tooling | pg_dump for logical, pg_basebackup for physical, pgBackRest for incremental/parallel/S3, WAL archiving for PITR | mariadb-dump for logical, mariabackup (Percona XtraBackup fork) for hot physical backups, binlog for PITR | cp the .db file (with WAL checkpoint first), sqlite3 .backup command for online backup, Litestream for continuous replication to S3 |
| RAM Usage (idle) | ~30-50 MB with default config, shared_buffers defaults to 128 MB (tunable), connection pooling (PgBouncer) recommended for many apps | ~80-150 MB with default InnoDB buffer pool (128 MB default), tunable, lighter than Postgres at idle with small buffer pool | Zero server process -- it's a library linked into the application, no idle RAM cost beyond the app itself |
| Max Practical Size | Multi-TB databases in production, partitioning for tables above 100 GB, tested at petabyte scale in enterprise | Multi-TB is common, partitioning available, InnoDB handles large tables well with proper indexing | Practical limit around 1 TB (hard limit is 281 TB), performance degrades with many concurrent writers above ~100 GB |
| Extension Ecosystem | pgvector for AI embeddings, PostGIS for geospatial, pg_cron for scheduling, TimescaleDB for time series, pg_stat_statements for query analysis -- the extension ecosystem is unmatched | Limited extensions compared to Postgres, connect/spider for federation, ColumnStore for analytics, Galera for clustering | Minimal -- FTS5, R-Tree, JSON1 are built-in, no third-party extension ecosystem comparable to Postgres |
| Docker Image Size | ~80 MB (postgres:16-alpine), well-optimized, contrib extensions included (PostGIS and pgvector need separate images or packages) | ~120 MB (mariadb:11), heavier due to bundled tools and libraries | No image needed -- embedded in the application, zero additional containers |
| Used By | Immich, Authentik, Nextcloud, Grafana, Gitea, Miniflux, Tandoor, Paperless-ngx -- the default for serious self-hosted apps | Nextcloud (default), WordPress, Matomo, BookStack, Firefly III -- common in PHP-based self-hosted apps | Vaultwarden, Actual Budget, Calibre-web, Pi-hole (FTL), Home Assistant, Mealie -- dominant in single-user apps |
| Tuning Complexity | Moderate -- shared_buffers, work_mem, effective_cache_size matter, PGTune generates configs based on RAM/CPU | Moderate -- innodb_buffer_pool_size is the main knob, mysqltuner.pl helps, less tuning surface than Postgres | Near zero -- PRAGMA journal_mode=WAL and PRAGMA busy_timeout are the two things you might set, otherwise it just works |
| Best For | Primary database for multi-service homelabs -- run one Postgres instance and point Immich, Authentik, Gitea, and Grafana at it | Legacy PHP apps that require MySQL compatibility, WordPress hosting, environments migrating from MySQL | Single-user self-hosted apps where zero-ops matters -- Vaultwarden, Actual, anything that ships with SQLite by default |
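The SQLite tuning row above really is that short. A minimal sketch of the two PRAGMAs in action, using Python's stdlib sqlite3 (the temp-file path is an assumption for illustration -- WAL mode needs a real file, not an in-memory database):

```python
import sqlite3, tempfile, os

# WAL mode needs a real file; in-memory databases ignore it.
path = os.path.join(tempfile.mkdtemp(), "app.db")

conn = sqlite3.connect(path)
# The two settings the table calls out: WAL allows concurrent reads
# alongside a single writer, and busy_timeout makes a blocked writer
# wait instead of failing immediately with "database is locked".
mode = conn.execute("PRAGMA journal_mode=WAL").fetchone()[0]
conn.execute("PRAGMA busy_timeout=5000")
conn.execute("CREATE TABLE kv (k TEXT PRIMARY KEY, v TEXT)")
conn.execute("INSERT INTO kv VALUES ('theme', 'dark')")
conn.commit()

# A second connection can read while the first holds the database open.
reader = sqlite3.connect(path)
print(mode, reader.execute("SELECT v FROM kv WHERE k='theme'").fetchone()[0])
```

That is the entire "tuning" story for most single-user apps, which is exactly why so many of them ship with SQLite by default.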
NAS OS / Storage Platform
storage -- Dedicated operating systems and platforms for network-attached storage.
TrueNAS Scale if data integrity is non-negotiable and you have ECC RAM. Unraid if you want to mix drive sizes and add disks one at a time. OpenMediaVault for a lean Debian-based NAS on minimal hardware. Synology if you want appliance-level reliability and don't mind the vendor lock-in.
| Feature | TrueNAS Scale | Unraid | OpenMediaVault | Synology DSM |
|---|---|---|---|---|
| Filesystem | ZFS exclusively -- pools, vdevs, datasets, ARC cache, snapshots, scrubbing, the whole stack | XFS for individual array drives, Btrfs or XFS for cache pool, parity at the array level (not filesystem) | Your choice -- ext4, Btrfs, XFS, or ZFS via plugin, filesystem is independent of OMV itself | Btrfs on newer models (checksum, snapshots, replication), ext4 on older/value units |
| Docker/Container Support | Built-in Docker and Kubernetes (K3s) via Apps system, Helm charts in TrueCharts catalog | Docker via Dockerman built-in, Community Apps plugin for one-click installs -- the Unraid app ecosystem is massive | Docker via the compose plugin or Portainer, OMV-extras repo adds Docker setup in one click | Docker via Container Manager (DSM 7.2+), previously Docker package, limited to Synology's UI wrapper |
| VM Support | KVM-based VMs through the web UI, PCI passthrough supported, not the primary focus | KVM/QEMU VMs with GPU passthrough, libvirt backend, well-documented for Plex GPU transcoding | KVM via libvirt plugin, functional but not a first-class feature, better to run OMV under Proxmox | Virtual Machine Manager with KVM, USB and limited PCIe passthrough, decent for light VM use |
| Expandability (add single drives) | Single-drive raidz expansion only on OpenZFS 2.3+ (TrueNAS SCALE 24.10 and later) -- otherwise add entire new vdevs to the pool, or replace every drive in a vdev with larger ones | Add a single drive to the array anytime, parity rebuilds automatically, mix any size -- the defining Unraid feature | Add drives and mount them independently or create new arrays, mdadm RAID or mergerfs for pooling | Insert a drive into an open bay, Synology Hybrid RAID expands automatically -- appliance-level ease |
| Data Protection | ZFS checksums, scrubbing, mirror/raidz/raidz2/raidz3, self-healing from redundancy, send/recv replication | Single or dual parity, updated in real time on writes -- no checksumming on data drives by default (the Dynamix File Integrity plugin adds hashing) | Depends on filesystem and RAID choice -- mdadm RAID, SnapRAID plugin, or Btrfs RAID1/10 | Btrfs checksums and scrubbing on supported models, RAID via SHR (Synology Hybrid RAID), auto-rebuilds |
| App Ecosystem | TrueCharts (community Helm charts, 500+ apps), official iXsystems catalog, growing but sometimes unstable | Community Applications plugin -- 500+ one-click Docker apps, templates maintained by the community, extremely active | OMV-extras repo, Docker with Portainer for any container, smaller curated plugin set but full Docker access | Synology Package Center -- curated, stable, limited selection, Docker for anything not in the catalog |
| Web UI Quality | Modern Angular-based UI, comprehensive but dense, storage management is excellent, app management is evolving | Clean, functional UI, array/Docker/VM management in one place, real-time dashboard with per-disk stats | Functional Debian-admin UI, not as polished as competitors, gets the job done without frills | Best-in-class UI -- polished, responsive, consistent, DSM feels like a desktop OS in the browser |
| Community / Docs | Large community (forums, Reddit), official iXsystems documentation, TrueCharts Discord for app support | Massive community forums, active subreddit, SpaceInvaderOne YouTube tutorials, extensive wiki | Active forums and subreddit, documentation is adequate but less comprehensive than TrueNAS or Unraid | Huge user base, Synology knowledge base is excellent, community forums are active but less homelab-focused |
| Hardware Requirements | 8 GB RAM minimum (16+ recommended for ZFS), ECC RAM strongly recommended, x86_64 only | 2 GB RAM minimum (8+ recommended with Docker), any x86_64 hardware, runs well on older gear | 1 GB RAM minimum (2+ recommended), runs on Raspberry Pi, old laptops, anything Debian supports | Synology hardware only -- locked to their NAS units, can't install on custom hardware (XPEnology exists but unsupported) |
| Licensing / Cost | Free and open source, no feature restrictions, optional iXsystems support contracts | $59/$89/$129 one-time license (Basic/Plus/Pro), no subscription, Pro required for >12 drives and multiple arrays | Free and open source, no restrictions, community-maintained | Hardware purchase includes DSM license, no recurring software cost, but hardware is proprietary and premium-priced |
| Best For | Dedicated NAS/SAN where ZFS data integrity is the priority, ECC-equipped builds at 10.42.0.x serving iSCSI or NFS to Proxmox | Mixed-drive NAS with Docker apps, media servers (Plex/Jellyfin), flexible homelabs where drives accumulate over time | Lean NAS on minimal hardware, Raspberry Pi NAS builds, Debian users who want a web UI on top of standard Linux tools | Set-and-forget NAS appliance, users who value stability and polish over customization, small business file sharing |
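The expansion trade-off in the table comes down to simple arithmetic. A sketch of usable capacity for a mixed-size shelf, assuming Unraid-style single parity versus a raidz1-style vdev (which treats every member as the size of its smallest disk) -- a simplification that ignores filesystem overhead:

```python
def unraid_usable(drive_tb: list[int]) -> int:
    """Unraid-style single-parity array: the largest drive becomes
    parity, every other drive contributes its full capacity."""
    return sum(drive_tb) - max(drive_tb)

def raidz1_usable(drive_tb: list[int]) -> int:
    """raidz1-style vdev: every member is sized down to the smallest
    drive, and one drive's worth of capacity goes to parity."""
    return min(drive_tb) * (len(drive_tb) - 1)

mixed = [4, 8, 12]  # TB -- the mixed-size shelf Unraid is built for
print(unraid_usable(mixed), raidz1_usable(mixed))
```

With 4, 8, and 12 TB drives, single parity yields 12 TB usable while the raidz1 layout yields only 8 TB -- which is why accumulated mismatched drives push people toward Unraid, and matched drives bought up front suit ZFS.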
Identity Provider / SSO
security -- Single sign-on and identity management for self-hosted services.
Authelia if you just need MFA in front of your reverse proxy and want minimal resource usage. Authentik if you want a full identity provider with visual flow design and LDAP. Keycloak for enterprise deployments or Java shops.
| Feature | Authentik | Authelia | Keycloak |
|---|---|---|---|
| Type | Full identity provider -- handles user lifecycle, authentication, and authorization from one platform | Forward authentication proxy -- sits between your reverse proxy and your apps, adds MFA and basic SSO | Full identity provider -- enterprise-grade IAM originally built by Red Hat for JBoss middleware |
| Protocols | OAuth2, OIDC, SAML, LDAP outpost, proxy authentication -- covers every protocol a homelab app might need | Forward auth (Traefik/Nginx/Caddy), OpenID Connect for apps that support it, no SAML, no LDAP | OAuth2, OIDC, SAML 2.0, LDAP/Kerberos federation, UMA for fine-grained authorization -- the broadest protocol support |
| Resource Usage | ~2 GB RAM total across server + worker + PostgreSQL + Redis -- not lightweight | ~30-50 MB RAM for the single binary, optional Redis for session storage -- remarkably lightweight | ~1-1.5 GB RAM for the Keycloak server + PostgreSQL -- lighter than Authentik in total, but the single JVM process is the heaviest individual component of the three |
| Configuration | Web UI for everything (flows, providers, policies), YAML/Terraform for infrastructure-as-code | YAML files only -- configuration.yml and users_database.yml, no web UI for config management | Full web admin console with realm/client/role management, REST API, JSON realm import/export |
| User Directory | Built-in user directory with groups, attributes, and profile management, LDAP outpost to expose users to legacy apps | No built-in user directory -- requires external LDAP, file-based users, or OIDC delegation to another provider | Built-in user federation with LDAP/AD sync, custom user attributes, self-service account management |
| Flow Designer | Visual flow designer in the web UI -- drag-and-drop authentication stages (MFA, consent, enrollment, recovery) | No flow designer -- authentication flow is defined in YAML configuration with access control rules | Authentication flows configurable in the admin console, but no visual drag-and-drop -- forms-based flow editor |
| MFA Methods | TOTP, WebAuthn/FIDO2, Duo push, SMS (via provider), static recovery codes | TOTP, WebAuthn/FIDO2, Duo push | TOTP, WebAuthn/FIDO2, OTP via email/SMS (requires SPI), recovery codes, conditional per-client MFA |
| Traefik Integration | ForwardAuth middleware pointing at the Authentik outpost, plus native OIDC for apps that support it | ForwardAuth middleware -- Authelia was built for this exact pattern, first-class Traefik support | Proxy headers (X-Forwarded-User) via oauth2-proxy (Keycloak Gatekeeper is discontinued), less turnkey than Authelia/Authentik |
| Docker Complexity | 3-4 containers: server, worker, PostgreSQL, Redis -- docker-compose with 4 services minimum | 1 container + optional Redis for HA session storage -- the simplest deployment of the three | 1-2 containers: Keycloak server + PostgreSQL (or embedded H2 for testing) -- moderate complexity |
| Best For | Full-featured homelab IdP -- SSO for every app, LDAP for legacy services, visual flow customization, user self-service | MFA gateway in front of Traefik/Nginx with minimal resources -- protect 20 apps with a single container and a YAML file | Enterprise environments, Java shops, Red Hat ecosystem, organizations needing SAML federation with external partners |
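Both Authelia and Authentik plug into the same ForwardAuth contract: the proxy sends each incoming request's headers to the auth service and forwards the request only on a 200. A minimal conceptual sketch of that decision -- the session store, header names, and login URL are all stand-ins, not either project's actual implementation:

```python
# Minimal sketch of the ForwardAuth contract (all names hypothetical):
# the reverse proxy calls this with the original request's headers;
# 200 means pass the request through, 401 means redirect to login.

VALID_SESSIONS = {"abc123": "alice"}  # stand-in for a real session store

def forward_auth(headers: dict[str, str]) -> tuple[int, dict[str, str]]:
    cookie = headers.get("Cookie", "")
    session = dict(
        pair.strip().split("=", 1)
        for pair in cookie.split(";") if "=" in pair
    ).get("session", "")
    user = VALID_SESSIONS.get(session)
    if user is None:
        return 401, {"Location": "https://auth.example.internal/login"}
    # On success the auth service sets identity headers for the app
    return 200, {"Remote-User": user}

print(forward_auth({"Cookie": "session=abc123"}))
print(forward_auth({}))
```

Because the contract is just "HTTP status plus headers," swapping Authelia for Authentik (or vice versa) mostly means repointing one middleware URL in the proxy config.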
Password Manager
security -- Self-hosted password management for individuals and teams.
Vaultwarden for families and teams who want the full Bitwarden experience self-hosted. KeePassXC if you trust yourself with a local encrypted file and want zero server dependencies. Passbolt for teams that need audit trails and group-based sharing.
| Feature | Vaultwarden | KeePassXC | Passbolt |
|---|---|---|---|
| Architecture | Server (Rust) + Bitwarden clients (browser, desktop, mobile) -- centralized vault synced to all devices | Local encrypted database file (KDBX format) -- no server, no sync, the database is just a file on disk | Server (PHP/CakePHP) + browser extension + mobile apps -- centralized, team-oriented vault |
| Self-Hosted | Single Docker container, SQLite backend (or MySQL/Postgres), runs on a Raspberry Pi at 10.42.0.x with ~50 MB RAM | Not applicable -- there's no server, the KDBX file lives wherever you put it (Syncthing, NFS share, USB drive) | Docker stack with PHP server + MariaDB (PostgreSQL also supported) + email service, heavier deployment (~512 MB RAM) |
| Browser Extension | Official Bitwarden browser extension -- identical to the SaaS Bitwarden, point it at your Vaultwarden URL | KeePassXC-Browser extension communicates with the desktop app over a local socket, no server needed | Dedicated Passbolt browser extension -- required for adding and sharing passwords, tightly integrated |
| Mobile App | Official Bitwarden apps for iOS and Android -- autofill, biometric unlock, vault search, full-featured | KeePassDX (Android) or Strongbox (iOS) -- open-source apps that read KDBX files, autofill supported | Passbolt mobile app for iOS and Android -- vault access, sharing, autofill, requires server connection |
| Sharing | Organizations with collections -- invite family members or team members, granular per-collection access, free for unlimited users on Vaultwarden | Share the KDBX file via Syncthing, NFS, or a shared drive -- concurrent edits risk conflicts, no granular per-entry sharing | Teams and groups with role-based sharing -- share individual passwords or folders, audit log tracks who accessed what |
| Emergency Access | Built-in emergency access -- designate trusted contacts who can request access after a waiting period | No built-in mechanism -- store the master password in a sealed envelope, or share the KDBX file with a trusted person | No built-in emergency access -- admin can reset user accounts but no self-service emergency flow |
| 2FA | TOTP and WebAuthn/FIDO2 for vault login, stores TOTP tokens for other services (Authenticator feature) | YubiKey challenge-response for database unlock, key file as a second factor, no TOTP for the database itself | MFA for account login (TOTP, YubiKey), server-enforced MFA policies for teams |
| Offline Access | Cached vault on each client device -- works offline with last-synced data, syncs when back online | Always offline by design -- the KDBX file is the vault, no internet needed ever | No offline access -- requires connection to the Passbolt server for all operations |
| Resource Usage | ~50 MB RAM, negligible CPU -- one of the lightest self-hosted services you can run | Zero server resources -- runs as a desktop app, consumes ~50-80 MB RAM on the client | ~512 MB RAM for the server stack, the database adds its own footprint, heavier than Vaultwarden by 10x |
| Backup | sqlite3 database file (or pg_dump if using Postgres), attachments directory, RSA keys -- one cron job to back up | Copy the .kdbx file -- that's the entire backup, store it on multiple drives or in an encrypted cloud backup | Database dump (mariadb-dump, or pg_dump on Postgres) + GPG server keys + email config -- more components to back up, document the procedure |
| Best For | Families and homelab operators who want Bitwarden's full UX (autofill, mobile, sharing) on their own hardware | Solo users and privacy maximalists who want an encrypted file with no server, no cloud, no attack surface | Teams and small orgs that need password sharing with audit trails, compliance requirements, and group policies |
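For the Vaultwarden backup row, the safe way to copy a live SQLite database is the online-backup API rather than a raw cp, which can catch the file mid-write. A sketch using Python's stdlib sqlite3 -- the db.sqlite3 filename matches Vaultwarden's convention, but the directory, table, and row here are stand-ins for illustration:

```python
import sqlite3, tempfile, os

# Stand-in for Vaultwarden's data directory; the real database is
# typically /data/db.sqlite3 inside the container (path assumed here).
data_dir = tempfile.mkdtemp()
live = sqlite3.connect(os.path.join(data_dir, "db.sqlite3"))
live.execute("CREATE TABLE users (email TEXT)")
live.execute("INSERT INTO users VALUES ('admin@example.internal')")
live.commit()

# Connection.backup is the same mechanism the CLI's
# `sqlite3 db.sqlite3 '.backup backup.sqlite3'` uses -- safe to run
# while the server is up, unlike copying the raw file mid-write.
backup = sqlite3.connect(os.path.join(data_dir, "backup.sqlite3"))
live.backup(backup)

count = backup.execute("SELECT count(*) FROM users").fetchone()[0]
print(count)
```

Pair that with a copy of the attachments directory and the RSA keys, put it in a nightly cron job, and the Vaultwarden backup story from the table is complete.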