
Reverse Proxy Showdown

networking

Which reverse proxy fits your homelab?

TL;DR

Traefik for Docker-heavy setups, Caddy for simplicity with automatic HTTPS, NPM if you want a GUI, HAProxy when raw TCP/UDP performance matters.

| Feature | Traefik | Caddy | NPM | HAProxy |
| --- | --- | --- | --- | --- |
| Auto HTTPS | ACME built-in, DNS challenge support | Automatic by default -- zero config | GUI-based Let's Encrypt | Manual setup via haproxy-acme.sh or external tools |
| Docker Discovery | Native via container labels | Plugin (caddy-docker-proxy) | Built-in Docker socket listener | None -- static config or consul/etc template |
| Config Style | Labels + YAML/TOML static config | Caddyfile (human-readable) or JSON API | Web GUI with SQLite backend | Single flat config file, custom syntax |
| Performance (RPS) | ~30k RPS, fine for homelab scale | ~35k RPS, Go-based with HTTP/3 | ~20k RPS (nginx core behind a Node.js management layer) | ~60k+ RPS, purpose-built for load balancing |
| Learning Curve | Moderate -- label syntax is fiddly at first | Low -- Caddyfile is nearly self-documenting | Very low -- point and click | High -- powerful but config is dense |
| Middleware/Plugins | Rich built-in middleware (rate limit, auth, headers) | Modular plugins, build with xcaddy | Basic -- headers, SSL, access lists | Advanced ACLs, stick tables, Lua scripting |
| Dashboard / Monitoring | Built-in dashboard, Prometheus metrics | Admin API, no built-in dashboard | Built-in web UI with status indicators | Stats page, Prometheus exporter via haproxy_exporter |
| WebSocket Support | Native -- auto-detected through entrypoints | Native -- transparent proxying with no extra config | Native -- per-proxy-host toggle in the GUI | Native -- tune `timeout tunnel` for long-lived connections |
| gRPC Support | Native with h2c backend support, gRPC-web middleware available | Native HTTP/2 and gRPC proxying, automatic h2c to backends | Not exposed in the GUI -- the underlying nginx can proxy gRPC, but NPM doesn't surface it | Full gRPC support via HTTP/2 backends, requires `proto h2` on the server line |
| Rate Limiting | Built-in rateLimit middleware with configurable average/burst per source IP | Via the caddy-ratelimit plugin (not in the standard build), sliding window per client | Not built-in -- relies on upstream or Cloudflare for rate limiting | Stick tables with `sc_http_req_rate` -- extremely granular, per-URL, per-IP, per-header |
| IP Whitelisting | ipAllowList middleware, CIDR ranges like 10.42.0.0/24 | remote_ip matcher in Caddyfile, supports CIDR notation | Access list feature in GUI, per-proxy-host IP restrictions | ACLs with `src` keyword, CIDR and individual IPs, map files for large lists |
| Custom Error Pages | errors middleware with custom HTML per status code | handle_errors directive with per-code custom responses | Built-in custom error page support per proxy host via GUI | errorfile directive per backend, supports per-status-code HTML files |
| Best For | Dynamic container environments with frequent deploys | Static sites, small stacks, WireGuard tunnels | Non-technical users, quick setup, visual management | High-traffic TCP/HTTP load balancing, multi-backend failover |
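To make the "fiddly label syntax" concrete, here is a minimal sketch of Traefik's Docker discovery -- a hypothetical docker-compose service that publishes itself through labels (the router name, hostname, and `letsencrypt` resolver name are placeholder assumptions, not from any real setup):

```yaml
# docker-compose.yml fragment -- Traefik reads these labels from the Docker socket
services:
  whoami:
    image: traefik/whoami
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.whoami.rule=Host(`whoami.example.com`)"
      - "traefik.http.routers.whoami.entrypoints=websecure"
      - "traefik.http.routers.whoami.tls.certresolver=letsencrypt"
```

The equivalent Caddyfile would be two lines (`whoami.example.com { reverse_proxy whoami:80 }`), which is the simplicity trade-off the table is pointing at.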

Storage Filesystem Matrix

storage

Choosing a filesystem for your NAS or storage server.

TL;DR

ZFS if you have ECC RAM and matching drives. MergerFS+SnapRAID for mixed/shucked drives on a budget. Btrfs if you want snapshots without the RAM overhead and run a supported RAID level (1, 10). bcachefs if you want ZFS-grade features on a single-disk or small-pool setup without the RAM tax.

| Feature | ZFS | Btrfs | MergerFS + SnapRAID | bcachefs |
| --- | --- | --- | --- | --- |
| Data Integrity | Checksums on all data + metadata, self-healing with redundancy | Checksums on data + metadata by default, self-healing with RAID1/10 | SnapRAID parity checks on schedule, no real-time checksumming | Full data + metadata checksums, self-healing with replication, per-inode checksum granularity |
| RAM Requirements | 1 GB per TB is the oft-quoted rule of thumb; the ARC cache is hungry but tunable | Minimal -- works fine with 2-4 GB total | Negligible -- both are userspace tools, no kernel memory pressure | Low -- no ARC equivalent, uses the kernel page cache like ext4/Btrfs |
| Mixed Drive Sizes | Possible but wasteful -- vdevs should match, smallest drive in a mirror wins | Flexible -- can mix sizes in RAID1, not recommended for RAID5/6 | Built for this -- each drive is independent, the pool is the union of all drives | Supports heterogeneous devices in a single filesystem, tiered storage (SSD + HDD) built in |
| Expandability | Can add new vdevs; single-drive raidz expansion only arrived with OpenZFS 2.3 | Can add devices to existing arrays, online resize supported | Add a drive any time, run snapraid sync -- done in minutes | Online device add and remove, grow and shrink supported |
| Snapshots | Near-instant, COW-based, send/recv for replication | COW snapshots, send/receive supported, subvolume-based | No true snapshots -- SnapRAID parity is a scheduled point-in-time recovery point, not instant rollback | COW snapshots with reflink support, snapshot-based send/receive |
| Scrubbing | zpool scrub -- verifies every block against checksums | btrfs scrub -- checks checksums, auto-repairs with redundancy | snapraid scrub -- verifies parity data against file checksums | bcachefs data scrub -- verifies checksums, repairs from replicas if available |
| COW Overhead | Fragmentation on random writes; recordsize tuning helps, zvols mitigate for VMs | Fragmentation is notorious on Btrfs; the autodefrag mount option helps but doesn't eliminate it | No COW -- the underlying ext4/XFS writes in place, zero fragmentation concern | Lower fragmentation than Btrfs thanks to an improved allocator, but random writes still pay the COW tax |
| Native Encryption | Yes -- dataset-level encryption (aes-256-gcm), key per dataset, raw send preserves encryption | No native encryption -- use dm-crypt/LUKS underneath, adds a layer | No -- use LUKS on the underlying drives; SnapRAID operates on cleartext | Yes -- whole-filesystem encryption set at format time (ChaCha20/Poly1305) |
| Write Performance | Excellent with a proper SLOG/ZIL; sync writes can bottleneck without one | Good general performance; RAID5/6 has the known write-hole issue | Native filesystem speed (ext4/XFS); SnapRAID is offline, so no write penalty | Strong write performance with copygc and tiered writeback; an SSD journal tier accelerates HDD pools |
| Gotchas | No shrinking pools, dedup eats RAM (don't enable it), CDDL license (not GPL) | RAID5/6 is still marked unstable -- use RAID1/10 only in production | No real-time protection -- data written between syncs is unprotected, manual cron setup | Mainlined in kernel 6.7 but multi-device RAID is still experimental; single-device is stable for daily use |
| Recommended Use Case | Bulk NAS storage at 10.42.0.x serving Proxmox VMs, Immich photo libraries, media collections where bit-rot protection is non-negotiable | Root filesystem for Linux desktops/servers, Synology-style snapshots, Timeshift system rollbacks on a homelab workstation | Shucked-drive NAS on Unraid or a standalone Debian box, media libraries where drives get swapped regularly and parity rebuilds are tolerable | Single-disk root filesystem with snapshots, SSD+HDD tiered pool for a compact homelab server, or a ZFS alternative when RAM is limited |
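The ZFS workflow in the table -- create, snapshot, replicate, scrub -- looks like this in practice. A rough sketch only: pool name, device paths, and the `backup` host are placeholders, and you would adapt the layout (mirror vs raidz) to your drives:

```shell
# create a mirrored pool from two matching drives (device names are examples)
zpool create tank mirror /dev/sda /dev/sdb

# dataset with compression, then a snapshot
zfs create -o compression=lz4 tank/media
zfs snapshot tank/media@2024-01-01

# replicate the snapshot to another machine over SSH
zfs send tank/media@2024-01-01 | ssh backup zfs recv backuppool/media

# periodic integrity pass -- verifies every block against its checksum
zpool scrub tank
```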

K8s Distribution Picker

kubernetes

Lightweight Kubernetes distributions for homelab and edge.

TL;DR

K3s is the default pick for homelabs -- lightweight, well-documented, huge community. Talos for immutable infrastructure purists. MicroK8s if you want snap-based simplicity.

| Feature | K3s | MicroK8s | k0s | Talos |
| --- | --- | --- | --- | --- |
| Resource Footprint | ~512 MB RAM minimum, single ~70 MB binary | ~540 MB RAM, runs as a snap package | ~512 MB RAM, single binary, no host deps | ~512 MB RAM, the entire OS is the cluster -- nothing else runs |
| HA Support | Embedded etcd or external DB (MySQL, Postgres, etcd); the default embedded SQLite is single-node only | 3-node HA with dqlite (distributed SQLite) | Embedded etcd or external etcd | Built-in etcd-based HA, designed for multi-node from day one |
| Ease of Install | One curl command, runs in 30 seconds | snap install microk8s, enable addons via microk8s enable | Single binary, k0sctl for multi-node automation | Write the Talos image to disk, configure via API -- no SSH, no shell |
| Default CNI | Flannel (can swap to Calico, Cilium) | Calico (via addon), can use Flannel/Cilium | Kube-router (can swap to Calico, Cilium) | Flannel by default, Cilium supported |
| Built-in Components | Traefik ingress, ServiceLB, local-path storage, Helm controller | Addons system: dns, storage, ingress, metallb, gpu, istio | Minimal -- bring your own everything | Minimal -- designed to be declarative, no bundled extras |
| Upgrade Path | In-place binary swap or the system-upgrade-controller | snap refresh microk8s --channel=1.29 | k0s update via k0sctl, rolling upgrades supported | API-driven rolling upgrades, no SSH needed, fully automated |
| ARM Support | First-class ARM64/ARMv7 support, popular on Raspberry Pi | ARM64 supported, ARMv7 limited | ARM64 supported | ARM64 supported, SBC images available |
| Best For | Homelab, edge, Raspberry Pi clusters, CI/CD environments | Developers who want quick local K8s, Ubuntu-centric shops | Air-gapped environments, minimal-dependency deployments | Production-grade immutable infrastructure, GitOps-native setups |
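The "one curl command" for K3s, plus joining a second node, looks like this (the server hostname and `<token>` are placeholders -- the token lives at /var/lib/rancher/k3s/server/node-token on the server):

```shell
# server (control plane)
curl -sfL https://get.k3s.io | sh -

# join an agent node, pointing at the server's API and token
curl -sfL https://get.k3s.io | K3S_URL=https://server:6443 K3S_TOKEN=<token> sh -
```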

Backup Tool Shootout

storage

Deduplicated backup tools for homelab data protection.

TL;DR

Restic for broad cloud backend support and simplicity. Borg for maximum compression and proven track record. Kopia if you want a modern UI with snapshots. Duplicati if non-technical users need to manage backups.

| Feature | Restic | Borg | Kopia | Duplicati |
| --- | --- | --- | --- | --- |
| Cloud Backends | S3, B2, Azure, GCS, SFTP, rclone (30+ providers) | SFTP, sshfs -- no native cloud (use rclone mount as a workaround) | S3, B2, Azure, GCS, SFTP, rclone, local, WebDAV | S3, B2, Azure, GCS, FTP, SFTP, WebDAV, OneDrive, Google Drive |
| Deduplication | Content-defined chunking, global dedup per repo | Chunk-based, global dedup, variable-length chunks | Content-defined chunking, global dedup, configurable chunk size | Block-level dedup with a rolling hash |
| Encryption | AES-256-CTR + Poly1305 MAC, always on | AES-256-CTR + HMAC-SHA256, optional but recommended | AES-256-GCM or ChaCha20-Poly1305, configurable | AES-256, always on, password-based key derivation |
| Speed | Fast backup and restore, parallel chunk processing | Strong compression (lz4/zstd/lzma), slightly slower initial backup due to indexing | Very fast -- Go-based, parallel uploads, aggressive caching | Slower -- C#/.NET overhead, especially on large changesets |
| GUI | CLI only (Backrest or resticprofile for web UI wrappers) | CLI only (Vorta for a desktop GUI wrapper) | Built-in web UI on localhost:51515, plus CLI | Full web GUI built in, runs as a system service |
| Large Repo Handling | Handles multi-TB repos well; prune can be slow on very large repos | Excellent -- compaction and check are well optimized | Good -- parallel maintenance operations, snapshot pinning | Struggles above 1-2 TB -- database bloat, slow verification |
| Restore Granularity | Mount the repo as a FUSE filesystem, restore individual files or full snapshots | FUSE mount or extract, file-level granularity | FUSE mount, file-level restore, snapshot browsing via UI | Web UI restore picker, file-level, download as zip |
| Concurrent Backups | Lock files coordinate access -- concurrent backups to one repo are supported; prune/check take an exclusive lock | Repo-level locking prevents concurrent access, one operation at a time per repo | Built-in support for concurrent snapshots to the same repo, lock-free architecture | One backup per destination at a time, queue additional jobs in the scheduler |
| Bandwidth Limiting | --limit-upload and --limit-download flags, per-connection throttle in KiB/s | --upload-ratelimit (formerly --remote-ratelimit) for remote repos | --max-upload-speed and --max-download-speed flags, per-session throttle | Built-in bandwidth throttle in web UI and CLI, per-operation limit |
| Exclude Patterns | --exclude with glob patterns, --exclude-file for lists, --iexclude for case-insensitive matching | --exclude with fnmatch patterns, --exclude-from for file-based lists | Policy-based ignores (kopia policy set --add-ignore) with gitignore-style patterns, plus per-directory .kopiaignore files | Filter groups in GUI or CLI, regex and glob, per-backup-set exclude lists |
| S3-Compatible Backends | Native S3 support (AWS, MinIO, Wasabi), native B2, rclone for everything else | No native S3 -- mount via rclone FUSE or use borg serve over SSH to a cloud VM | Native S3, B2, GCS, Azure, and Wasabi support, plus an rclone backend for anything else | Native S3, B2, Azure, GCS, WebDAV -- built in for all major providers |
| Snapshot Browsing | restic mount exposes snapshots as a FUSE filesystem, browse with any file manager | borg mount exposes archives via FUSE, borg list for CLI browsing | Built-in web UI at localhost:51515 with a snapshot browser, file-level restore, and diff between snapshots -- the GUI is a genuine feature, not an afterthought | Web UI with point-and-click restore, file browser per backup version, download as zip |
| Scheduling | External (cron, systemd timers), or wrappers like autorestic | External scheduling only (cron, systemd) | Built-in scheduling via CLI or web UI policies | Built-in scheduler with retention policies and email notifications |
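A typical restic lifecycle -- init, backup with excludes, retention, browse -- as a sketch (the bucket name, paths, and excludes file are placeholders; RESTIC_PASSWORD/AWS credentials are assumed to be in the environment):

```shell
export RESTIC_REPOSITORY=s3:s3.amazonaws.com/my-bucket   # placeholder bucket

restic init
restic backup ~/data --exclude-file=excludes.txt
restic forget --keep-daily 7 --keep-weekly 4 --keep-monthly 6 --prune

# browse snapshots as a filesystem (FUSE)
restic mount /mnt/restic
```

The `forget --keep-*` flags implement the retention policy; `--prune` actually deletes unreferenced data and takes the exclusive lock mentioned in the table.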

DNS / Ad-Block Solutions

networking

Network-wide DNS filtering and ad blocking.

TL;DR

AdGuard Home for encrypted DNS and a cleaner UI. Pi-hole for the largest community and ecosystem. Technitium if you need a full authoritative DNS server alongside blocking.

| Feature | Pi-hole | AdGuard Home | Technitium |
| --- | --- | --- | --- |
| Web UI | Dashboard-focused, query log, per-client stats, group management | Modern UI, per-client settings, dark mode, built-in query log | Full DNS server UI -- zones, records, DNSSEC, plus a blocking dashboard |
| Encrypted DNS | Not built in -- needs a cloudflared or unbound sidecar for encrypted upstream | Native DoH, DoT, DoQ, DNSCrypt -- both as client and server | Native DoH, DoT -- serves encrypted DNS directly to clients |
| Blocklist Management | Gravity-based, supports multiple lists, regex filtering, group assignment | Built-in lists, custom filtering rules with adblock syntax, per-client overrides | Built-in block lists, custom zones, regex, supports ABP filter syntax |
| DHCP Server | Built-in DHCP server as an alternative to router DHCP | Built-in DHCP server with static leases | Built-in DHCP server, integrated with DNS |
| Resource Usage | ~80 MB RAM, runs on a Pi Zero W, SQLite-backed FTL engine | ~50 MB RAM, single Go binary, no database dependency | ~120 MB RAM, .NET-based, heavier but full-featured |
| API / Automation | Full REST API, Teleporter for backup/restore, Ansible roles available | REST API, YAML config file, easy to containerize and replicate | REST API, DNS zone import/export, config backup/restore |
| Upstream Resolution | Unbound for recursive, or forward to any upstream (Cloudflare, Quad9) | Built-in upstream options, DNS rewrites, parallel queries to multiple upstreams | Built-in recursive resolver (no forwarding needed), conditional forwarding |
| Multi-Instance Sync | Gravity Sync or Orbital Sync for multi-node setups | No native config sync -- use file sync or container orchestration | Zone transfer (AXFR/IXFR) between instances -- standard DNS replication |
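For reference, a minimal AdGuard Home deployment as a compose sketch -- volume paths follow the image's documented layout, and the host port mappings are assumptions you would adjust (port 3000 only serves the first-run setup wizard):

```yaml
services:
  adguardhome:
    image: adguard/adguardhome
    ports:
      - "53:53/udp"
      - "53:53/tcp"
      - "3000:3000/tcp"   # first-run setup wizard
    volumes:
      - ./work:/opt/adguardhome/work
      - ./conf:/opt/adguardhome/conf
    restart: unless-stopped
```

After setup, point your router's DHCP-advertised DNS at this host and every client on the LAN gets filtering with no per-device config.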

Hypervisor Comparison

infrastructure

Virtualization platforms for homelab and small business.

TL;DR

Proxmox for most homelabs -- free, full-featured, active community. Unraid if you primarily want NAS + some VMs. XCP-ng for a fully open-source vSphere alternative.

| Feature | Proxmox VE | ESXi (Free) | XCP-ng | Unraid |
| --- | --- | --- | --- | --- |
| Licensing | Free and open source (AGPL), optional paid support subscription | Free tier discontinued for new installs as of 2024, requires vSphere licensing | Fully open source (GPLv2), optional paid support from Vates | Paid license required ($59 Basic / $89 Plus / $129 Pro), no free tier |
| Clustering / HA | Built-in clustering with Corosync, live migration, HA with fencing | No clustering in the free tier -- requires vCenter ($$) | Pool-based clustering, live migration, HA with built-in tooling | No clustering -- single node only, no live migration |
| Container Support | LXC containers as first-class citizens alongside VMs | No native container support -- VMs only | No native container support -- VMs only | Docker built in (dockerMan UI), containers installed via Community Apps |
| GPU Passthrough | Full IOMMU passthrough, mediated (vGPU) with supported cards | Full passthrough, vGPU with NVIDIA GRID/vGPU licensed drivers | GPU passthrough supported, less community documentation | NVIDIA and AMD passthrough via VFIO, well documented for Plex/Jellyfin |
| Storage Backends | ZFS, LVM, LVM-thin, Ceph, NFS, iSCSI, GlusterFS | VMFS, NFS, vSAN (licensed), iSCSI | Local LVM, NFS, iSCSI, GlusterFS, XOSAN | XFS/Btrfs array disks with unRAID parity, Btrfs cache pools, ZFS plugin available |
| Backup / Restore | Proxmox Backup Server (free, dedup, incremental), vzdump built in | vSphere Data Protection or third-party (Veeam, Nakivo), ghettoVCB script | Xen Orchestra backup with delta/incremental, continuous replication | Community Apps: Appdata Backup, CA Backup plugin, manual rsync |
| Web UI | Full management UI, console, task viewer, resource graphs | DCUI (console) + vSphere Client (web) -- limited in the free tier | Xen Orchestra Lite (free) or full XOA (built from source or paid appliance) | Dashboard with Docker, VMs, shares, and array management all in one |
| Community / Ecosystem | Massive community, forums, subreddit, extensive wiki | Enterprise-focused, declining homelab relevance since the free tier's removal | Growing community, Vates-backed development, XCP-ng forums | Large community, active forums, huge Community Apps ecosystem |
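Two of the Proxmox features above -- GPU passthrough and vzdump backups -- reduce to one-liners on the CLI. A sketch with placeholder values: VM ID 100 and the PCI address are examples, and IOMMU must already be enabled in BIOS and kernel cmdline:

```shell
# pass a GPU through to VM 100 (PCI address is an example)
qm set 100 -hostpci0 0000:01:00.0,pcie=1

# snapshot-mode backup of the same VM to the local storage pool
vzdump 100 --storage local --mode snapshot --compress zstd
```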

Container Runtime Comparison

kubernetes

Container runtimes for development and production.

TL;DR

Docker for development and standalone homelab stacks. Podman for rootless security and systemd integration. containerd if you only need a K8s runtime. CRI-O for minimal, K8s-only clusters.

| Feature | Docker | Podman | containerd | CRI-O |
| --- | --- | --- | --- | --- |
| Architecture | Client-server daemon (dockerd), always running | Daemonless, fork-exec model, no persistent process | Daemon-based but minimal, gRPC API, used as a K8s backend | Daemon-based, implements the CRI spec directly, OCI-compliant |
| Rootless Mode | Supported since 20.10 but requires extra setup (slirp4netns, uidmap) | Rootless by default, first-class support, subuid/subgid mapping built in | Rootless supported via RootlessKit, less documented | Rootless supported but not a primary use case |
| Compose / Stacks | Docker Compose (V2 built in as a plugin), Swarm mode for clustering | podman-compose or podman kube play for K8s YAML, Quadlet for systemd | No compose equivalent -- designed as a runtime, not a user tool | No compose -- purely a Kubernetes CRI implementation |
| K8s Integration | Dockershim removed in K8s 1.24; still works via the cri-dockerd shim | Generate K8s YAML from running pods, kube play for local testing | Default runtime for most K8s distributions (K3s, GKE, EKS, AKS) | Default runtime for OpenShift, optimized for K8s and nothing else |
| Image Building | docker build with the BuildKit backend, multi-stage, layer caching | Buildah integration (podman build uses Buildah under the hood), OCI-native | Not a build tool -- use BuildKit, kaniko, or nerdctl for builds | Not a build tool -- use Buildah, kaniko, or any OCI builder |
| Networking | Built-in bridge, overlay, macvlan, host networking, Docker DNS | Netavark/CNI networking, slirp4netns for rootless, pasta for newer setups | CNI-based, no built-in DNS, relies on the K8s networking layer | CNI-based, designed to use K8s networking (Flannel, Calico, Cilium) |
| Systemd Integration | Runs as a systemd service, containers are children of dockerd | Quadlet generates systemd units from containers, auto-update supported | Runs as a systemd service, minimal cgroup integration | Runs as a systemd service, integrates with the K8s kubelet lifecycle |
| Best For | Local development, CI/CD pipelines, standalone container hosting | Rootless workloads, systemd-native servers, security-conscious setups | K8s clusters that need a reliable, minimal runtime | OpenShift / minimal K8s-only environments, compliance-heavy setups |
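The Quadlet integration mentioned above turns a plain ini file into a systemd-managed container. A rough sketch -- the unit name, image, and port are example values:

```ini
# ~/.config/containers/systemd/whoami.container (rootless Quadlet unit)
[Unit]
Description=Example web container

[Container]
Image=docker.io/traefik/whoami:latest
PublishPort=8080:80
AutoUpdate=registry

[Install]
WantedBy=default.target
```

After `systemctl --user daemon-reload`, Quadlet generates a `whoami.service` you start and enable like any other unit; `AutoUpdate=registry` opts the container into `podman auto-update`.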

Monitoring Stack Showdown

infrastructure

Metrics collection, alerting, and observability platforms for homelabs.

TL;DR

Prometheus + Grafana for Kubernetes and containers. Netdata for real-time per-second visibility with zero config. Zabbix if you have 100+ hosts and need enterprise features. InfluxDB if you're writing custom metrics from IoT sensors.

| Feature | Prometheus | InfluxDB | Zabbix | Netdata |
| --- | --- | --- | --- | --- |
| Data Model | Time series with labels (multi-dimensional); metric types: counter, gauge, histogram, summary | Time series with tags and fields, supports events and annotations natively | Items, triggers, and events -- more of a monitoring model than a pure metrics store | Per-second metrics with automatic dimensioning, streams raw data in real time |
| Query Language | PromQL -- powerful, steep learning curve, designed for time series aggregation and alerting | InfluxQL (SQL-like) or Flux (functional scripting) -- Flux is more powerful but harder to learn | Zabbix trigger expressions with macro-based thresholds, calculated items for derived metrics | None -- dashboards are pre-built and auto-generated, API access for programmatic queries |
| Storage Engine | Custom TSDB on local disk, 2-hour blocks compacted over time, WAL for crash recovery | Custom TSM engine (Time-Structured Merge Tree), configurable retention policies per database | PostgreSQL, MySQL, or Oracle backend -- not a native TSDB, relies on partitioning for performance | Custom ring buffer in RAM with optional disk spillover via dbengine, configurable retention tiers |
| Pull vs Push | Pull-based -- Prometheus scrapes /metrics endpoints on a schedule | Push-based -- agents (Telegraf) write data to InfluxDB over HTTP or UDP | Both -- the Zabbix agent pushes to the server, which can also poll SNMP/IPMI/JMX targets | Pull-based with streaming -- the Netdata agent collects locally, streams to a parent for a centralized view |
| Alerting | Alertmanager, shipped as a separate component -- routes, silences, grouping, dedup, webhook/email/PagerDuty | Built-in alerting in InfluxDB (checks and notification rules), Kapacitor for advanced pipelines | Full alerting engine with escalation, maintenance windows, dependencies, media types (email/SMS/Slack) | Built-in alarm system with health checks, email and webhook notifications, no external component needed |
| Dashboard | No built-in dashboard -- Grafana is the standard pairing; the Prometheus UI is for ad-hoc queries only | Built-in UI for data exploration, Grafana integration for dashboards, Chronograf as a legacy option | Built-in dashboard with maps, graphs, topology, screens -- fully self-contained | Built-in real-time dashboard with per-second resolution, zero-config auto-generated charts for every metric |
| Agent Required | Exporters per target -- node_exporter for hosts, cAdvisor for containers, thousands of exporters available | Telegraf agent with 300+ input plugins, or any client library that speaks the InfluxDB line protocol | Zabbix agent (active or passive), also agentless via SNMP, IPMI, SSH, JMX, HTTP checks | Netdata agent on each host, auto-discovers everything (disks, network, containers, services) on install |
| Resource Usage | ~200-500 MB RAM depending on cardinality, CPU spikes during compaction, disk proportional to series count | ~500 MB - 1 GB RAM, heavier on writes, TSM compaction is I/O-intensive on large datasets | ~500 MB - 2 GB RAM for the server; the database backend adds its own overhead, scales with host count | ~100-300 MB RAM per agent, a parent node uses more with streaming, designed to be lightweight per host |
| Plugin Ecosystem | Massive -- thousands of exporters, client libraries in every language, Helm charts for K8s | Telegraf plugin ecosystem (300+ inputs, 40+ outputs), client libraries for Go, Python, Java, JS | Templates for 1000+ devices (Cisco, Dell, HP, Linux, Windows), custom item prototypes, LLD rules | 200+ built-in collectors, auto-detection, community plugins, Netdata Cloud for fleet management |
| Best For | Kubernetes clusters, container environments, microservices -- anywhere you need label-based dimensional metrics | IoT sensor data, custom application metrics, time series workloads where push is easier than pull | Enterprise/SMB monitoring with 100+ hosts, network gear (SNMP), traditional infra with escalation workflows | Quick per-host visibility, real-time debugging, homelabs where you want instant dashboards with no setup |
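Prometheus's pull model from the table is just a list of scrape targets. A minimal prometheus.yml sketch -- the node_exporter target IP is a placeholder in this article's 10.42.0.x scheme:

```yaml
# minimal prometheus.yml: scrape the server itself plus one node_exporter
global:
  scrape_interval: 15s
scrape_configs:
  - job_name: prometheus
    static_configs:
      - targets: ["localhost:9090"]
  - job_name: node
    static_configs:
      - targets: ["10.42.0.10:9100"]   # placeholder host running node_exporter
```

Pair it with Grafana and a PromQL query like `rate(node_cpu_seconds_total{mode!="idle"}[5m])` for per-core CPU usage.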

VPN / Mesh Networking

networking

Connecting homelabs across sites with encrypted tunnels.

TL;DR

Tailscale for the fastest path from zero to connected. Bare WireGuard if you want full control and minimal dependencies. Headscale if you want Tailscale's UX without the SaaS. Netbird if you need SSO integration with your identity provider.

| Feature | Tailscale | WireGuard (bare) | ZeroTier | NetBird |
| --- | --- | --- | --- | --- |
| Architecture | Hosted coordination server (control plane), direct P2P data plane with DERP relays as fallback when NAT is tricky | Self-managed -- you configure keys and endpoints on each peer, no coordination server | Hosted root servers + self-hostable controllers (ZeroTier Central or ztncui), P2P with relay fallback | Self-hosted control plane (Management + Signal + TURN), P2P data plane with WireGuard underneath |
| Setup Complexity | One command per device, OAuth login, done -- the bar is on the floor | Generate keys, exchange public keys, write config files per peer, manage endpoints manually | Install agent, join network by ID, authorize in the web console -- moderate | Deploy 3 containers (management, signal, TURN), create an account, install agents -- moderate-to-high initial setup |
| NAT Traversal | Excellent -- STUN, DERP relays, hole punching, works behind CGNAT and most firewalls | None built in -- you manage port forwarding or route through a relay peer via AllowedIPs | Good -- ZeroTier handles NAT traversal with root servers, falls back to relaying when direct fails | Good -- built-in STUN/TURN, WireGuard-based hole punching, relay fallback for stubborn NATs |
| Speed / Overhead | WireGuard underneath, near-native speed when P2P connects, ~5-10% overhead through a DERP relay | Kernel-level, near-wire speed, ~3-5% overhead, the fastest option by design | Userspace networking, ~10-15% overhead vs bare WireGuard, adequate for homelab traffic | WireGuard kernel module underneath, similar to Tailscale performance, relay adds ~5-10% overhead |
| Access Control | ACL policies in JSON (HuJSON) via the admin console or GitOps, tag-based rules, group policies | Manual iptables/nftables on each peer, or use AllowedIPs to restrict routing per peer | Flow rules in ZeroTier Central, L2/L3 rules, capability-based access per member | Policy engine with rules, groups, and network routes, managed via web UI or API |
| Mobile Support | iOS and Android apps, one-tap connect, MDM profiles supported | Official WireGuard apps for iOS and Android, manual config import via QR code | iOS and Android apps via ZeroTier One, join a network by ID | iOS and Android apps, SSO login on mobile, auto-connect profiles |
| Self-Hosted Option | Headscale -- open-source coordination server, full Tailscale client compatibility, growing fast | Native -- WireGuard is self-hosted by nature, you own every piece | ZeroTier One is open source; ztncui or ZeroUI for a self-hosted controller web UI | Fully self-hosted from day one -- management server, signal server, TURN relay, all open source |
| MagicDNS | Built in -- devices get hostname.tailnet-name.ts.net, split DNS for internal domains | No DNS -- you manage /etc/hosts, CoreDNS, or your own resolver | No built-in DNS -- use ZeroNSD (community) or external DNS pointed at ZeroTier IPs | Built-in DNS for peer resolution, custom nameservers configurable per network |
| Subnet Routing | Advertise local subnets from any node (--advertise-routes=10.42.0.0/24), approve in the admin console | AllowedIPs + IP forwarding + masquerade -- works, but you configure every piece manually | Managed routes in ZeroTier Central, bridge mode for L2 subnet sharing | Network routes configured in the management UI; any peer can act as a router for its local subnet |
| SSO Integration | OAuth with Google, Microsoft, GitHub, Okta, OneLogin, custom OIDC -- built in | None -- WireGuard has no concept of identity beyond public keys | No native SSO -- API tokens and manual member authorization | OIDC integration with Authentik, Keycloak, Okta, Azure AD, Google -- first-class SSO support |
| Pricing | Free for up to 100 devices (personal), paid plans for teams with user management | Free and open source, zero cost, zero SaaS dependency | Free for up to 25 devices, paid plans for more nodes and priority support | Free and open-source self-hosted, no device limits, optional paid SaaS offering |
| Best For | Connecting homelab sites with minimum effort, remote access to 10.42.0.0/24 from anywhere | Full control freaks, minimal-dependency setups, site-to-site tunnels where both sides have static IPs | L2 networking use cases, gaming over WAN, bridging remote subnets as if they were local | Self-hosted mesh with SSO -- teams that want Tailscale-level UX with Authentik/Keycloak integration |
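The "you configure every piece manually" row for bare WireGuard means a config like this on each end. A sketch only -- addresses are placeholders and the keys stay elided (generate real ones with `wg genkey | tee private.key | wg pubkey`):

```ini
# /etc/wireguard/wg0.conf -- site-to-site sketch, one side shown
[Interface]
Address = 10.100.0.1/24
PrivateKey = <this-host-private-key>
ListenPort = 51820

[Peer]
PublicKey = <remote-peer-public-key>
Endpoint = peer.example.com:51820
AllowedIPs = 10.100.0.2/32, 10.42.0.0/24   # peer's tunnel IP plus its LAN
```

`AllowedIPs` does double duty as routing table and firewall, which is exactly the manual access-control story the table describes. Bring it up with `wg-quick up wg0`.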

CI/CD for Homelabs

infrastructure

Continuous integration and delivery platforms that run on your own hardware.

TL;DR

Woodpecker CI if you run Gitea/Forgejo and want lightweight pipelines. Gitea Actions if you want GitHub Actions compatibility without GitHub. Jenkins if you need 1,000 plugins and don't mind the RAM. Self-hosted GitHub runners if your code already lives on GitHub.

FeatureWoodpecker CIGitea ActionsGitHub Actions (self-hosted)DroneJenkins
Config Format YAML pipeline with steps, services, and matrix definitionsGitHub Actions YAML -- reuses existing workflow syntax and marketplace actionsGitHub Actions YAML -- identical syntax, runs on your runner instead of GitHub-hostedYAML pipeline or Starlark (Python-like) for dynamic pipelines, Jsonnet also supportedGroovy-based Jenkinsfile (declarative or scripted), plus classic freestyle job UI
Container-Native Every step runs in a container by default, Docker or Kubernetes backendEach job runs in a container, supports Docker and LXC backends via act_runnerRuns on the host by default, container jobs require Docker on the runnerEvery step is a container, built for Docker-first workflows from the startContainers via Docker/Kubernetes plugin, but not container-native -- agents run on bare hosts by default
Resource Usage ~50-80 MB RAM for the server, agents are lightweight Go binariesBundled with Gitea -- no separate service, adds ~30-50 MB to Gitea's footprintRunner agent is ~100 MB RAM, but jobs can consume whatever the host allows~50-80 MB RAM for server, agents are lightweight, similar footprint to Woodpecker~512 MB - 1 GB+ RAM idle, Java-based, grows with plugins and build history, notoriously hungry
Forge Integration | Native integration with Gitea, Forgejo, GitHub, GitLab, Bitbucket via OAuth | Native to Gitea -- repositories, PRs, and status checks are first-class, zero config | GitHub only -- triggers from GitHub webhooks, status checks posted back to GitHub | Gitea, GitHub, GitLab, Bitbucket, Gogs -- broad forge support via OAuth | Any Git source via plugins (Git, GitHub, GitLab, Bitbucket), webhook-triggered or polled
Secrets Management | Per-repo and global secrets in UI, encrypted at rest, masked in logs | Repository and organization secrets in Gitea UI, compatible with GitHub Actions secrets syntax | GitHub-managed secrets (repo/org/environment level), encrypted, passed to runner at job time | Per-repo and global secrets, encrypted in DB, plugins for Vault and AWS Secrets Manager | Credentials plugin, HashiCorp Vault integration, folder-level credentials, extensive but complex
Matrix Builds | Built-in matrix with YAML syntax, fan-out across variable combinations | Full matrix strategy support -- identical syntax to GitHub Actions | Full matrix strategy support -- native GitHub Actions feature | Matrix via Starlark or Jsonnet for dynamic pipeline generation, YAML does not support native matrix | Matrix builds via Declarative Pipeline axis/axes directive, or scripted loop in Groovy
Caching | Volume-based caching between steps, S3-compatible cache plugin available | actions/cache compatible, configurable cache storage backend in act_runner | actions/cache works natively, caches stored on the runner host or in GitHub's cache service | Volume mounts for caching, S3 cache plugin, tmpfs for ephemeral caching | Stash/unstash for artifacts, workspace caching via plugins, or shared NFS mounts across agents
Plugin Ecosystem | Growing library of container-based plugins (Slack, S3, Docker, Helm), compatible with Drone plugins | Full GitHub Actions marketplace compatibility -- thousands of existing actions work out of the box | Full GitHub Actions marketplace -- every existing action and reusable workflow works | Large plugin library (200+), plugins are just Docker images with entrypoint conventions | 1,800+ plugins in the Jenkins plugin index, covers every tool and service imaginable, quality varies wildly
Self-Hosted Ease | Single binary or Docker container, SQLite or Postgres backend, 5-minute setup | Comes with Gitea -- enable in app.ini, register a runner, done | Install the runner binary on your host, register with a token, configure as a systemd service | Single binary or Docker, SQLite or Postgres, straightforward but Drone Cloud is discontinued | Docker or WAR file, requires Java, plugin installation through web UI, initial setup takes 30+ minutes
Best For | Gitea/Forgejo homelabs wanting lightweight, container-native CI with minimal overhead | Gitea users who want GitHub Actions compatibility without leaving the Gitea ecosystem | Teams with code on GitHub who want to run CI on their own hardware for speed or cost savings | Existing Drone users or those wanting a minimal container-native CI with broad forge support | Complex enterprise pipelines, shops that need 1,000+ plugins, legacy Java projects with Groovy expertise
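
The GitHub Actions compatibility in the Actions-based columns is literal: the same workflow file, stored under .gitea/workflows/ or .github/workflows/, runs its matrix and secrets syntax on Gitea's act_runner or a self-hosted GitHub runner. A minimal sketch (the Go versions and the API_TOKEN secret are illustrative placeholders):

```yaml
# .gitea/workflows/ci.yml -- illustrative sketch, not a drop-in config
name: ci
on: [push]
jobs:
  test:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        go-version: ["1.21", "1.22"]   # fan-out: one job per version
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-go@v5
        with:
          go-version: ${{ matrix.go-version }}
      - run: go test ./...
        env:
          API_TOKEN: ${{ secrets.API_TOKEN }}   # repo secret, masked in logs
```

Woodpecker expresses the same pipeline in its own YAML dialect, while Drone reaches equivalent fan-out through Starlark or Jsonnet rather than a native matrix key.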

Database for Self-Hosted Apps

storage

Choosing the right database backend for your homelab services.

TL;DR

PostgreSQL for anything that matters. SQLite for single-user apps where simplicity wins. MariaDB if the app requires MySQL compatibility and nothing else.
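
One pattern worth making concrete for the first recommendation: a single shared Postgres container that several apps point at, each with its own database. A minimal compose sketch (image tag, password, and volume name are placeholders):

```yaml
# docker-compose.yml -- illustrative only; use a real secret for the password
services:
  postgres:
    image: postgres:16-alpine
    restart: unless-stopped
    environment:
      POSTGRES_PASSWORD: change-me
    volumes:
      - pgdata:/var/lib/postgresql/data
volumes:
  pgdata:
```

Create one database per service (e.g. `docker exec postgres createdb -U postgres gitea`) and point each app's connection string at the same host.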

Feature | PostgreSQL | MariaDB | SQLite
Concurrent Writers | MVCC-based, handles hundreds of concurrent writers without locking, row-level locks | InnoDB uses row-level locking, solid for multi-user workloads, table-level locks on MyISAM only | Single writer at a time -- WAL mode allows concurrent reads with one writer, fine for low-traffic self-hosted apps
JSON Support | Native JSONB type with GIN indexing, path queries, partial updates -- treat Postgres as a document store when needed | JSON type with JSON_TABLE and path extraction functions, less mature than Postgres JSONB but usable | json_extract() and json_each() functions, no native JSON indexing (expression indexes over json_extract() are possible), adequate for config blobs
Full-Text Search | Built-in tsvector/tsquery with ranking, language-aware stemming, GIN/GiST indexes, no extension needed | Built-in FULLTEXT indexes on InnoDB (MySQL 5.6+), boolean and natural language modes | FTS5 extension built into most distributions, fast for small-to-medium datasets, no stemming config
Replication | Streaming replication (async/sync), logical replication for selective table sync, pglogical for cross-version | MariaDB replication (async, semi-sync, Galera for multi-primary), binlog-based, mature and well-documented | No built-in replication -- the database is a single file, replicate by copying the file or using Litestream for WAL shipping
Backup Tooling | pg_dump for logical, pg_basebackup for physical, pgBackRest for incremental/parallel/S3, WAL archiving for PITR | mariadb-dump for logical, mariabackup (Percona XtraBackup fork) for hot physical backups, binlog for PITR | cp the .db file (with WAL checkpoint first), sqlite3 .backup command for online backup, Litestream for continuous replication to S3
RAM Usage (idle) | ~30-50 MB with default config, shared_buffers defaults to 128 MB (tunable), connection pooling (PgBouncer) recommended for many apps | ~80-150 MB with default InnoDB buffer pool (128 MB default), tunable, lighter than Postgres at idle with small buffer pool | Zero server process -- it's a library linked into the application, no idle RAM cost beyond the app itself
Max Practical Size | Multi-TB databases in production, partitioning for tables above 100 GB, tested at petabyte scale in enterprise | Multi-TB is common, partitioning available, InnoDB handles large tables well with proper indexing | Practical limit around 1 TB (hard limit is 281 TB), performance degrades with many concurrent writers above ~100 GB
Extension Ecosystem | pgvector for AI embeddings, PostGIS for geospatial, pg_cron for scheduling, TimescaleDB for time series, pg_stat_statements for query analysis -- the extension ecosystem is unmatched | Limited extensions compared to Postgres, connect/spider for federation, ColumnStore for analytics, Galera for clustering | Minimal -- FTS5, R-Tree, JSON1 are built-in, no third-party extension ecosystem comparable to Postgres
Docker Image Size | ~80 MB (postgres:16-alpine), well-optimized, ships the contrib extensions (PostGIS, pgvector, and friends need dedicated images) | ~120 MB (mariadb:11), heavier due to bundled tools and libraries | No image needed -- embedded in the application, zero additional containers
Used By | Immich, Authentik, Nextcloud, Grafana, Gitea, Miniflux, Tandoor, Paperless-ngx -- the default for serious self-hosted apps | Nextcloud (default), WordPress, Matomo, BookStack, Firefly III -- common in PHP-based self-hosted apps | Vaultwarden, Actual Budget, Calibre-web, Pi-hole (FTL), Home Assistant, Mealie -- dominant in single-user apps
Tuning Complexity | Moderate -- shared_buffers, work_mem, effective_cache_size matter, PGTune generates configs based on RAM/CPU | Moderate -- innodb_buffer_pool_size is the main knob, mysqltuner.pl helps, less tuning surface than Postgres | Near zero -- PRAGMA journal_mode=WAL and PRAGMA busy_timeout are the two things you might set, otherwise it just works
Best For | Primary database for multi-service homelabs -- run one Postgres instance and point Immich, Authentik, Gitea, and Grafana at it | Legacy PHP apps that require MySQL compatibility, WordPress hosting, environments migrating from MySQL | Single-user self-hosted apps where zero-ops matters -- Vaultwarden, Actual, anything that ships with SQLite by default
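
The two PRAGMAs in the tuning row are all most apps need. A quick sketch of what they do, using Python's stdlib sqlite3 (file path is a throwaway temp file):

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "app.db")
con = sqlite3.connect(path)

# WAL lets readers proceed while a single writer is active
con.execute("PRAGMA journal_mode=WAL")
# wait up to 5 s for a lock instead of failing immediately with "database is locked"
con.execute("PRAGMA busy_timeout=5000")

mode = con.execute("PRAGMA journal_mode").fetchone()[0]
print(mode)  # wal
```

WAL is persistent (stored in the database file), while busy_timeout is per-connection, so apps typically set it on every connect.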

NAS OS / Storage Platform

storage

Dedicated operating systems and platforms for network-attached storage.

TL;DR

TrueNAS Scale if data integrity is non-negotiable and you have ECC RAM. Unraid if you want to mix drive sizes and add disks one at a time. OpenMediaVault for a lean Debian-based NAS on minimal hardware. Synology if you want appliance-level reliability and don't mind the vendor lock-in.

Feature | TrueNAS Scale | Unraid | OpenMediaVault | Synology DSM
Filesystem | ZFS exclusively -- pools, vdevs, datasets, ARC cache, snapshots, scrubbing, the whole stack | XFS for individual array drives, Btrfs or XFS for cache pool, parity at the array level (not filesystem) | Your choice -- ext4, Btrfs, XFS, or ZFS via plugin, filesystem is independent of OMV itself | Btrfs on newer models (checksum, snapshots, replication), ext4 on older/value units
Docker/Container Support | Built-in Docker and Kubernetes (K3s) via Apps system, Helm charts in TrueCharts catalog | Docker via Dockerman built-in, Community Apps plugin for one-click installs -- the Unraid app ecosystem is massive | Docker via the compose plugin or Portainer, OMV-extras repo adds Docker setup in one click | Docker via Container Manager (DSM 7.2+), previously Docker package, limited to Synology's UI wrapper
VM Support | KVM-based VMs through the web UI, PCI passthrough supported, not the primary focus | KVM/QEMU VMs with GPU passthrough, libvirt backend, well-documented for Plex GPU transcoding | KVM via libvirt plugin, functional but not a first-class feature, better to run OMV under Proxmox | Virtual Machine Manager with KVM, USB and limited PCIe passthrough, decent for light VM use
Expandability (add single drives) | No single-drive expansion of existing vdevs historically -- add entire new vdevs to the pool, or replace every drive in a vdev with larger ones (OpenZFS raidz expansion is starting to lift this limit in recent releases) | Add a single drive to the array anytime, parity rebuilds automatically, mix any size -- the defining Unraid feature | Add drives and mount them independently or create new arrays, mdadm RAID or mergerfs for pooling | Insert a drive into an open bay, Synology Hybrid RAID expands automatically -- appliance-level ease
Data Protection | ZFS checksums, scrubbing, mirror/raidz/raidz2/raidz3, self-healing from redundancy, send/recv replication | Single or dual parity protection, real-time for parity drives only, no checksumming on data drives by default | Depends on filesystem and RAID choice -- mdadm RAID, SnapRAID plugin, or Btrfs RAID1/10 | Btrfs checksums and scrubbing on supported models, RAID via SHR (Synology Hybrid RAID), auto-rebuilds
App Ecosystem | TrueCharts (community Helm charts, 500+ apps), official iXsystems catalog, growing but sometimes unstable | Community Applications plugin -- 500+ one-click Docker apps, templates maintained by the community, extremely active | OMV-extras repo, Docker with Portainer for any container, smaller curated plugin set but full Docker access | Synology Package Center -- curated, stable, limited selection, Docker for anything not in the catalog
Web UI Quality | Modern Angular-based UI, comprehensive but dense, storage management is excellent, app management is evolving | Clean, functional UI, array/Docker/VM management in one place, real-time dashboard with per-disk stats | Functional Debian-admin UI, not as polished as competitors, gets the job done without frills | Best-in-class UI -- polished, responsive, consistent, DSM feels like a desktop OS in the browser
Community / Docs | Large community (forums, Reddit), official iXsystems documentation, TrueCharts Discord for app support | Massive community forums, active subreddit, SpaceInvaderOne YouTube tutorials, extensive wiki | Active forums and subreddit, documentation is adequate but less comprehensive than TrueNAS or Unraid | Huge user base, Synology knowledge base is excellent, community forums are active but less homelab-focused
Hardware Requirements | 8 GB RAM minimum (16+ recommended for ZFS), ECC RAM strongly recommended, x86_64 only | 2 GB RAM minimum (8+ recommended with Docker), any x86_64 hardware, runs well on older gear | 1 GB RAM minimum (2+ recommended), runs on Raspberry Pi, old laptops, anything Debian supports | Synology hardware only -- locked to their NAS units, can't install on custom hardware (XPEnology exists but unsupported)
Licensing / Cost | Free and open source, no feature restrictions, optional iXsystems support contracts | $59/$89/$129 one-time license (Basic/Plus/Pro -- tiers have since been restructured), no subscription, Pro required for >12 drives and multiple arrays | Free and open source, no restrictions, community-maintained | Hardware purchase includes DSM license, no recurring software cost, but hardware is proprietary and premium-priced
Best For | Dedicated NAS/SAN where ZFS data integrity is the priority, ECC-equipped builds at 10.42.0.x serving iSCSI or NFS to Proxmox | Mixed-drive NAS with Docker apps, media servers (Plex/Jellyfin), flexible homelabs where drives accumulate over time | Lean NAS on minimal hardware, Raspberry Pi NAS builds, Debian users who want a web UI on top of standard Linux tools | Set-and-forget NAS appliance, users who value stability and polish over customization, small business file sharing
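
The ZFS vocabulary in the TrueNAS column maps to a handful of commands that the web UI drives for you. An illustrative sequence -- pool, dataset, device, and host names are all placeholders, and these commands assume a ZFS-equipped Linux or FreeBSD host:

```shell
zpool create tank mirror /dev/sda /dev/sdb   # two-disk mirror vdev in a pool named "tank"
zfs create -o compression=lz4 tank/media     # dataset with transparent compression
zfs snapshot tank/media@nightly              # instant, space-efficient snapshot
zpool scrub tank                             # verify every checksum, self-heal from the mirror
zfs send tank/media@nightly | ssh backup zfs recv backuppool/media   # send/recv replication
```

This is the same checksum-scrub-replicate loop the Data Protection row describes; Unraid's parity model protects against drive failure but has no equivalent of the scrub-and-heal step for data drives.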

Identity Provider / SSO

security

Single sign-on and identity management for self-hosted services.

TL;DR

Authelia if you just need MFA in front of your reverse proxy and want minimal resource usage. Authentik if you want a full identity provider with visual flow design and LDAP. Keycloak for enterprise deployments or Java shops.

Feature | Authentik | Authelia | Keycloak
Type | Full identity provider -- handles user lifecycle, authentication, and authorization from one platform | Forward authentication proxy -- sits between your reverse proxy and your apps, adds MFA and basic SSO | Full identity provider -- enterprise-grade IAM originally built by Red Hat for JBoss middleware
Protocols | OAuth2, OIDC, SAML, LDAP outpost, proxy authentication -- covers every protocol a homelab app might need | Forward auth (Traefik/Nginx/Caddy), OpenID Connect for apps that support it, no SAML, no LDAP | OAuth2, OIDC, SAML 2.0, LDAP/Kerberos federation, UMA for fine-grained authorization -- the broadest protocol support
Resource Usage | ~2 GB RAM total across server + worker + PostgreSQL + Redis -- not lightweight | ~30-50 MB RAM for the single binary, optional Redis for session storage -- remarkably lightweight | ~1-1.5 GB RAM for the Keycloak server + PostgreSQL -- heavier than Authentik on a per-component basis
Configuration | Web UI for everything (flows, providers, policies), YAML/Terraform for infrastructure-as-code | YAML files only -- configuration.yml and users_database.yml, no web UI for config management | Full web admin console with realm/client/role management, REST API, realm import/export as JSON
User Directory | Built-in user directory with groups, attributes, and profile management, LDAP outpost to expose users to legacy apps | No built-in user directory -- requires external LDAP, file-based users, or OIDC delegation to another provider | Built-in user federation with LDAP/AD sync, custom user attributes, self-service account management
Flow Designer | Visual flow designer in the web UI -- drag-and-drop authentication stages (MFA, consent, enrollment, recovery) | No flow designer -- authentication flow is defined in YAML configuration with access control rules | Authentication flows configurable in the admin console, but no visual drag-and-drop -- forms-based flow editor
MFA Methods | TOTP, WebAuthn/FIDO2, Duo push, SMS (via provider), static recovery codes | TOTP, WebAuthn/FIDO2, and Duo push | TOTP, WebAuthn/FIDO2, OTP via email/SMS (requires SPI), recovery codes, conditional per-client MFA
Traefik Integration | ForwardAuth middleware pointing at the Authentik outpost, plus native OIDC for apps that support it | ForwardAuth middleware -- Authelia was built for this exact pattern, first-class Traefik support | Proxy headers (X-Forwarded-User) via oauth2-proxy (Keycloak Gatekeeper is deprecated), less turnkey than Authelia/Authentik
Docker Complexity | 3-4 containers: server, worker, PostgreSQL, Redis -- docker-compose with 4 services minimum | 1 container + optional Redis for HA session storage -- the simplest deployment of the three | 1-2 containers: Keycloak server + PostgreSQL (or embedded H2 for testing) -- moderate complexity
Best For | Full-featured homelab IdP -- SSO for every app, LDAP for legacy services, visual flow customization, user self-service | MFA gateway in front of Traefik/Nginx with minimal resources -- protect 20 apps with a single container and a YAML file | Enterprise environments, Java shops, Red Hat ecosystem, organizations needing SAML federation with external partners
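
The ForwardAuth pattern in the Traefik row looks like this in practice: Traefik asks Authelia for a verdict on every request before proxying it. A docker-compose sketch -- hostnames and the whoami service are placeholders, and the exact authz endpoint path varies by Authelia version, so check your release's docs:

```yaml
# docker-compose.yml fragment -- illustrative, assumes Traefik's Docker provider is enabled
services:
  authelia:
    image: authelia/authelia
    volumes:
      - ./authelia:/config   # configuration.yml + users_database.yml live here
    labels:
      - traefik.http.middlewares.authelia.forwardauth.address=http://authelia:9091/api/authz/forward-auth
      - traefik.http.middlewares.authelia.forwardauth.trustForwardHeader=true

  whoami:
    image: traefik/whoami
    labels:
      - traefik.http.routers.whoami.rule=Host(`whoami.example.lan`)
      - traefik.http.routers.whoami.middlewares=authelia   # every request passes Authelia first
```

Protecting another app is one label (`…middlewares=authelia`), which is why a single Authelia container can front dozens of services.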

Password Manager

security

Self-hosted password management for individuals and teams.

TL;DR

Vaultwarden for families and teams who want the full Bitwarden experience self-hosted. KeePassXC if you trust yourself with a local encrypted file and want zero server dependencies. Passbolt for teams that need audit trails and group-based sharing.

Feature | Vaultwarden | KeePassXC | Passbolt
Architecture | Server (Rust) + Bitwarden clients (browser, desktop, mobile) -- centralized vault synced to all devices | Local encrypted database file (KDBX format) -- no server, no sync, the database is just a file on disk | Server (PHP/CakePHP) + browser extension + mobile apps -- centralized, team-oriented vault
Self-Hosted | Single Docker container, SQLite backend (or MySQL/Postgres), runs on a Raspberry Pi at 10.42.0.x with ~50 MB RAM | Not applicable -- there's no server, the KDBX file lives wherever you put it (Syncthing, NFS share, USB drive) | Docker stack with PHP server + PostgreSQL + email service, heavier deployment (~512 MB RAM)
Browser Extension | Official Bitwarden browser extension -- identical to the SaaS Bitwarden, point it at your Vaultwarden URL | KeePassXC-Browser extension communicates with the desktop app over a local socket, no server needed | Dedicated Passbolt browser extension -- required for adding and sharing passwords, tightly integrated
Mobile App | Official Bitwarden apps for iOS and Android -- autofill, biometric unlock, vault search, full-featured | KeePassDX (Android) or Strongbox (iOS) -- open-source apps that read KDBX files, autofill supported | Passbolt mobile app for iOS and Android -- vault access, sharing, autofill, requires server connection
Sharing | Organizations with collections -- invite family members or team members, granular per-collection access, free for unlimited users on Vaultwarden | Share the KDBX file via Syncthing, NFS, or a shared drive -- concurrent edits risk conflicts, no granular per-entry sharing | Teams and groups with role-based sharing -- share individual passwords or folders, audit log tracks who accessed what
Emergency Access | Built-in emergency access -- designate trusted contacts who can request access after a waiting period | No built-in mechanism -- store the master password in a sealed envelope, or share the KDBX file with a trusted person | No built-in emergency access -- admin can reset user accounts but no self-service emergency flow
2FA | TOTP and WebAuthn/FIDO2 for vault login, stores TOTP tokens for other services (Authenticator feature) | YubiKey challenge-response for database unlock, key file as a second factor, no TOTP for the database itself | MFA for account login (TOTP, YubiKey), server-enforced MFA policies for teams
Offline Access | Cached vault on each client device -- works offline with last-synced data, syncs when back online | Always offline by design -- the KDBX file is the vault, no internet needed ever | No offline access -- requires connection to the Passbolt server for all operations
Resource Usage | ~50 MB RAM, negligible CPU -- one of the lightest self-hosted services you can run | Zero server resources -- runs as a desktop app, consumes ~50-80 MB RAM on the client | ~512 MB RAM for the server stack, PostgreSQL adds its own footprint, heavier than Vaultwarden by 10x
Backup | sqlite3 database file (or pg_dump if using Postgres), attachments directory, RSA keys -- one cron job to back up | Copy the .kdbx file -- that's the entire backup, store it on multiple drives or in an encrypted cloud backup | PostgreSQL dump + GPG server keys + email config -- more components to back up, document the procedure
Best For | Families and homelab operators who want Bitwarden's full UX (autofill, mobile, sharing) on their own hardware | Solo users and privacy maximalists who want an encrypted file with no server, no cloud, no attack surface | Teams and small orgs that need password sharing with audit trails, compliance requirements, and group policies
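
The Vaultwarden backup row deserves one caveat: a plain cp of a live SQLite file can produce a torn copy while the server is writing. SQLite's online backup API takes a consistent snapshot instead. A self-contained sketch using Python's stdlib -- the table and paths here stand in for a real Vaultwarden db.sqlite3:

```python
import os
import sqlite3
import tempfile

tmp = tempfile.mkdtemp()
live = os.path.join(tmp, "db.sqlite3")      # stand-in for the service's live database
snap = os.path.join(tmp, "backup.sqlite3")  # where the nightly copy lands

src = sqlite3.connect(live)
src.execute("PRAGMA journal_mode=WAL")
src.execute("CREATE TABLE users (name TEXT)")
src.execute("INSERT INTO users VALUES ('alice')")
src.commit()

# Online backup: a consistent copy even if other writers are active on src
dst = sqlite3.connect(snap)
with dst:
    src.backup(dst)

rows = dst.execute("SELECT name FROM users").fetchall()
print(rows)  # [('alice',)]
```

The `sqlite3 db.sqlite3 ".backup backup.sqlite3"` CLI command mentioned in the table does the same thing; either way, remember the attachments directory and RSA keys are separate files that the database backup does not cover.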