Tailscale for Homelabs: Remote Access Without Port Forwarding

If remote access in your homelab feels messier than it should, there's a good chance you're asking one tool to do three different jobs.

That was my problem for years.

I kept trying to solve all of this with the same stack:

  1. helping family members with tech support
  2. getting back into my own servers, NAS boxes, and dashboards from anywhere
  3. exposing a few web apps safely to the public internet

Those are not the same problem.

Once I split them up, the whole setup got simpler:

Job                                           Best fit in my lab    Why
Help a person with their computer             RustDesk              Remote desktop is the point
Reach private infrastructure from anywhere    Tailscale             Private network access without port forwarding
Publish a web app safely                      Cloudflare Tunnel     Public HTTPS without opening inbound ports

The biggest win was Tailscale for the second job. It replaced the old mess of dynamic DNS, exposed SSH ports, and half-working VPN configs with something I can actually trust and maintain.

Why Tailscale finally stuck

I had already tried the usual homelab path:

  • port forwarding with dynamic DNS
  • SSH exposed to the internet
  • WireGuard with manual peer management
  • a lot of hoping nothing broke while I was away from home

All of that works right up until it doesn't.

Port forwarding creates permanent attack surface. WireGuard is excellent technology, but the operational overhead gets annoying fast when you have multiple devices, multiple sites, and one of those sites eventually lands behind CGNAT or some other hostile network edge.

Tailscale won for me because it kept the good part of WireGuard and removed most of the manual labor:

  • no port forwarding
  • no babysitting public IP changes
  • easy phone and laptop access
  • subnet routing for devices that cannot run Tailscale themselves
  • tags and ACLs so I can decide who is allowed to route what

The key mental model is this:

Tailscale is not just "a VPN app on your phone." In a homelab, it works best as a private overlay network with a few intentional subnet routers.

The architecture that actually worked

My stable layout ended up looking like this:

  • primary site LAN: 10.0.0.0/24
  • primary-site subnet router: 10.0.0.199
  • remote site LAN: 192.168.50.0/24
  • remote-site subnet router: 192.168.50.100

Admin devices like my laptop and phone run Tailscale directly. Infrastructure that can't run Tailscale, like NAS boxes, printers, and some containers, is reached through the subnet routers.

That means I can sit at a coffee shop and still reach private addresses like:

  • 10.0.0.199 for the local gateway/router box
  • 192.168.50.8 for a NAS on the remote site
  • 192.168.50.201 for a container behind the remote Proxmox host

without opening a single inbound port on the home firewall.
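
A quick sanity check from the laptop side is to ask the kernel which interface it would use for one of those addresses (a sketch; 192.168.50.8 is the remote NAS from the layout above):

ip route get 192.168.50.8
# expected: dev tailscale0, resolved through Tailscale's policy routing (table 52)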

The basic subnet-router setup

On a designated subnet router, I enable IP forwarding and advertise only the subnet that machine is responsible for.
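
Enabling forwarding persistently looks like this (the file name under /etc/sysctl.d is my own choice; the two sysctl keys are the standard Linux ones):

echo 'net.ipv4.ip_forward = 1' | sudo tee -a /etc/sysctl.d/99-tailscale.conf
echo 'net.ipv6.conf.all.forwarding = 1' | sudo tee -a /etc/sysctl.d/99-tailscale.conf
sudo sysctl -p /etc/sysctl.d/99-tailscale.conf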

Example for the primary site:

sudo tailscale up \
  --advertise-routes=10.0.0.0/24 \
  --accept-routes \
  --advertise-tags=tag:subnet-router

Example for the remote site:

sudo tailscale up \
  --advertise-routes=192.168.50.0/24 \
  --accept-routes \
  --advertise-tags=tag:subnet-router

Then in Tailscale ACLs, I use tags and auto-approvers so route approvals don't turn into a 2 AM dashboard chore:

{
  "tagOwners": {
    "tag:subnet-router": ["autogroup:admin"]
  },
  "autoApprovers": {
    "routes": {
      "10.0.0.0/24": ["tag:subnet-router"],
      "192.168.50.0/24": ["tag:subnet-router"]
    }
  }
}

That gives me a predictable rule:

  • routers advertise routes
  • admin devices consume routes
  • random hosts do neither unless I have a very specific reason

The first rule: don't solve remote desktop and network access with the same tool

This is the part I wish I'd understood earlier.

If I need to help someone fix a printer, browse their desktop, or change a setting on a family PC, I want RustDesk. That's a remote-desktop problem.

If I need to SSH into my own boxes, hit dashboards, or reach a private NAS IP from another location, I want Tailscale. That's a private-network problem.

If I want a browser-based service available publicly behind login and policy, I want Cloudflare Tunnel. That's a publishing problem.

Could you force one tool to do all three? Kind of.

Should you? Not unless you enjoy unnecessary troubleshooting.

The mistake that broke an entire LAN

One of the nastiest failures in my own setup came from accepting routes on the wrong machine.

A Proxmox host on 10.0.0.200 became reachable only from some peers, not others. ARP worked. The host could talk to the gateway. But traffic to another local machine on 10.0.0.100 failed.

The clue came from checking policy routing:

ip rule show
ip route show table 52
tailscale debug prefs

What I found was brutal:

  • RouteAll: true
  • a Tailscale route for 10.0.0.0/24 had been injected into table 52
  • the hypervisor was trying to send local LAN traffic through tailscale0

In other words, the machine had started treating its own local subnet like a remote Tailscale route.

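The immediate fix was deleting the injected route from table 52 (a sketch; the exact prefix and device come straight from the ip route show table 52 output):

sudo ip route del 10.0.0.0/24 dev tailscale0 table 52
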
The permanent fix was even simpler:

tailscale set --accept-routes=false

That host was not supposed to consume subnet routes from other peers. It lived on the subnet already.

This is the most important warning in the whole article:

If a box is a hypervisor, bridge host, gateway, or other piece of critical network plumbing, do not casually enable --accept-routes just because it sounds convenient.

Only let a machine accept routes if you actually want it to use remote subnets through Tailscale.

The second rule: advertise the narrowest subnet that matches reality

Another easy way to hurt yourself is advertising a subnet that is way too broad.

Bad:

tailscale up --advertise-routes=10.0.0.0/8

Good:

tailscale up --advertise-routes=10.0.0.0/24

Advertising a broad network when you only need a single /24 is how you accidentally attract traffic that should have stayed local. If the goal is reaching one site LAN, advertise that site LAN precisely.

This matters even more in homelabs because we reuse RFC1918 space everywhere. 10.0.0.0/8 is not a useful promise. It's a routing hazard.
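
Before approving a route, I like to confirm what a node is actually offering. tailscale debug prefs (the same command from the broken-LAN story above) prints the live preferences:

# the node should offer exactly one /24; on critical infrastructure, RouteAll should be false
tailscale debug prefs | grep -E -A 2 'AdvertiseRoutes|RouteAll'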

The hard part: Proxmox and LXC return traffic

Tailscale subnet routing gets more interesting when the subnet router is also a Proxmox host and some of the devices behind it are LXCs.

The failure mode looks like this:

  • a remote Tailscale client reaches the Proxmox host
  • the Proxmox host forwards traffic to an LXC on the LAN bridge
  • the LXC sends the response back through its default gateway
  • the return path is not symmetrical
  • packets disappear into the void

If you've ever had a container pingable from one side but unusable end to end, this is probably why.

One working pattern from my setup used:

  1. IP forwarding
  2. reverse-path filtering disabled for the Tailscale path
  3. a policy route for traffic arriving on tailscale0
  4. source NAT so return traffic leaves with the correct source

The critical pieces looked like this:

# sysctl: relax reverse-path filtering on the tunnel and enable forwarding on both interfaces
net.ipv4.conf.tailscale0.rp_filter = 0
net.ipv4.conf.all.rp_filter = 0
net.ipv4.conf.tailscale0.forwarding = 1
net.ipv4.conf.vmbr0.forwarding = 1
# routing: pin the Tailscale range to the tunnel, and give traffic arriving on tailscale0 its own table
ip route add 100.64.0.0/10 dev tailscale0 table main
ip route add 192.168.50.0/24 dev vmbr0 src 192.168.50.100 table 52
ip rule add iif tailscale0 lookup 52 prio 100
# source NAT so return traffic leaves with the router's address; match the Tailscale
# range 100.64.0.0/10, not 100.0.0.0/8, which also covers public address space
iptables -t nat -A POSTROUTING -s 100.64.0.0/10 -o vmbr0 -j SNAT --to-source 192.168.50.100

The exact commands depend on your bridge names and subnet layout, but the lesson is universal:

When a Proxmox host is acting as both subnet router and LXC/VM gateway, you need to think about the return path, not just the forward path.

If replies do not come back through the same logical path, the setup will look almost correct and still fail.
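
One test for this (a sketch; 100.64.0.1 stands in for any remote Tailscale peer address) is asking the kernel to resolve the reply path exactly as if a packet had just arrived over the tunnel:

# simulate a packet from a Tailscale peer, arriving on tailscale0, headed for an LXC
ip route get 192.168.50.201 from 100.64.0.1 iif tailscale0
# healthy output resolves through table 52 and out vmbr0; an error here means asymmetry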

Platform-specific gotchas I hit

Gentoo/OpenRC: --netfilter-mode=off did not persist

On one Gentoo client, the stable client configuration was:

sudo tailscale up --accept-dns --accept-routes --netfilter-mode=off

The annoying part was that --netfilter-mode=off did not survive reboots the way I wanted, so I had to re-apply it from a startup script.

Ugly? Yes.

Better than wondering why a working laptop suddenly stopped behaving after a reboot? Also yes.
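
One way to wire up that re-apply step is OpenRC's /etc/local.d hook, which runs executable *.start scripts at the end of boot (a sketch; the sleep is a crude wait for tailscaled to come up):

#!/bin/sh
# /etc/local.d/tailscale.start
# chmod +x this file and enable the hook with: rc-update add local default
# re-assert the client flags that were not persisting across reboots
sleep 5
tailscale up --accept-dns --accept-routes --netfilter-mode=off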

Unraid: the boot drive is not a normal Linux filesystem

On Unraid, I also ran into a very specific problem: the boot drive is FAT32.

That means you do not get normal Unix permissions, which matters when you want Tailscale binaries and sockets to behave like they would on a normal Linux install. The working pattern there was:

  • keep persistent Tailscale state on the flash drive
  • copy the binaries into RAM at boot
  • start tailscaled from a startup script
  • add an explicit rule so local subnet traffic stays local instead of hairpinning back into Tailscale
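
The first three steps, sketched as additions to Unraid's /boot/config/go script (the paths and state-file location are my assumptions; adjust to wherever you keep the binaries on flash):

# copy the binaries off the FAT32 flash drive into RAM so they can be executable
cp /boot/config/tailscale/tailscaled /boot/config/tailscale/tailscale /usr/local/sbin/
chmod +x /usr/local/sbin/tailscaled /usr/local/sbin/tailscale
# keep persistent state on flash so the node identity survives reboots
/usr/local/sbin/tailscaled --state=/boot/config/tailscale/tailscaled.state &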

That extra rule looked like this:

ip rule add to 192.168.20.0/24 lookup main priority 5200

If you ever find two hosts on the same switch taking the scenic route through your mesh network, look for exactly this kind of policy-routing mistake.

My practical client setup

For laptops and admin devices that should consume the remote routes, the setup is intentionally boring:

sudo tailscale up --accept-routes --accept-dns

That is enough for most clients.

What I do not do on clients:

  • I do not enable IP forwarding
  • I do not advertise routes
  • I do not turn every node into a subnet router just because I can

Homelab networking gets more stable the moment you stop being clever everywhere.

How I verify the setup before trusting it

I do not consider remote access done until it passes four checks.

1. Local traffic stays local

On a machine already on the local LAN:

ip route get 10.0.0.199
ping 10.0.0.199

That should use the local interface, not tailscale0, and latency should look like local latency.

If I want to be sure, I watch for accidental tunnel use:

sudo tcpdump -i tailscale0 -n host 10.0.0.199

Then I ping the local host again. I should see no packets on tailscale0.

2. Remote subnet routing works

From a Tailscale client somewhere else:

ping 192.168.50.100
ping 192.168.50.8
ping 192.168.50.201

That tests the router, a LAN device, and a host behind the Proxmox bridge.

3. The critical app path works

For me, that usually means SSH and a browser test:

ssh user@192.168.50.8
curl http://192.168.50.201:32400/web

If the basic network works but the app path fails, I stop blaming Tailscale and start looking at the service itself.
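
A quick way to draw that line (netcat here, but any TCP probe works) is testing the port by itself:

# if this connects, the network path is fine and the service is the suspect
nc -vz 192.168.50.201 32400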

4. Reboots don't undo everything

A remote-access setup that only works until the next reboot is still broken.

So I reboot the subnet router, wait a couple of minutes, and verify:

tailscale status
ip rule list
ip route show table 52

If those checks fail after boot, I don't have a Tailscale problem. I have a persistence problem.
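
For hand-made rules like the Unraid one above, I keep the boot-time re-apply idempotent, so running it twice can't stack duplicate rules (a sketch; substitute your own subnet and startup hook):

# add the keep-local-traffic-local rule only if it's missing
ip rule list | grep -q 'to 192.168.20.0/24' || \
  ip rule add to 192.168.20.0/24 lookup main priority 5200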

When to use Tailscale, RustDesk, or Cloudflare Tunnel

This is the decision tree I wish someone had handed me a long time ago.

Use Tailscale when:

  • you want SSH, SMB, web UI, or API access to private infrastructure
  • you want site-to-site reachability without opening the firewall
  • you need phones and laptops to reach private IP space from anywhere
  • you need subnet routing for devices that cannot run Tailscale themselves

Use RustDesk when:

  • a human needs desktop help
  • you need mouse, keyboard, and screen-sharing, not routed IP access
  • the other side should not have to learn your network

Use Cloudflare Tunnel when:

  • the service is meant to be reached in a browser
  • you want public HTTPS without opening inbound ports
  • you want identity and policy in front of the app
  • the service is something like Grafana, code-server, or a dashboard, not general private network access

A homelab gets much easier to reason about when each tool has one job.

The version I would build again

If I were starting over today, I would do this first:

  1. install Tailscale on my laptop and phone
  2. choose one stable subnet router per site
  3. advertise only the real site subnets
  4. keep --accept-routes off critical infrastructure unless it is truly needed
  5. test local routing and remote routing separately
  6. use RustDesk for people and Cloudflare Tunnel for public web apps

That is the setup that stopped remote access from feeling like an always-on emergency.

Tailscale did not magically remove networking complexity. What it did do was move the complexity into places I can reason about: routes, tags, ACLs, and a handful of intentional subnet routers.

That is a huge improvement over "open a port, update dynamic DNS, and hope nothing weird happens while you're away from home."

And in a homelab, that kind of boring is exactly what you want.