user@argobox:~/journal/2026-01-25-the-tunnel-with-two-heads
$ cat entry.md

The Tunnel With Two Heads


Date: 2026-01-25
Issue: Subdomains randomly returning 404
Root Cause: Two cloudflared instances with different configs on the same tunnel
Lesson: Check how many heads your tunnel actually has


The Symptom

I tried to git push to gitea.starnet.io. Got a 404.

Refreshed the browser. Page loaded fine.

Pushed again. 404.

Refreshed. Working.

This is the worst kind of bug — intermittent, seemingly random, and impossible to reproduce on demand.


The Investigation

I started testing systematically:

for i in {1..10}; do
  curl -s -o /dev/null -w "%{http_code}\n" "https://gitea.starnet.io"
  sleep 1
done
200
404
200
200
404
200
404
404
200
200

About 40% 404s. Not a timeout issue — the 404s came back instantly. Something was actively returning them.


The Discovery

I have Cloudflare Tunnels connecting my self-hosted services to the internet. One tunnel, running on Alpha-Centauri (10.42.0.199), handles all the routing:

  • gitea.starnet.io → localhost:3000
  • chat.starnet.io → localhost:30000
  • blog.starnet.io → localhost:31033
  • etc.

But when I checked the Cloudflare dashboard, I saw something odd:

Active Connectors: 2

Two? I only have one cloudflared running. Right?

Wrong.

# On Mirach-Maia-Silo (Unraid NAS)
docker ps | grep cloudflared
# cloudflared-masaimara   Up 3 weeks

There it is. A Docker container on my NAS, also running cloudflared. Also connected to the same tunnel.
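
You can confirm this from the CLI too. Here’s a quick sketch using cloudflared tunnel info, which lists a tunnel’s active connectors (tunnel ID elided, as elsewhere in this post):

# List the active connectors for the tunnel; each row is a separate
# cloudflared instance holding its own connections to Cloudflare's edge
cloudflared tunnel info 907e341c-...

# Or survey every tunnel and its connections at once
cloudflared tunnel list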


The Split-Brain

Here’s what was happening:

Machine            Config           Hostnames
Alpha-Centauri     Full config      gitea, chat, blog, ai, files, etc.
Mirach-Maia-Silo   Minimal config   Only argonaut, bogart

Both were connected to the same tunnel. Cloudflare was load-balancing requests between them.

  • Request hits Alpha-Centauri → Works (has gitea config)
  • Request hits Mirach-Maia-Silo → 404 (doesn’t know about gitea)

About 40% of my sample went to the NAS; with two connectors you’d expect roughly half over a longer run. Those all 404’d.


The Cause

I set up the NAS cloudflared months ago for a specific purpose — accessing the argonaut and bogart services. It had its own config with just those two hostnames.

But I used the same tunnel ID. Because tunnels are just identifiers, right? Why create another one?

Turns out that’s not how Cloudflare thinks about it. Every cloudflared instance connected to a tunnel is an “active connector.” Cloudflare distributes traffic across all of them.

If your connectors have different configs, some requests will land on a connector that doesn’t know about the requested hostname.

Split-brain.
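
For reference, the NAS’s old config would have looked roughly like this. It’s a reconstruction, not the actual file, and I’m assuming the same local ports as the after-fix config at the bottom of this post. Anything that doesn’t match the two hostnames falls straight through to the catch-all, which is why the 404s came back instantly:

tunnel: 907e341c-...   # same tunnel ID as Alpha-Centauri: the mistake
credentials-file: /etc/cloudflared/907e341c-....json
ingress:
  - hostname: argonaut.starnet.io
    service: http://127.0.0.1:8080
  - hostname: bogart.starnet.io
    service: http://127.0.0.1:8081
  # no rules for gitea, chat, blog, etc., so those requests hit:
  - service: http_status:404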


The Fix

Immediate: Stop the NAS cloudflared

ssh [email protected] "docker stop cloudflared-masaimara"

Instantly, all hostnames started working consistently.
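
A condensed rerun of the earlier probe loop makes a decent sanity check; with one connector left, every request lands on Alpha-Centauri:

# same probe as before, on one line; expect a column of 200s
for i in {1..10}; do curl -s -o /dev/null -w "%{http_code}\n" https://gitea.starnet.io; done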

Permanent: Separate tunnels

The right fix is giving each location its own tunnel:

  1. Create new tunnel masaimara-tunnel in Cloudflare dashboard
  2. Get new token for NAS
  3. Configure NAS cloudflared with new tunnel ID
  4. Only route argonaut and bogart through that tunnel
  5. Keep main tunnel for Alpha-Centauri only

Now each tunnel has one connector, and there’s no ambiguity about which hostnames go where.
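
For steps 2 and 3, the container swap on the NAS is a one-liner against the official image (token redacted; --no-autoupdate is what Cloudflare’s own Docker snippet uses). If the new tunnel is locally managed rather than dashboard-managed, step 4 can be done from the CLI as well:

# Steps 2-3: run the NAS connector against the NEW tunnel's token
docker run -d --name cloudflared-masaimara \
  cloudflare/cloudflared:latest tunnel --no-autoupdate run --token <new-token>

# Step 4, CLI variant: route only these hostnames to the new tunnel
cloudflared tunnel route dns masaimara-tunnel argonaut.starnet.io
cloudflared tunnel route dns masaimara-tunnel bogart.starnet.io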


Another Issue (While I Was At It)

The Alpha-Centauri cloudflared was also ignoring its config file. When you run:

cloudflared tunnel run 907e341c-...

Without --config, it prefers dashboard-managed configuration over local config files. I’d added Access authentication via the Zero Trust dashboard, and cloudflared decided the dashboard was the source of truth.

Fixed by explicitly specifying the config:

# /etc/systemd/system/cloudflared.service
ExecStart=/usr/bin/cloudflared tunnel --config /home/argonaut/.cloudflared/config.yml run
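
After editing the unit, the usual systemd reload applies it, and the journal tail shows what the daemon loaded at startup:

sudo systemctl daemon-reload
sudo systemctl restart cloudflared
journalctl -u cloudflared -n 20   # check the startup lines for the config path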

What I Learned

  1. One tunnel, one connector. If you need multiple cloudflared instances, give each its own tunnel.

  2. Cloudflare load-balances across connectors. This is great for high availability, terrible for split-brain configs.

  3. Always specify --config. Without it, cloudflared may prefer dashboard configuration over local files.

  4. Check the connector count. If you see more connectors than you expect, you have a mystery cloudflared running somewhere (a watchdog sketch follows this list).

  5. Intermittent failures are often load balancer issues. When something works “sometimes,” think about what’s making the routing decision.
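
On point 4: here’s a watchdog I could cron. The tunnel name is a placeholder, and the "connections" field in the JSON is an assumption from my cloudflared version, so check yours before trusting it. One healthy connector holds four connections to the edge, so a count above four suggests a second head:

#!/bin/sh
# Hypothetical connector-count watchdog; tunnel name and JSON field assumed
NAME="main-tunnel"   # placeholder: substitute your tunnel's actual name
conns=$(cloudflared tunnel list --output json |
  jq --arg n "$NAME" '[.[] | select(.name == $n)][0].connections | length')
if [ "$conns" -gt 4 ]; then
  echo "WARNING: $NAME has $conns edge connections; more than one connector?"
fi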


The Irony

I set up the NAS cloudflared because I wanted redundancy. “What if Alpha-Centauri goes down? At least argonaut and bogart will still work!”

Instead, I created a system where 40% of all requests failed randomly.

The road to 404 is paved with good intentions.


Config Reference

Alpha-Centauri (Main Tunnel)

tunnel: 907e341c-...
credentials-file: /home/argonaut/.cloudflared/...json
ingress:
  - hostname: gitea.starnet.io
    service: http://localhost:3000
  - hostname: chat.starnet.io
    service: http://localhost:30000
  - hostname: blog.starnet.io
    service: http://10.42.0.199:31033
  # ... all other hostnames
  - service: http_status:404  # Catch-all

Mirach-Maia-Silo (Separate Tunnel — After Fix)

tunnel: [new-tunnel-id]
credentials-file: /etc/cloudflared/[new-id].json
ingress:
  - hostname: argonaut.starnet.io
    service: http://127.0.0.1:8080
  - hostname: bogart.starnet.io
    service: http://127.0.0.1:8081
  - service: http_status:404

When your tunnel has two heads, one of them is lying.