
Proxmox Virtualization

Proxmox VE hypervisor configuration, VM and LXC inventory, and management access across all three hosts

February 23, 2026


Three Proxmox VE hypervisors span the two physical sites. Izar-Host is the primary workhorse on the Milky Way network, Arcturus-Prime serves as a secondary/reserve hypervisor on the same network, and Tarn-Host anchors the Andromeda site, running Plex services and build drones.

Hypervisor Overview

| Host | IP | Site | Tailscale | Web UI | Status |
|---|---|---|---|---|---|
| Izar-Host | 10.42.0.201 | Milky Way | 100.64.0.141.70 | https://10.42.0.201:8006 | Primary, active |
| Arcturus-Prime | 10.42.0.200 | Milky Way | (none) | https://10.42.0.200:8006 | Secondary, underutilized |
| Tarn-Host | 192.168.20.100 | Andromeda | 100.64.0.107.42 | https://192.168.20.100:8006 | Active, remote |

All three run Proxmox VE and are managed independently — they are not in a Proxmox cluster. Each host has its own storage, its own web UI, and its own set of VMs and containers. There is no shared storage, no live migration, and no HA fencing between them.

Why No Cluster

Proxmox clustering requires reliable low-latency connectivity between nodes. With Izar-Host and Arcturus-Prime on the Milky Way LAN and Tarn-Host on the Andromeda LAN (30-45ms away over Tailscale), a unified cluster would risk split-brain scenarios. The overhead of Corosync quorum across a WAN link is not worth the complexity for this workload. Each hypervisor is managed independently.
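The latency argument is easy to verify from any Milky Way host. A quick sketch (the peer name `tarn-host` is illustrative; substitute the actual Tailscale machine name or IP):

```shell
# Measure round-trip time over the Tailscale path to the remote site.
tailscale ping tarn-host

# Compare against plain ICMP through the tunnel.
ping -c 5 100.64.0.107.42
```

Corosync expects LAN-grade latency between nodes; a 30-45 ms WAN round trip is well outside its comfort zone.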

Izar-Host (Primary Hypervisor)

IP: 10.42.0.201
Tailscale: 100.64.0.141.70
Web UI: https://10.42.0.201:8006
Site: Milky Way (10.42.0.0/24)

Izar-Host is the most actively used hypervisor. It hosts the primary build drone VM and is the first choice for spinning up new workloads on the Milky Way.

VMs on Izar-Host

drone-Izar-Host (Gentoo VM)

| Property | Value |
|---|---|
| Type | Virtual Machine (KVM) |
| IP | 10.42.0.203 |
| Tailscale | 100.64.0.101.126 |
| OS | Gentoo Linux |
| CPU | 16 vCPUs |
| Role | Build swarm drone |

drone-Izar-Host is the highest-capacity build drone on the Milky Way. It runs Gentoo with the same profile and USE flags as the driver workstation (Capella-Outpost). It polls the build swarm orchestrator for packages, compiles them, and uploads the resulting binaries to staging.

Key configuration:

# /etc/portage/make.conf on drone-Izar-Host
MAKEOPTS="-j16"
FEATURES="buildpkg fail-clean -getbinpkg -binpkg-multi-instance"

The VM is configured with:

  • VirtIO network and disk drivers for performance
  • 16 vCPUs allocated (no CPU pinning; CPU time is shared with other workloads on Izar-Host)
  • Sufficient RAM for parallel emerge jobs
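These settings correspond to standard `qm set` options. A sketch of how the VM could be configured this way (the VMID 100 is hypothetical; the drone's actual VMID is not documented here):

```shell
# VirtIO NIC on the main bridge
qm set 100 --net0 virtio,bridge=vmbr0

# VirtIO SCSI controller for the disk (pairs with a scsi0 disk)
qm set 100 --scsihw virtio-scsi-pci

# 16 vCPUs, no pinning -- scheduled alongside other host workloads
qm set 100 --cores 16

# RAM sized for parallel emerge jobs (32 GiB shown as an example)
qm set 100 --memory 32768
```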

Storage on Izar-Host

Izar-Host uses local storage (no shared/networked storage). VM disks live on local ZFS or LVM-thin depending on the original provisioning. There is no backup target configured on Izar-Host itself — backups are manual or handled via external scripts.
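Which backend a given disk landed on can be checked directly on the hypervisor:

```shell
# List configured storage backends and their usage
pvesm status

# ZFS-backed VM disks appear as zvols
zfs list -t volume

# LVM-thin-backed disks appear as thin logical volumes
lvs
```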

Arcturus-Prime (Secondary Hypervisor)

IP: 10.42.0.200
Web UI: https://10.42.0.200:8006
Site: Milky Way (10.42.0.0/24)

Arcturus-Prime is the original server that started the entire project. It was the first Proxmox host, predating Izar-Host and Tarn-Host. Currently underutilized — it has capacity for additional VMs and containers but no active critical workloads. It serves as a reserve hypervisor for overflow or testing.

Current Workloads

No permanent VMs or containers are actively running on Arcturus-Prime. It is available for:

  • Testing new VM configurations before deploying to Izar-Host
  • Overflow capacity if Izar-Host is under heavy build swarm load
  • Development/staging environments

Access

https://10.42.0.200:8006

Same Proxmox credentials as the other hosts. Accessible from the Milky Way LAN or via Tailscale from any peer.

Tarn-Host (Remote Hypervisor)

IP: 192.168.20.100
Tailscale: 100.64.0.107.42
Web UI: https://192.168.20.100:8006
Site: Andromeda (192.168.20.0/24)

Tarn-Host is the sole hypervisor at the Andromeda site. In addition to running VMs and containers, it serves as the Tailscale subnet router for the entire Andromeda LAN, advertising 192.168.20.0/24 into the mesh.
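The subnet-router role boils down to advertising the route and enabling forwarding on Tarn-Host, along the lines of:

```shell
# Advertise the Andromeda LAN into the tailnet
tailscale up --advertise-routes=192.168.20.0/24

# The kernel must forward packets for the advertised subnet
echo 'net.ipv4.ip_forward = 1' > /etc/sysctl.d/99-tailscale.conf
sysctl -p /etc/sysctl.d/99-tailscale.conf

# The route must also be approved in the Tailscale admin console
```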

LXC Containers on Tarn-Host

drone-Tarn

| Property | Value |
|---|---|
| Type | LXC Container |
| IP | 192.168.20.196 |
| Tailscale | 100.64.0.27.91 |
| OS | Gentoo Linux |
| CPU | 14 cores |
| Role | Build swarm drone |

drone-Tarn is an LXC container running Gentoo for the build swarm. With 14 cores, it is the second-highest-capacity drone in the fleet. Communication with the orchestrator on the Milky Way happens over Tailscale.

# /etc/portage/make.conf on drone-Tarn
MAKEOPTS="-j14"
FEATURES="buildpkg fail-clean -getbinpkg -binpkg-multi-instance"

Polaris-Media

| Property | Value |
|---|---|
| Type | LXC Container |
| IP | 192.168.20.201 |
| Role | Plex Media Server (dual instance) |

Polaris-Media runs two independent Plex Media Server instances:

| Instance | Port | Library |
|---|---|---|
| Kraken-daniel | 32400 | Primary media library |
| Kraken-bogie | 32401 | Secondary media library |

Media files are stored on Meridian-Host (192.168.20.50) and mounted into the container via NFS or bind mount. The NVIDIA Shield (192.168.20.65) is the primary client on the Andromeda LAN. Remote Plex access is available via Plex’s own relay or Tailscale direct connect.
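The bind-mount variant could look like the following sketch (the container ID 201 and the paths are hypothetical; adjust to the actual values):

```shell
# On Tarn-Host: mount the NFS export from Meridian-Host
mount -t nfs 192.168.20.50:/export/media /mnt/media

# Expose the mounted directory inside the Polaris-Media container
pct set 201 -mp0 /mnt/media,mp=/media
```

Both Plex instances can then point their libraries at `/media` inside the container.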

gentoo-builder-Tarn-Host

| Property | Value |
|---|---|
| Type | LXC Container |
| Role | Gentoo build environment |

An additional LXC container on Tarn-Host used for Gentoo package building and testing. This is separate from drone-Tarn and may be used for manual builds, testing ebuilds, or building packages that require specific configuration outside the swarm workflow.

Web UI Access

Direct LAN Access

From the Milky Way LAN:

https://10.42.0.201:8006   # Izar-Host
https://10.42.0.200:8006   # Arcturus-Prime

From the Andromeda LAN:

https://192.168.20.100:8006   # Tarn-Host

Tailscale Access

From any Tailscale peer (any location):

https://100.64.0.141.70:8006    # Izar-Host
https://100.64.0.107.42:8006    # Tarn-Host

Arcturus-Prime does not have a Tailscale IP assigned. To reach it remotely, connect to the Milky Way LAN via Altair-Link’s subnet route and then access 10.42.0.200:8006.
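From the remote peer's side, that amounts to accepting advertised subnet routes before browsing to the LAN address:

```shell
# Accept subnet routes advertised into the tailnet (e.g. via Altair-Link)
tailscale up --accept-routes

# Arcturus-Prime is then reachable at its LAN address:
#   https://10.42.0.200:8006
```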

Arcturus-Prime Admin Portal

The Arcturus-Prime web application includes admin pages for Proxmox management:

/admin/proxmox/          # Proxmox overview
/admin/proxmox/Izar-Host        # Izar-Host management
/admin/proxmox/Arcturus-Prime   # Arcturus-Prime management
/admin/proxmox/Tarn-Host     # Tarn-Host management

These pages provide embedded access to Proxmox features including:

  • noVNC: Browser-based VNC console for VMs and containers
  • xterm.js: Browser-based terminal for container shell access
  • VM/container start, stop, and snapshot controls

The admin portal proxies to the Proxmox API, so the user does not need to open port 8006 directly.
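One way such a proxy can authenticate is with a Proxmox API token rather than a password; a hedged sketch (the token name `portal` is illustrative, not the portal's actual configuration):

```shell
# On the Proxmox host: create an API token for the portal backend
pveum user token add root@pam portal --privsep 0

# The backend can then call the Proxmox API with the token, e.g.:
curl -k -H 'Authorization: PVEAPIToken=root@pam!portal=<secret>' \
  https://10.42.0.201:8006/api2/json/nodes
```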

VM and Container Management

Creating New VMs

# SSH to the target hypervisor
ssh [email protected]

# Create a VM (example: new build drone)
qm create 200 \
  --name drone-new \
  --memory 8192 \
  --cores 8 \
  --net0 virtio,bridge=vmbr0 \
  --ide2 local:iso/gentoo-install.iso,media=cdrom \
  --scsi0 local-lvm:32 \
  --boot c --bootdisk scsi0

Creating New LXC Containers

# SSH to Tarn-Host
ssh [email protected]

# Create an LXC container
pct create 300 local:vztmpl/gentoo-latest.tar.xz \
  --hostname new-container \
  --cores 4 \
  --memory 4096 \
  --rootfs local-lvm:20 \
  --net0 name=eth0,bridge=vmbr0,ip=192.168.20.210/24,gw=192.168.20.1

Common Operations

# Start/stop VM
qm start 200
qm stop 200

# Start/stop LXC
pct start 300
pct stop 300

# Enter LXC shell
pct enter 300

# Snapshot
qm snapshot 200 pre-update --description "Before system update"

# List all guests
qm list       # VMs
pct list       # Containers

Backup Strategy

There is no automated Proxmox Backup Server (PBS) deployment. Backups are handled manually or via scripts:

  • VM snapshots: Taken before major changes (qm snapshot)
  • LXC snapshots: Same pattern (pct snapshot)
  • Full backups: vzdump to local storage or NFS mount from Meridian-Host/Cassiel-Silo when needed
  • Config backups: /etc/pve/ is version-controlled by Proxmox internally

Setting up a PBS instance or automated vzdump schedule is a known TODO.
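Until that lands, an interim `vzdump` routine might look like this (storage names and schedule are illustrative, not an existing configuration):

```shell
# One-off full backup of VM 200 to local storage
vzdump 200 --storage local --mode snapshot --compress zstd

# A simple weekly schedule via cron, e.g. /etc/cron.d/vzdump-weekly:
#   0 3 * * 0 root vzdump --all --storage local --mode snapshot --compress zstd --quiet 1
```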

Network Configuration

All three hypervisors use a single bridge (vmbr0) connected to their respective physical network. No VLANs, no SDN, no overlay networking at the Proxmox level. VMs and containers get IPs on the same subnet as the hypervisor host.

# Typical /etc/network/interfaces on a Proxmox host
auto vmbr0
iface vmbr0 inet static
    address 10.42.0.201/24    # (or 192.168.20.100/24 for Tarn-Host)
    gateway 10.42.0.1          # (or 192.168.20.1 for Tarn-Host)
    bridge-ports eno1
    bridge-stp off
    bridge-fd 0