# Infrastructure as Code for the Homelab
When you manage multiple servers across different locations and architectures — Gentoo, Proxmox, K3s, NAS devices — SSH loops and manual commands don’t scale.
I needed a way to define the state of my infrastructure, not just the steps to fix things.
## Ansible for Configuration
One server acts as the Ansible control node. From there, I can target:
- **Local hosts** - servers on the home network
- **Remote hosts** - servers at a secondary location, connected via VPN
### Inventory Structure
```ini
[local]
proxmox-host ansible_host=10.42.0.50
k3s-node     ansible_host=10.42.0.100
nas          ansible_host=10.42.0.10

[remote]
remote-proxmox ansible_host=192.168.20.50
remote-nas     ansible_host=192.168.20.10

[gentoo:children]
local
remote
```
Group by location, by OS, by role — whatever makes sense for your playbooks.
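Group variables keep per-location connection details out of the playbooks themselves. A minimal sketch of what a group file might hold (the user and SSH options here are placeholder assumptions, not values from my setup):

```yaml
# inventory/group_vars/remote.yml -- hypothetical example values
ansible_user: deploy
# Remote hosts are only reachable through the VPN tunnel, so a
# slightly longer connect timeout avoids spurious failures.
ansible_ssh_common_args: "-o ConnectTimeout=15"
```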
### Key Playbooks
**System Updates:**
```yaml
# update-gentoo.yml
- hosts: gentoo
  become: yes
  tasks:
    - name: Sync portage
      command: emerge --sync

    - name: Update world
      command: emerge -uDN @world
      register: emerge_result

    - name: Reboot if kernel updated
      reboot:
      when: "'sys-kernel' in emerge_result.stdout"
```
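Updating every Gentoo box simultaneously is risky; Ansible's `serial` keyword turns a play into a rolling update. A sketch of the same play with batching added (one host at a time, stopping on the first failure):

```yaml
# update-gentoo.yml -- same play with rolling-update safety
- hosts: gentoo
  become: yes
  serial: 1              # finish each host before starting the next
  any_errors_fatal: true # a failed update halts the run instead of spreading
  tasks:
    # ... same sync/update/reboot tasks as above
```

With `serial: 1`, a kernel update that breaks one machine leaves the rest of the fleet untouched.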
**Service Deployment:**
```yaml
# deploy-monitoring.yml
- hosts: all
  roles:
    - node_exporter
    - promtail
  vars:
    prometheus_server: "10.42.0.100:9090"
    loki_server: "10.42.0.100:3100"
```
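Each role carries its own defaults, which the play-level `vars` above can override. A sketch of what a `node_exporter` role's defaults file might look like (the variable names are my assumptions for illustration, not a published role's interface):

```yaml
# roles/node_exporter/defaults/main.yml -- hypothetical defaults
node_exporter_version: "1.7.0"
node_exporter_listen_address: "0.0.0.0:9100"
```

Keeping tunables in `defaults/` means a host-specific override is a one-line inventory change, not an edit to the role.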
One command updates every server. One command deploys monitoring everywhere.
## Terraform for External Resources
Ansible handles what’s inside the machines. Terraform handles what’s outside:
- Cloudflare DNS records
- Cloud VPS instances (for off-site backups)
- S3 buckets for Terraform state and backups
```hcl
# dns.tf
resource "cloudflare_record" "homelab" {
  zone_id = var.cloudflare_zone_id
  name    = "home"
  value   = var.tunnel_cname
  type    = "CNAME"
  proxied = true
}
```
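Those `var.` references need matching declarations. A sketch of the corresponding variables file (descriptions are mine; no defaults, since the values are per-zone and supplied at plan time):

```hcl
# variables.tf
variable "cloudflare_zone_id" {
  type        = string
  description = "Zone ID for the homelab domain"
}

variable "tunnel_cname" {
  type        = string
  description = "CNAME target of the Cloudflare Tunnel"
}
```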
### Remote State
Terraform state is stored in S3 (encrypted), not locally. If my house burns down, the external infrastructure configuration survives.
```hcl
terraform {
  backend "s3" {
    bucket  = "homelab-terraform-state"
    key     = "infrastructure/terraform.tfstate"
    region  = "us-east-1"
    encrypt = true
  }
}
```
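One thing that configuration doesn't cover is state locking. If more than one machine (or a CI job) can run `terraform apply`, the S3 backend can use a DynamoDB table to prevent concurrent state writes. A sketch, assuming a table (the name here is hypothetical) with a `LockID` string partition key already exists:

```hcl
terraform {
  backend "s3" {
    bucket         = "homelab-terraform-state"
    key            = "infrastructure/terraform.tfstate"
    region         = "us-east-1"
    encrypt        = true
    dynamodb_table = "homelab-terraform-locks" # assumed table with LockID key
  }
}
```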
## The Monorepo
Everything lives in one Git repository:
```
infrastructure/
├── ansible/
│   ├── inventory/
│   ├── playbooks/
│   └── roles/
├── terraform/
│   ├── cloudflare/
│   ├── backups/
│   └── modules/
└── kubernetes/
    └── flux-manifests/
```
This is the source of truth. If it’s not in Git, it doesn’t exist.
## Why Bother?
For one or two servers, this is overkill. But once you have:
- Multiple locations
- Different operating systems
- Services that need consistent configuration
Manual management becomes a liability. You forget what you changed. Servers drift. Things break and you don’t know why.
Infrastructure as Code means:
- **Reproducibility** - rebuild any server from scratch
- **Version history** - see what changed and when
- **Consistency** - every server configured the same way
- **Documentation** - the code is the documentation
Stop SSHing into servers to make changes. Define the state you want and let the tools make it happen.