Homelab Notes

The Lab

Quick snapshot of the hardware stack and the services I keep running. Gear changes often, but the goal is the same: build repeatable environments I can break and rebuild without thinking too hard.

Hardware stack

Proxmox cluster
  • Three Dell R420 nodes (12 cores / 32 GB RAM each) plus one Dell R620 (24 cores / 32 GB RAM).
  • All nodes mount the same TrueNAS-backed ZFS pool over 10 GbE NFS.
HP Z4 workstation
  • Standalone Proxmox host with a 10-core Xeon W-2155, 128 GB RAM, and an RX 580 GPU for LLM/automation tests.
TrueNAS SCALE
  • 7.8 TiB ZFS pool with frequent snapshots and monthly offsite backups (see the snapshot-check sketch after this list).
  • Serves nearly every VM/container volume and ingests workstation backups.
Networking
  • Ubiquiti routing/switching with segmented VLANs separating core, lab, DMZ, and game workloads.
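
Roughly how I keep the "frequent snapshots" claim honest, as a Python sketch. It assumes the zfs CLI is on PATH (run on the TrueNAS host or over SSH); the dataset name and age threshold are placeholders, not my real pool layout.

# Hypothetical sketch: warn if the newest snapshot on a dataset is older than a day.
# Dataset name and threshold are placeholders; assumes the zfs CLI is available.
import subprocess
import time

DATASET = "tank/vmstore"      # placeholder dataset
MAX_AGE_SECONDS = 24 * 3600   # alert threshold: one day

# -H drops headers, -p prints creation time as a raw epoch timestamp
out = subprocess.run(
    ["zfs", "list", "-t", "snapshot", "-H", "-p",
     "-o", "name,creation", "-s", "creation", "-r", DATASET],
    capture_output=True, text=True, check=True,
).stdout.strip()

if not out:
    print(f"no snapshots found for {DATASET}")
else:
    name, creation = out.splitlines()[-1].split("\t")
    age = time.time() - int(creation)
    status = "OK" if age < MAX_AGE_SECONDS else "STALE"
    print(f"{status}: newest snapshot {name} is {age / 3600:.1f}h old")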

Services I host

Most of these projects get logged in GitHub eventually. If you want to dig deeper or borrow a config, check the repos there.

Portainer + Ansible/AWX

Keeps containers and automation jobs consistent; everything ends up in Git so I can rebuild quickly.
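
When I want to kick off a rebuild without opening the AWX UI, a short script can launch the relevant job template over AWX's REST API. A minimal sketch, assuming token authentication; the hostname, template ID, and token variable are placeholders, not my real setup.

# Hypothetical sketch: launch an AWX job template via its REST API.
# Hostname, template ID, and token are placeholders.
import os
import requests

AWX_URL = "https://awx.lab.example"   # placeholder hostname
TEMPLATE_ID = 42                       # placeholder job template ID
TOKEN = os.environ["AWX_TOKEN"]        # personal access token created in AWX

resp = requests.post(
    f"{AWX_URL}/api/v2/job_templates/{TEMPLATE_ID}/launch/",
    headers={"Authorization": f"Bearer {TOKEN}"},
    timeout=30,
)
resp.raise_for_status()
print("launched job", resp.json()["id"])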

Wazuh + OTEL-fed Grafana

Security monitoring and lab observability. Grafana ingests OpenTelemetry streams directly from Proxmox and the services themselves.
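
For services that don't ship their own exporter, a few lines of instrumentation are enough to push metrics to the collector over OTLP. A minimal sketch using the opentelemetry-python SDK (opentelemetry-sdk plus opentelemetry-exporter-otlp); the collector endpoint, meter, and metric names are placeholders.

# Hypothetical sketch: push a custom counter to the OTEL collector over OTLP/gRPC.
from opentelemetry import metrics
from opentelemetry.exporter.otlp.proto.grpc.metric_exporter import OTLPMetricExporter
from opentelemetry.sdk.metrics import MeterProvider
from opentelemetry.sdk.metrics.export import PeriodicExportingMetricReader

# Placeholder collector endpoint; 4317 is the default OTLP/gRPC port.
exporter = OTLPMetricExporter(endpoint="http://otel.lab.example:4317", insecure=True)
provider = MeterProvider(metric_readers=[PeriodicExportingMetricReader(exporter)])
metrics.set_meter_provider(provider)

# Placeholder meter/metric names: count completed backup jobs per host.
meter = metrics.get_meter("homelab.jobs")
backup_runs = meter.create_counter("backup_runs", description="completed backup jobs")
backup_runs.add(1, {"host": "truenas"})

# Flush pending metrics before the script exits.
provider.shutdown()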

Terraform & IaC experiments

Used to bootstrap lab VLANs, user access, and Pelican deployments when I feel like treating homelab work like production.
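
In practice that mostly means running the same init/plan/apply loop across environment directories. A rough sketch of a wrapper that does it; the directory layout is a placeholder, and the flags are standard Terraform CLI options.

# Hypothetical sketch: apply each environment directory in turn, production-style.
import subprocess

ENVIRONMENTS = ["envs/core", "envs/lab", "envs/dmz"]   # placeholder layout

for env in ENVIRONMENTS:
    subprocess.run(["terraform", f"-chdir={env}", "init", "-input=false"], check=True)
    subprocess.run(["terraform", f"-chdir={env}", "plan", "-out=tfplan"], check=True)
    subprocess.run(["terraform", f"-chdir={env}", "apply", "-input=false", "tfplan"], check=True)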

Home Assistant, DNS, and utility containers

The boring-but-necessary apps that keep the house running (and give me more automation excuses).

Pelican game servers

Automated Minecraft/Rust worlds for friends, with backups and restarts baked in.
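
The "baked in" part is essentially a scheduled hook like the sketch below: archive the world directory, then bounce the container. Paths and the container name are placeholders, not what Pelican actually generates.

# Hypothetical sketch: nightly world backup followed by a container restart.
import subprocess
import tarfile
from datetime import datetime
from pathlib import Path

WORLD_DIR = Path("/srv/pelican/minecraft/world")   # placeholder path
BACKUP_DIR = Path("/mnt/truenas/game-backups")     # placeholder NFS target
CONTAINER = "pelican-minecraft"                     # placeholder container name

BACKUP_DIR.mkdir(parents=True, exist_ok=True)
stamp = datetime.now().strftime("%Y%m%d-%H%M")
archive = BACKUP_DIR / f"minecraft-world-{stamp}.tar.gz"

with tarfile.open(archive, "w:gz") as tar:
    tar.add(str(WORLD_DIR), arcname="world")

# Restart only after the archive is written so players see one interruption.
subprocess.run(["docker", "restart", CONTAINER], check=True)
print(f"backed up to {archive} and restarted {CONTAINER}")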

FreshRSS

Self-hosted feed reader so I can keep tabs on blogs and CVE feeds without relying on third parties.

Homarr dashboard

Quick-glance UI for the services, containers, and monitoring endpoints I hit every day.

Netboot + provisioning

PXE/Netboot stack for imaging and reinstalling lab nodes without rummaging for USB drives.
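
The HTTP side of the chainload can be as simple as a static file server; a minimal sketch below with a placeholder assets directory and port (DHCP/TFTP for the initial bootloader is handled elsewhere, e.g. dnsmasq).

# Hypothetical sketch: serve netboot assets over HTTP for iPXE chainloading.
import functools
from http.server import HTTPServer, SimpleHTTPRequestHandler

# Placeholder assets directory and port.
handler = functools.partial(SimpleHTTPRequestHandler, directory="/srv/netboot/assets")
HTTPServer(("0.0.0.0", 8080), handler).serve_forever()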

Nginx reverse proxy

Handles TLS termination and routing for anything exposed externally, with ACME automation.
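
To confirm the ACME renewals are actually landing, a quick expiry check against the public endpoint does the job. A minimal sketch; the hostname is a placeholder.

# Hypothetical sketch: report days remaining on the cert the proxy serves.
import socket
import ssl
import time

HOST = "lab.example.org"   # placeholder external hostname
PORT = 443

ctx = ssl.create_default_context()
with socket.create_connection((HOST, PORT), timeout=10) as sock:
    with ctx.wrap_socket(sock, server_hostname=HOST) as tls:
        cert = tls.getpeercert()

expires = ssl.cert_time_to_seconds(cert["notAfter"])
days_left = (expires - time.time()) / 86400
print(f"{HOST}: certificate expires in {days_left:.0f} days")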

OpenWebUI + Ollama

Local LLM interface that rides on the HP Z4’s GPU resources for quick experiments.
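
Anything on the lab VLAN can talk to it through Ollama's HTTP API. A minimal sketch of a one-off generate call; the hostname and model tag are placeholders.

# Hypothetical sketch: single non-streaming completion from the Ollama API.
import requests

OLLAMA_URL = "http://z4.lab.example:11434"   # placeholder hostname
payload = {
    "model": "llama3",   # placeholder model tag
    "prompt": "Summarize today's Wazuh alerts in one sentence.",
    "stream": False,     # return one JSON object instead of streamed chunks
}

resp = requests.post(f"{OLLAMA_URL}/api/generate", json=payload, timeout=120)
resp.raise_for_status()
print(resp.json()["response"])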