
State of the Lab — Spring 2026

A technical tour of my current homelab: network segmentation, Proxmox, Docker, backups, GitOps, observability, and a few local AI experiments.


I have been meaning to write this down for a while. My homelab has slowly moved from “a few services running somewhere” to something that looks a lot more like a small production platform: segmented network, backup strategy, monitoring, GitOps, a few AI experiments, and just enough DNS servers to keep the ancient homelab spirits satisfied.

This post is a snapshot of the lab as of spring 2026: what runs where, why it is built this way, what is intentionally isolated, and what I am currently experimenting with.

It is not meant to be a perfect reference architecture. It is very much a personal lab: practical, evolving, occasionally over-engineered, and useful enough that breaking it is now considered a family-impacting event.

The rack

The core rack currently contains:

  • UniFi Dream Machine Pro, with a 4G LTE backup for connectivity failover.
  • UniFi 48-port Pro PoE switch, which powers and connects most of the network.
  • Beelink S12 Pro, Intel N100, 16 GB RAM, running Proxmox.
  • HP EliteDesk 800 G2 SFF, Intel i7-6700, 32 GB RAM, running Ubuntu Server bare metal.
  • Synology RS816, with 3 × 4 TB drives, used as the main NAS.
  • Dell PowerEdge R730, dual Xeon E5-2560 v4, 64 GB RAM, currently powered off.
  • Raspberry Pi Zero, running a secondary Pi-hole + Unbound instance.

The R730 is the kind of machine that makes you feel powerful until you look at the power draw. It is currently off, but still available for heavier experiments when needed. The smaller nodes handle the always-on workloads much more efficiently.

The Pi Zero exists for one simple reason: DNS should not be a single point of failure. Also, the issue is always DNS. Obviously.
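
And because I want to notice quickly when one of the two resolvers stops answering, a tiny check script is enough. A minimal sketch, with placeholder resolver addresses rather than the lab's actual addressing:

    // dns-check.ts: confirm that both Pi-hole + Unbound instances answer queries.
    // The resolver IPs below are placeholders, not the lab's actual addressing.
    import { Resolver } from "node:dns/promises";

    const resolvers = ["192.168.1.2", "192.168.1.3"]; // primary Pi-hole, Pi Zero backup

    for (const ip of resolvers) {
      const r = new Resolver();
      r.setServers([ip]);
      try {
        const answers = await r.resolve4("example.com");
        console.log(`${ip} OK -> ${answers.join(", ")}`);
      } catch (err) {
        console.error(`${ip} FAILED: ${(err as Error).message}`);
      }
    }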

Network architecture

The network stack is mainly UniFi, with VLAN-based segmentation. The current layout is:

  • Trusted LAN: personal devices and trusted infrastructure.
  • IoT: untrusted devices that should not freely talk to the rest of the network.
  • Cameras: isolated and without direct internet access.
  • Guests: for visitors.
  • Agent: isolated and untrusted, used for autonomous agent experiments.

The Agent VLAN deserves its own post. The short version is that I treat anything capable of autonomous behavior as untrusted by default. It gets its own network segment and limited, explicit access to the rest of the lab.

For external connectivity, I use Headscale with a VPS acting as a relay server. That gives me reliable access to the tailnet even with CGNAT constraints. I do not directly expose services to the public internet; access is routed through the tailnet instead.

There are still security considerations, of course. Not exposing services publicly is a good baseline, not a magic shield. The internal boundaries still matter: VLANs, narrow service access, least privilege, and not letting experimental systems roam freely on the LAN.

Storage and backups

The Synology RS816 is the main NAS, with three 4 TB drives. It stores the persistent data that should outlive any single compute node.

The NAS has daily offsite backups configured. This is one of the parts of the lab I try not to be clever about. Local redundancy is useful, but it is not a backup strategy by itself. If the house, the rack, or the filesystem has a bad day, I still want a copy somewhere else.

Proxmox Backup Server also runs in the lab and handles VM/container backup workflows for the Proxmox side.

Compute layout

The lab is split between Proxmox and a bare-metal Ubuntu Server.

The Beelink S12 Pro is the Proxmox node. It runs a mix of infrastructure services, VMs, and LXCs:

  • PostgreSQL
  • Proxmox Backup Server
  • Pi-hole + Unbound
  • Debian VM, used as a Docker host
  • Home Assistant OS VM
  • Ubuntu Server VM for the Hermes Agent stack
  • Agent proxy LXC

The HP EliteDesk runs Ubuntu Server directly on bare metal and hosts Docker workloads that benefit from being outside the Proxmox node, including heavier media and AI-adjacent services.

I like this split. Proxmox gives me flexibility for infrastructure and VM-style workloads, while the bare-metal server keeps some Docker-heavy services simple and predictable.

Docker workloads

The bare-metal Ubuntu Server currently runs several Docker services:

  • Frigate, for camera/NVR workloads.
  • BookLore, for personal library management.
  • A Mullvad VPN tunnel exposed as a Tailscale exit node.
  • Hindsight, used for LLM memory experiments.
  • Manifest, used for LLM request routing for agents.
  • paperless-ngx, for document management.
  • Komodo periphery agent, for remote Docker orchestration.

Some of these are stable daily-use services. Others are part of the current AI/agent experimentation layer and will get their own write-up later.

The Debian VM on Proxmox is also a Docker host and runs the more general application stack:

  • Gitea
  • Homepage
  • InfluxDB, used to track astrophotography sessions
  • Komodo, as the Docker control plane
  • NTP
  • Orbital Sync, to synchronize the Pi-hole instances
  • Synapse
  • Traefik
  • Promtail / Loki / Grafana
  • Uptime Kuma
  • Vaultwarden
  • Pingvin
  • Authentik

This is the part of the lab that feels closest to a small internal platform. It has identity, reverse proxying, observability, uptime monitoring, source hosting, deployment control, and a mix of personal applications.

Observability and operations

The monitoring stack is built around Promtail, Loki, and Grafana, with Uptime Kuma for simple availability checks.

I do not pretend this is enterprise-grade SRE. It is a homelab. But I do want the same basic qualities I would expect from a serious system:

  • I should know when something is down.
  • I should be able to inspect logs without SSHing into every machine.
  • I should be able to correlate failures across services.
  • I should be able to redeploy without manually editing files on servers.

The goal is not complexity for its own sake. The goal is repeatability. When something breaks, I want the fix to become part of the system, not a one-off command I will forget three months later.
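
As a concrete example of what "inspect logs without SSHing into every machine" means here: Loki exposes an HTTP query API, so pulling the last hour of error lines for one service is a single request. A minimal sketch, where the Loki address and the container label are assumptions about how Promtail is set up:

    // loki-errors.ts: fetch the last hour of error lines for one container from Loki.
    // The Loki address and the "container" label are assumptions about the Promtail setup.
    const LOKI = "http://loki.lab.internal:3100";

    const params = new URLSearchParams({
      query: '{container="traefik"} |= "error"',
      start: new Date(Date.now() - 3_600_000).toISOString(),
      end: new Date().toISOString(),
      limit: "100",
    });

    const res = await fetch(`${LOKI}/loki/api/v1/query_range?${params}`);
    const body = await res.json();

    // Streams come back as [nanosecond timestamp, log line] pairs.
    for (const stream of body.data.result) {
      for (const [ts, line] of stream.values) {
        console.log(new Date(Number(ts) / 1_000_000).toISOString(), line);
      }
    }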

GitOps with Komodo

I use Komodo as the Docker control plane, with a GitOps-style workflow.

Compose stacks live in Git. When I push changes to main, the modified stacks are redeployed. This gives me a clean workflow:

  1. Change the compose definition.
  2. Commit and push.
  3. Let the platform apply the change.
  4. Roll back from Git if needed.
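
Komodo does the triggering and deploying itself; conceptually, though, the reconciliation step is a small loop: pull, diff, redeploy whatever changed. A rough sketch of that shape, assuming one directory per stack, and explicitly not how Komodo implements it:

    // redeploy.ts: conceptual sketch of the pull/diff/redeploy loop, not Komodo's implementation.
    // Assumes the repo checkout lives at /srv/stacks with one directory per compose stack.
    import { execSync } from "node:child_process";

    const repo = "/srv/stacks"; // assumed checkout location
    const run = (cmd: string) => execSync(cmd, { cwd: repo, encoding: "utf8" });

    run("git pull --ff-only");

    // Files touched by the pull we just did (previous HEAD vs new HEAD).
    const changed = run("git diff --name-only HEAD@{1} HEAD").split("\n").filter(Boolean);

    // Redeploy each stack directory that saw a change.
    for (const stack of new Set(changed.map((f) => f.split("/")[0]))) {
      console.log(`redeploying ${stack}`);
      execSync("docker compose up -d", { cwd: `${repo}/${stack}`, stdio: "inherit" });
    }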

This is one of the highest-value improvements I have made to the lab. It makes changes deliberate and reviewable, even when I am the only person reviewing them. It also makes the lab feel closer to how I like production systems to behave: declarative, reproducible, and boring in the right places.

AI and agent experiments

The most experimental part of the lab right now is the AI layer.

I have been testing local LLM workloads, including Qwen 3.6 27B at 256K context on an RTX 4090 using llama.cpp. This is not part of the always-on rack, but it is part of the wider lab environment.
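
llama.cpp's bundled server exposes an OpenAI-compatible chat endpoint, so talking to the local model is a single HTTP call. A minimal sketch; the host, port, and prompt are assumptions rather than my actual setup:

    // local-llm.ts: one chat completion against a llama.cpp server (llama-server).
    // Host, port and prompt are assumptions; llama-server serves whichever model it loaded.
    const res = await fetch("http://127.0.0.1:8080/v1/chat/completions", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({
        messages: [
          { role: "system", content: "You are a terse homelab assistant." },
          { role: "user", content: "Summarize last night's astrophotography session." },
        ],
        max_tokens: 256,
      }),
    });

    const data = await res.json();
    console.log(data.choices[0].message.content);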

The practical goal is not just “run a model locally because it is cool”, although that is admittedly part of the appeal. I am more interested in what becomes possible when local models are combined with controlled access to personal infrastructure: document search, tagging, memory, service health, astrophotography metadata, and eventually autonomous workflows.

Two current projects sit in that space:

OCR and AI tagging for paperless-ngx

I am working on integrating OCR and AI-assisted tagging into paperless-ngx. The goal is to make document ingestion more useful without turning it into another manual filing system.

A good document system should help answer questions later, not just store PDFs in a slightly prettier folder structure.
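
Most of the integration hangs off the paperless-ngx REST API: pull a document's OCR text, decide on tags, write them back. A rough sketch of that loop, where the host, token handling, and tag IDs are placeholders and suggestTags() is a stand-in for the actual model call:

    // tag-documents.ts: sketch of AI-assisted tagging via the paperless-ngx REST API.
    // Host, token and tag IDs are placeholders; suggestTags() stands in for the model call.
    const PAPERLESS = "http://paperless.lab.internal:8000";
    const TOKEN = process.env.PAPERLESS_TOKEN ?? "";
    const headers = { Authorization: `Token ${TOKEN}`, "Content-Type": "application/json" };

    // Hypothetical helper: ask the local model which tag IDs fit this document.
    async function suggestTags(text: string): Promise<number[]> {
      return text.toLowerCase().includes("invoice") ? [3] : []; // stand-in logic
    }

    // Pull untagged documents, then patch suggested tags back onto each one.
    const res = await fetch(`${PAPERLESS}/api/documents/?is_tagged=false`, { headers });
    const { results } = await res.json();

    for (const doc of results) {
      const tags = await suggestTags(doc.content ?? "");
      if (tags.length === 0) continue;
      await fetch(`${PAPERLESS}/api/documents/${doc.id}/`, {
        method: "PATCH",
        headers,
        body: JSON.stringify({ tags }),
      });
      console.log(`tagged ${doc.title} with ${tags.join(", ")}`);
    }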

Hermes autonomous agent

Hermes is my autonomous self-learning agent experiment.

It lives in the isolated Agent VLAN and does not get broad LAN access. Instead, it can interact with selected internal services through a Fastify API that exposes a narrow set of capabilities: things like astrophotography data, service health, and other controlled endpoints.

This is intentionally designed as a constrained environment. I do not want an agent that can freely explore the LAN or mutate infrastructure because it had an interesting thought at 2 a.m. The architecture is built around explicit boundaries: isolated network, limited API surface, and controlled access to useful data.
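
The Fastify layer itself is deliberately boring: a handful of read-mostly routes and nothing that can touch infrastructure. A minimal sketch of the shape, with hypothetical route names and stub data rather than the real Hermes API:

    // agent-api.ts: sketch of the narrow capability surface exposed to the Agent VLAN.
    // Route names and data are illustrative, not the actual Hermes API.
    import Fastify from "fastify";

    const app = Fastify({ logger: true });

    // Read-only: recent astrophotography sessions (hypothetical data source).
    app.get("/astro/sessions", async () => {
      return [{ date: "2026-03-14", target: "M81", exposures: 120 }];
    });

    // Read-only: service health summary, not raw access to the monitoring stack.
    app.get("/health/summary", async () => {
      return { dns: "ok", homeassistant: "ok", backups: "ok" };
    });

    // No generic proxy routes, no shell endpoints, no write access to infrastructure.
    await app.listen({ port: 3000, host: "0.0.0.0" });

Anything new the agent needs becomes another explicit route, not broader network access.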

There will be a dedicated post about Hermes, the Agent VLAN, Hindsight, Manifest, and the overall agent architecture. This post is already long enough, and I still need people to believe I sometimes go outside.

Why build it this way?

A homelab is a good place to practice the boring parts of engineering.

Running services is easy. Running services that are segmented, backed up, observable, reproducible, and not completely terrifying to modify is the useful part.

This lab lets me work on:

  • Linux administration
  • Networking and VLAN design
  • Reverse proxy and identity patterns
  • Backup and recovery
  • Docker and Compose operations
  • GitOps workflows
  • Observability
  • Local AI integration
  • Security boundaries for autonomous systems

It also gives me a place to test ideas before I bring similar patterns into professional work. The stakes are lower than production, but high enough that bad decisions have consequences. If Home Assistant, DNS, or the network goes down, I will hear about it quickly.

What is next

The next posts will probably go deeper into:

  • The Agent VLAN and how I isolate autonomous systems.
  • Hermes, my self-learning agent experiment.
  • Hindsight and LLM memory.
  • Manifest and request routing for agents.
  • OCR and AI tagging in paperless-ngx.

For now, this is the state of the lab: mostly stable, deliberately segmented, increasingly automated, and just experimental enough to remain interesting.

Also, backed up. Because I would like my future self to keep speaking to me.