Why Self-Service Matters
In my homelab, I was the bottleneck. Every new Kubernetes cluster meant: create YAML → find free IPs → configure node sizes → manually create Backstage catalog entries → open PR → wait for review. That’s six steps, four of which could be automated.
The insight: the infrastructure is already defined as YAML, so Backstage should consume that same YAML and generate its own catalog entries.
Real-world constraint: I deploy clusters infrequently (quarterly?), so I forget the steps. The templated approach ensures consistency whether I do this once a month or once a year.
This post covers the Backstage integration for homelab infrastructure. See the architecture overview for how it fits in the broader platform.
Two Integration Points
Backstage integrates with the homelab infrastructure in two ways:
- Resource Catalog — auto-generated entities from infrastructure YAML configurations
- Software Templates — scaffolder templates for self-service provisioning
```mermaid
graph LR
    subgraph "Configuration"
        C[configurations/*.yaml]
    end
    subgraph "Generation"
        G[generate_backstage_catalog.py]
    end
    subgraph "Backstage"
        R[Resource Catalog]
        T[Software Templates]
    end
    C --> G
    G --> R
    G --> T
```
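Stripped to its essentials, the flow in the diagram is a pure transformation from configuration dict to entity dict. A minimal sketch (the helper name `to_resource_entity` and the trimmed field set are mine; the real generator is `scripts/generate_backstage_catalog.py`):

```python
# Minimal sketch of the configuration -> catalog-entity transformation.
# The real generator extracts far more metadata; the field names here
# mirror the generated entities shown in the sections below.
def to_resource_entity(resource_type: str, config: dict) -> dict:
    cluster = config.get("cluster", {})
    return {
        "apiVersion": "backstage.io/v1alpha1",
        "kind": "Resource",
        "metadata": {
            "name": cluster.get("name", ""),
            "annotations": {"homelab.dev/resource-type": resource_type},
        },
        "spec": {"type": f"{resource_type}-cluster"},
    }

entity = to_resource_entity("kubernetes", {"cluster": {"name": "prod-k8s"}})
print(entity["metadata"]["name"])  # prod-k8s
```

Because everything is derived from the source YAML, regenerating the catalog is repeatable: the same configurations always produce the same entities.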
Auto-Generated Catalog
Running `make backstage-catalog` generates Backstage Resource entities from the configurations:
```shell
make backstage-catalog
# ✓ kubernetes--prod-k8s.yaml (prod-k8s, disabled)
# ✓ kubernetes--dev-k8s.yaml (dev-k8s, disabled)
# ✓ docker--prod-docker-lxc.yaml (prod-docker-lxc, disabled)
# ✓ docker--dev-docker-lxc.yaml (dev-docker-lxc, enabled)
```

Generated Entity Example
Each generated YAML file is a Backstage Resource:
```yaml
# backstage/catalog/kubernetes--prod-k8s.yaml
apiVersion: backstage.io/v1alpha1
kind: Resource
metadata:
  name: prod-k8s
  description: Talos kubernetes cluster configuration for production environment
  annotations:
    github.com/project-slug: your-org/your-infra-repo
    homelab.dev/configuration-file: configurations/kubernetes/prod-k8s.yaml
    homelab.dev/resource-type: kubernetes
    homelab.dev/schema-file: configuration_schemas/kubernetes.schema.yaml
    backstage.io/techdocs-entity: component:terraform-module-kubernetes
    homelab.dev/cluster-name: prod-k8s
    homelab.dev/talos-version: v1.12.4
    homelab.dev/kubernetes-version: v1.35.0
    homelab.dev/vip-address: 192.168.62.20
  labels:
    homelab.dev/enabled: 'false'
    homelab.dev/environment: prod
    homelab.dev/control-plane-count: '3'
    homelab.dev/worker-count: '3'
    homelab.dev/size-control_plane-cpu: '4'
    homelab.dev/size-control_plane-memory: '8192'
    homelab.dev/size-worker-cpu: '10'
    homelab.dev/size-worker-memory: '49152'
  tags:
    - disabled
    - kubernetes
    - prod
    - proxmox
    - talos
spec:
  type: kubernetes-cluster
  lifecycle: experimental
  owner: group:default/homelab-admins
  system: tf-infra-homelab
  dependsOn:
    - component:default/terraform-module-kubernetes
```

Docker Cluster Entity
```yaml
# backstage/catalog/docker--prod-docker-lxc.yaml
apiVersion: backstage.io/v1alpha1
kind: Resource
metadata:
  name: prod-docker-lxc
  description: Docker configuration on lxc for production environment
  annotations:
    github.com/project-slug: your-org/your-infra-repo
    homelab.dev/configuration-file: configurations/docker/prod-docker-lxc.yaml
    homelab.dev/resource-type: docker
    homelab.dev/cluster-name: prod-docker-lxc
    homelab.dev/cluster-type: lxc
    homelab.dev/vip-address: 192.168.61.20
  labels:
    homelab.dev/enabled: 'false'
    homelab.dev/environment: prod
    homelab.dev/worker-count: '3'
    homelab.dev/size-medium-cpu: '8'
    homelab.dev/size-medium-memory: '32768'
  tags:
    - disabled
    - docker
    - lxc
    - prod
    - proxmox
spec:
  type: docker-cluster
  lifecycle: experimental
  owner: group:default/homelab-admins
  system: tf-infra-homelab
  dependsOn:
    - component:default/terraform-module-docker
```

Metadata Extraction
The generation script extracts key metadata from configurations:
```python
# scripts/generate_backstage_catalog.py
def extract_kubernetes_metadata(config: dict) -> dict:
    """Extract catalog-relevant metadata from a Kubernetes configuration."""
    annotations = {}
    labels = {}

    cluster = config.get("cluster", {})
    annotations["homelab.dev/cluster-name"] = cluster.get("name", "")

    talos = cluster.get("talos", {}) or {}
    annotations["homelab.dev/talos-version"] = talos.get("version", "")
    annotations["homelab.dev/kubernetes-version"] = cluster.get("kubernetes_version", "")

    cp_nodes = config.get("control_plane_nodes", {}).get("nodes", [])
    worker_nodes = config.get("worker_nodes", {}).get("nodes", [])
    labels["homelab.dev/control-plane-count"] = str(len(cp_nodes))
    labels["homelab.dev/worker-count"] = str(len(worker_nodes))

    cp_vip = config.get("control_plane_nodes", {}).get("vip", {})
    if cp_vip and cp_vip.get("enabled"):
        annotations["homelab.dev/vip-address"] = cp_vip.get("address", "")

    sizes = config.get("node_size_configuration", {})
    for size_name, size_spec in sizes.items():
        labels[f"homelab.dev/size-{size_name}-cpu"] = str(size_spec.get("cpu", ""))
        labels[f"homelab.dev/size-{size_name}-memory"] = str(size_spec.get("memory", ""))

    return {"annotations": annotations, "labels": labels}
```

This enables filtering in Backstage:
- `homelab.dev/environment=prod` — production clusters
- `homelab.dev/enabled=true` — currently deployed
- `homelab.dev/size-worker-memory=49152` — large workers
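A sketch of what that label filtering amounts to, using trimmed copies of the generated entities (only the labels relevant here are kept; the `by_label` helper is mine, not Backstage code):

```python
# Trimmed copies of the generated catalog entities, labels only.
entities = [
    {"metadata": {"name": "prod-k8s",
                  "labels": {"homelab.dev/environment": "prod",
                             "homelab.dev/enabled": "false"}}},
    {"metadata": {"name": "dev-docker-lxc",
                  "labels": {"homelab.dev/environment": "dev",
                             "homelab.dev/enabled": "true"}}},
]

def by_label(entities: list, key: str, value: str) -> list:
    """Return names of entities whose label `key` equals `value`."""
    return [e["metadata"]["name"] for e in entities
            if e["metadata"]["labels"].get(key) == value]

print(by_label(entities, "homelab.dev/enabled", "true"))  # ['dev-docker-lxc']
```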
Catalog-Info Definition
The root `catalog-info.yaml` defines the domain, system, and components:

```yaml
# Domain: homelab
apiVersion: backstage.io/v1alpha1
kind: Domain
metadata:
  name: homelab
  description: Self-hosted homelab infrastructure managed with Terraform and Proxmox
  annotations:
    backstage.io/techdocs-ref: dir:.
    github.com/project-slug: your-org/your-infra-repo
spec:
  owner: group:default/homelab-admins
---
# System: tf-infra-homelab
apiVersion: backstage.io/v1alpha1
kind: System
metadata:
  name: tf-infra-homelab
  description: Terraform-managed homelab infrastructure provisioning system
spec:
  owner: group:default/homelab-admins
  domain: homelab
---
# Component: terraform-module-kubernetes
apiVersion: backstage.io/v1alpha1
kind: Component
metadata:
  name: terraform-module-kubernetes
  description: Terraform module for provisioning Kubernetes (Talos) clusters on Proxmox
spec:
  type: terraform-module
  lifecycle: production
  owner: group:default/homelab-admins
  system: tf-infra-homelab
---
# Location: discovers auto-generated resources
apiVersion: backstage.io/v1alpha1
kind: Location
metadata:
  name: tf-infra-homelab-resources
spec:
  targets:
    - ./backstage/catalog/*.yaml
```

Software Templates
The Backstage scaffolder templates enable self-service provisioning:
Kubernetes Template
```yaml
# backstage/templates/kubernetes/template.yaml
apiVersion: scaffolder.backstage.io/v1beta3
kind: Template
metadata:
  name: provision-kubernetes-cluster
  title: Provision Kubernetes Cluster
  description: Create a new Talos-based Kubernetes cluster configuration on Proxmox
  tags:
    - terraform
    - kubernetes
    - talos
    - proxmox
    - homelab
spec:
  owner: group:default/homelab-admins
  type: infrastructure
  system: tf-infra-homelab
```

Template Parameters
The template accepts parameters for cluster configuration:
```yaml
parameters:
  - title: Cluster Identity
    required:
      - name
      - environment
    properties:
      name:
        title: Cluster Name
        type: string
        pattern: "^[a-z][a-z0-9-]+$"
      environment:
        title: Environment
        type: string
        enum:
          - dev
          - staging
          - prod
  - title: Cluster Configuration
    properties:
      talos_version:
        title: Talos Version
        type: string
        default: v1.12.4
      kubernetes_version:
        title: Kubernetes Version
        type: string
        default: v1.35.0
      disable_default_cni:
        title: Disable Default CNI
        type: boolean
        default: true
  - title: Control Plane Nodes
    properties:
      cp_count:
        title: Number of Control Plane Nodes
        type: integer
        minimum: 1
        maximum: 5
        default: 3
      cp_cpu:
        title: CPU Cores per CP Node
        type: integer
        default: 4
      cp_memory:
        title: Memory per CP Node (MB)
        type: integer
        default: 8192
  - title: Worker Nodes
    properties:
      worker_count:
        title: Number of Workers
        type: integer
        default: 3
      worker_cpu:
        title: CPU Cores per Worker
        type: integer
        default: 8
      worker_memory:
        title: Memory per Worker (MB)
        type: integer
        default: 16384
```

Template Steps
Each template has three steps:
```yaml
steps:
  - id: generate
    name: Generate Configuration
    action: fetch:template
    input:
      url: ./skeleton/kubernetes
      targetPath: configurations/kubernetes
      values:
        name: ${{ parameters.name }}
        talos_version: ${{ parameters.talos_version }}
        cp_count: ${{ parameters.cp_count }}
  - id: generate-catalog
    name: Generate Backstage Catalog Entry
    action: fetch:template
    input:
      url: ./skeleton/backstage/catalog
      targetPath: backstage/catalog
  - id: publish
    name: Open Pull Request
    action: publish:github:pull-request
    input:
      repoUrl: github.com?repo=tf-infra-homelab&owner=your-org
      title: "feat: provision Kubernetes cluster ${{ parameters.name }}"
```

User Flow
In Backstage, users:
- Choose template — “Provision Kubernetes Cluster”
- Fill parameters — name, environment, node sizes
- Submit — opens a PR automatically
- Review — maintainers approve the PR
- Apply — Terraform provisions the cluster
```mermaid
sequenceDiagram
    participant User
    participant BS as Backstage
    participant Maintainer
    participant GH as GitHub
    participant TF as Terraform
    participant PVE as Proxmox
    participant Talos
    participant Flux
    User->>BS: Create new cluster (fill form)
    BS->>GH: Open PR with config files
    Maintainer->>GH: Review and approve PR
    GH->>TF: Merge triggers apply
    TF->>PVE: Provision VMs
    PVE->>Talos: Bootstrap cluster
    Talos->>Flux: Install GitOps
```
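At the "fill parameters" step, the scaffolder form enforces the JSON-Schema constraints declared in the template (pattern, enum, minimum/maximum) before any PR is opened. The same checks are easy to reproduce outside Backstage, for example in CI on the opened PR; a sketch, where the `validate_request` helper is mine but the constraints are copied from the template above:

```python
import re

# Constraints copied from the template's parameters section.
NAME_PATTERN = re.compile(r"^[a-z][a-z0-9-]+$")
ENVIRONMENTS = {"dev", "staging", "prod"}

def validate_request(name: str, environment: str, cp_count: int) -> list:
    """Return a list of constraint violations (empty list means valid)."""
    errors = []
    if not NAME_PATTERN.match(name):
        errors.append(f"invalid cluster name: {name!r}")
    if environment not in ENVIRONMENTS:
        errors.append(f"unknown environment: {environment!r}")
    if not 1 <= cp_count <= 5:
        errors.append(f"control plane count out of range: {cp_count}")
    return errors

print(validate_request("prod-k8s", "prod", 3))  # []
```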
Generation Script
The full script lives in `scripts/generate_backstage_catalog.py` (abridged here):

```python
#!/usr/bin/env python3
"""Generate Backstage catalog Resource entities from configuration YAML files."""
import yaml
from pathlib import Path

REPO_ROOT = Path(__file__).resolve().parent.parent


def load_yaml(path: Path) -> dict:
    return yaml.safe_load(path.read_text()) or {}


def build_resource_entity(resource_type, environment_name, config):
    """Build a Backstage Resource entity from configuration."""
    entity = {
        "apiVersion": "backstage.io/v1alpha1",
        "kind": "Resource",
        "metadata": {
            "name": config.get("name", environment_name),
            "description": config.get("description", ""),
            "annotations": {...},  # per-type extraction, elided
            "labels": {...},
        },
        "spec": {
            "type": RESOURCE_TYPE_META[resource_type]["backstage_type"],
            "lifecycle": "experimental",
            "owner": "group:default/homelab-admins",
            "system": "tf-infra-homelab",
        },
    }
    return entity


def main():
    for config_file in (REPO_ROOT / "configurations").rglob("*.yaml"):
        config = load_yaml(config_file)
        entity = build_resource_entity(...)
        output_file.write_text(yaml.dump(entity))
```

Run via Makefile:
```makefile
backstage-catalog:
	python3 scripts/generate_backstage_catalog.py
```

Filtering in Backstage
With the generated metadata, users can filter in Backstage:
| Filter | Use Case |
|---|---|
| `homelab.dev/environment=prod` | Production clusters |
| `homelab.dev/enabled=true` | Currently deployed |
| `homelab.dev/control-plane-count=3` | Full quorum |
| `homelab.dev/size-worker-memory>=16384` | Large workers |
| `homelab.dev/talos-version=v1.12.*` | Specific Talos version |
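Exact-match filters like the first three rows can also be sent to the Backstage catalog REST API as a `filter` query parameter. A sketch of building such a query (the `/api/catalog/entities` endpoint and the comma-as-AND filter syntax reflect a standard Backstage backend, but verify against your instance; range and wildcard matches like the last two rows are not expressible this way):

```python
from urllib.parse import urlencode

# Sketch: build a catalog API query URL from label filters. The endpoint
# path and filter syntax are assumptions based on a standard Backstage
# deployment; the base URL is a placeholder.
def catalog_query(base_url: str, labels: dict) -> str:
    conditions = ["kind=resource"] + [
        f"metadata.labels.{key}={value}" for key, value in labels.items()
    ]
    return f"{base_url}/api/catalog/entities?" + urlencode(
        {"filter": ",".join(conditions)}
    )

url = catalog_query("http://backstage.local:7007",
                    {"homelab.dev/environment": "prod"})
print(url)
```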
What Most People Get Wrong
- “Backstage is only for Kubernetes” — It catalogs anything. My LXC containers, VMs, and Docker clusters all have Backstage entries.
- “Templates replace code review” — My templates generate PRs. Human review still happens. Self-service ≠ unattended.
- “Catalog must be perfect at launch” — Start simple. The YAML-to-catalog pipeline can always regenerate entries.
When to Use / When NOT to Use
| Use Backstage | Use direct Terraform |
|---|---|
| Team self-service | Single admin |
| 10+ resources | <5 resources |
| Need catalog UI | CLI is enough |
What’s Next
Current areas of exploration:
- More templates — VM and LXC provisioning templates
- Approval workflows — notification to maintainers
- Status tracking — integration with Terraform Cloud state
The Backstage integration makes infrastructure self-serviceable while maintaining code review.