<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
    <channel>
        <title>Gitops on zharif.my</title>
        <link>https://zharif.my/tags/gitops/</link>
        <description>Recent content in Gitops on zharif.my</description>
        <generator>Hugo -- gohugo.io</generator>
        <language>en-us</language>
        <lastBuildDate>Sat, 28 Mar 2026 00:00:00 +0000</lastBuildDate><atom:link href="https://zharif.my/tags/gitops/index.xml" rel="self" type="application/rss+xml" /><item>
        <title>Talos Kubernetes on Proxmox</title>
        <link>https://zharif.my/posts/talos-kubernetes-proxmox/</link>
        <pubDate>Sat, 28 Mar 2026 00:00:00 +0000</pubDate>
        
        <guid>https://zharif.my/posts/talos-kubernetes-proxmox/</guid>
        <description>&lt;img src="https://images.unsplash.com/photo-1667372459510-55b5e2087cd0?w=800&amp;h=400&amp;fit=crop" alt="Featured image of post Talos Kubernetes on Proxmox" /&gt;&lt;h2 id=&#34;why-talos&#34;&gt;Why Talos
&lt;/h2&gt;&lt;p&gt;Standing up Kubernetes by hand, or even with kubeadm, means 15+ steps and manual certificate management. Talos gives you a declarative cluster that manages its own certificates, API server rotation, and upgrades — all through a single machine config.&lt;/p&gt;
&lt;p&gt;The trade-off: Talos is opinionated. There is no SSH, no shell, and no package manager; everything goes through the Talos API. But for infrastructure that should just work, this is a feature.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Air-gapped requirement&lt;/strong&gt;: My homelab can&amp;rsquo;t reach public registries. Every container pull is redirected through my Harbor mirror. Talos&amp;rsquo;s registry mirror config makes this seamless.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;Talos automatically rotates certificates before they expire. No manual intervention needed for cluster certificate management.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;h2 id=&#34;module-capabilities&#34;&gt;Module Capabilities
&lt;/h2&gt;&lt;p&gt;The &lt;code&gt;tf-module-proxmox-talos&lt;/code&gt; module provisions a complete Talos-based Kubernetes cluster on Proxmox VE in a single Terraform apply:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Talos Image Factory&lt;/strong&gt; — generates custom ISOs with specific extensions&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Machine Configuration&lt;/strong&gt; — generates Talos machine configs with networking&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;ISO Upload&lt;/strong&gt; — downloads and uploads to Proxmox datastore&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Node Provisioning&lt;/strong&gt; — provisions control plane and worker VMs across host pool&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Cluster Bootstrap&lt;/strong&gt; — applies machine configs and bootstraps Kubernetes&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Day-0 GitOps&lt;/strong&gt; — optionally installs Flux or Argo CD during bootstrap&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Registry Mirrors&lt;/strong&gt; — configures container registry redirects&lt;/li&gt;
&lt;/ol&gt;
&lt;h2 id=&#34;quick-start&#34;&gt;Quick Start
&lt;/h2&gt;&lt;pre class=&#34;chroma&#34;&gt;&lt;code class=&#34;language-hcl&#34;&gt;module &amp;#34;talos_cluster&amp;#34; {
  source  = &amp;#34;registry.example.com/namespace/tf-module-proxmox-talos/talos&amp;#34;
  version = &amp;#34;1.2.1&amp;#34;

  configuration = {
    cluster = {
      name = &amp;#34;prod-k8s&amp;#34;
      datastore = { id = &amp;#34;nas&amp;#34;, node = &amp;#34;alpha&amp;#34; }
      talos = { version = &amp;#34;v1.12.4&amp;#34; }
      kubernetes_version = &amp;#34;v1.35.0&amp;#34;
    }

    host_pool = {
      alpha = { datastore_id = &amp;#34;local-lvm&amp;#34; }
      charlie = { datastore_id = &amp;#34;local-lvm&amp;#34; }
      foxtrot = { datastore_id = &amp;#34;local-lvm&amp;#34; }
    }

    control_plane_nodes = {
      nodes = [
        { size = &amp;#34;control_plane&amp;#34;, networks = { dmz = { address = &amp;#34;192.168.62.21/24&amp;#34;, gateway = &amp;#34;192.168.62.1&amp;#34; } } }
      ]
      host_pool = [&amp;#34;alpha&amp;#34;, &amp;#34;charlie&amp;#34;, &amp;#34;foxtrot&amp;#34;]
      vip = { enabled = true, address = &amp;#34;192.168.62.20&amp;#34; }
    }

    worker_nodes = {
      nodes = [
        { size = &amp;#34;worker&amp;#34;, networks = { dmz = { address = &amp;#34;192.168.62.24/24&amp;#34;, gateway = &amp;#34;192.168.62.1&amp;#34; } } }
      ]
      host_pool = [&amp;#34;alpha&amp;#34;, &amp;#34;charlie&amp;#34;, &amp;#34;foxtrot&amp;#34;]
    }

    node_size_configuration = {
      control_plane = { cpu = 4, memory = 8192, os_disk = 128 }
      worker = { cpu = 10, memory = 49152, os_disk = 128, data_disk = 512 }
    }
  }
}&lt;/code&gt;&lt;/pre&gt;&lt;h2 id=&#34;talos-image-factory&#34;&gt;Talos Image Factory
&lt;/h2&gt;&lt;p&gt;The module uses Talos&amp;rsquo;s image factory to generate custom ISOs with specific extensions:&lt;/p&gt;
&lt;pre class=&#34;chroma&#34;&gt;&lt;code class=&#34;language-hcl&#34;&gt;# image.tf
resource &amp;#34;talos_image_factory_schematic&amp;#34; &amp;#34;this&amp;#34; {
  schematic = yamlencode({
    customization = {
      systemExtensions = {
        officialExtensions = data.talos_image_factory_extensions_versions.this.extensions_info[*].name
      }
    }
  })
}&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;The extensions are defined in locals:&lt;/p&gt;
&lt;pre class=&#34;chroma&#34;&gt;&lt;code class=&#34;language-hcl&#34;&gt;locals {
  image = {
    platform = &amp;#34;nocloud&amp;#34;
    customizations = {
      base = [
        &amp;#34;lldp&amp;#34;,             # Network topology discovery
        &amp;#34;qemu-guest-agent&amp;#34;, # Proxmox agent integration
        &amp;#34;util-linux-tools&amp;#34;, # Core utilities
        &amp;#34;iscsi-tools&amp;#34;,      # iSCSI block storage
        &amp;#34;nfs-utils&amp;#34;         # NFS mounting
      ]
    }
  }
}&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;The generated schematic ID is used to construct the ISO URL:&lt;/p&gt;
&lt;pre class=&#34;chroma&#34;&gt;&lt;code class=&#34;language-hcl&#34;&gt;resource &amp;#34;proxmox_download_file&amp;#34; &amp;#34;talos_iso&amp;#34; {
  file_name = &amp;#34;talos-${var.configuration.cluster.name}-${var.configuration.cluster.talos.version}-${data.talos_image_factory_urls.this.schematic_id}.iso&amp;#34;

  # Parentheses let the conditional span multiple lines
  url = (
    var.configuration.cluster.talos.iso_mirror != null
    ? replace(data.talos_image_factory_urls.this.urls.iso, &amp;#34;https://&amp;#34;, var.configuration.cluster.talos.iso_mirror)
    : data.talos_image_factory_urls.this.urls.iso
  )
}&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;This allows using mirror registries for air-gapped environments.&lt;/p&gt;
&lt;h2 id=&#34;machine-configuration&#34;&gt;Machine Configuration
&lt;/h2&gt;&lt;p&gt;Talos machine configuration is generated through the Talos provider:&lt;/p&gt;
&lt;pre class=&#34;chroma&#34;&gt;&lt;code class=&#34;language-hcl&#34;&gt;data &amp;#34;talos_machine_configuration&amp;#34; &amp;#34;configurations&amp;#34; {
  cluster_name     = var.configuration.cluster.name
  cluster_endpoint = &amp;#34;https://${var.configuration.control_plane_nodes.vip.address}:6443&amp;#34;

  # Control plane specific config
  machine_type    = &amp;#34;controlplane&amp;#34;
  machine_secrets = talos_machine_secrets.this.machine_secrets

  # Talos and Kubernetes versions
  talos_version      = var.configuration.cluster.talos.version
  kubernetes_version = var.configuration.cluster.kubernetes_version

  # Network configuration is injected as a config patch
  config_patches = [
    yamlencode({
      machine = {
        network = {
          interfaces = [
            for name, network in var.configuration.control_plane_nodes.nodes[0].networks : {
              interface = name
              dhcp      = false
              addresses = [network.address]
            }
          ]
        }
      }
    })
  ]
}&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;The configuration supports:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Multiple network interfaces per node&lt;/li&gt;
&lt;li&gt;Registry mirrors for all major registries&lt;/li&gt;
&lt;li&gt;Custom CNI (Cilium) configuration&lt;/li&gt;
&lt;li&gt;Disabling kube-proxy&lt;/li&gt;
&lt;/ul&gt;
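&lt;p&gt;The CNI and kube-proxy options translate into a Talos machine config patch along these lines (a sketch of the relevant Talos config keys):&lt;/p&gt;
&lt;pre class=&#34;chroma&#34;&gt;&lt;code class=&#34;language-yaml&#34;&gt;# Talos machine config fragment: hand networking over to a custom CNI
cluster:
  network:
    cni:
      name: none   # disable the Talos-managed default CNI
  proxy:
    disabled: true # the CNI kube-proxy replacement takes over
&lt;/code&gt;&lt;/pre&gt;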
&lt;h3 id=&#34;registry-mirrors&#34;&gt;Registry Mirrors
&lt;/h3&gt;&lt;p&gt;A key feature is container registry mirror configuration:&lt;/p&gt;
&lt;pre class=&#34;chroma&#34;&gt;&lt;code class=&#34;language-hcl&#34;&gt;configuration = {
  cluster = {
    registry_mirrors = {
      &amp;#34;ghcr.io&amp;#34; = {
        endpoints = [&amp;#34;https://harbor.example.com/v2/gh&amp;#34;]
        override_path = true
      }
      &amp;#34;registry.k8s.io&amp;#34; = {
        endpoints = [&amp;#34;https://harbor.example.com/v2/k8s&amp;#34;]
        override_path = true
      }
      &amp;#34;docker.io&amp;#34; = {
        endpoints = [&amp;#34;https://harbor.example.com/v2/dh&amp;#34;]
        override_path = true
      }
      &amp;#34;quay.io&amp;#34; = {
        endpoints = [&amp;#34;https://harbor.example.com/v2/qi&amp;#34;]
        override_path = true
      }
      &amp;#34;factory.talos.dev&amp;#34; = {
        endpoints = [&amp;#34;https://harbor.example.com/v2/talos&amp;#34;]
        override_path = true
      }
    }
  }
}&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;All container pulls route through my Harbor registry — essential for air-gapped homelabs.&lt;/p&gt;
&lt;h2 id=&#34;multi-network-support&#34;&gt;Multi-Network Support
&lt;/h2&gt;&lt;p&gt;The module provisions VMs with multiple network interfaces:&lt;/p&gt;
&lt;pre class=&#34;chroma&#34;&gt;&lt;code class=&#34;language-hcl&#34;&gt;network_devices = [
  for network_name, network in each.value.networks : {
    name    = network_name
    enabled = true
    bridge  = network_name
    ipv4 = {
      address = network.address
      gateway = network.gateway
    }
  }
]&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;My production setup uses:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;dmz&lt;/strong&gt; — frontend network with gateway (192.168.62.0/24)&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;vmbr1&lt;/strong&gt; — backend network for inter-node communication (192.168.192.0/24)&lt;/li&gt;
&lt;/ul&gt;
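&lt;p&gt;A worker attached to both networks would look like this (addresses illustrative, and assuming the module treats the gateway as optional per network):&lt;/p&gt;
&lt;pre class=&#34;chroma&#34;&gt;&lt;code class=&#34;language-hcl&#34;&gt;nodes = [
  {
    size = &amp;#34;worker&amp;#34;
    networks = {
      dmz   = { address = &amp;#34;192.168.62.24/24&amp;#34;, gateway = &amp;#34;192.168.62.1&amp;#34; }
      vmbr1 = { address = &amp;#34;192.168.192.24/24&amp;#34; } # backend net, no gateway
    }
  }
]&lt;/code&gt;&lt;/pre&gt;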
&lt;h2 id=&#34;cluster-bootstrap&#34;&gt;Cluster Bootstrap
&lt;/h2&gt;&lt;p&gt;The bootstrap sequence is orchestrated by Terraform:&lt;/p&gt;
&lt;pre class=&#34;chroma&#34;&gt;&lt;code class=&#34;language-hcl&#34;&gt;# 1. Generate machine secrets
resource &amp;#34;talos_machine_secrets&amp;#34; &amp;#34;this&amp;#34; {}

# 2. Apply control plane configuration
resource &amp;#34;talos_machine_configuration_apply&amp;#34; &amp;#34;controlplane&amp;#34; {
  for_each = { for idx, node in var.configuration.control_plane_nodes.nodes : idx =&amp;gt; node }

  node                        = module.control_plane_virtual_machine[each.key].virtual_machine.id
  client_configuration        = talos_machine_secrets.this.client_configuration
  machine_configuration_input = data.talos_machine_configuration.configurations[each.key].machine_configuration
}

# 3. Bootstrap the cluster (runs once, against the first control plane node)
resource &amp;#34;talos_machine_bootstrap&amp;#34; &amp;#34;this&amp;#34; {
  node                 = var.configuration.control_plane_nodes.nodes[0].name
  client_configuration = talos_machine_secrets.this.client_configuration
}&lt;/code&gt;&lt;/pre&gt;&lt;h2 id=&#34;gitops-bootstrap&#34;&gt;GitOps Bootstrap
&lt;/h2&gt;&lt;p&gt;One of the most powerful features — Flux or Argo CD can be bootstrapped during cluster creation:&lt;/p&gt;
&lt;pre class=&#34;chroma&#34;&gt;&lt;code class=&#34;language-hcl&#34;&gt;configuration = {
  cluster = {
    gitops = {
      provider      = &amp;#34;flux&amp;#34;  # or &amp;#34;argocd&amp;#34;
      namespace     = &amp;#34;flux-system&amp;#34;
      chart_version = &amp;#34;2.18.2&amp;#34;
      
      bootstrap = {
        repo_url              = &amp;#34;https://github.com/your-org/applications.git&amp;#34;
        revision              = &amp;#34;main&amp;#34;
        path                  = &amp;#34;src/k8s/prod&amp;#34;
        destination_namespace = &amp;#34;homelab&amp;#34;
      }
    }
  }
}&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;This does:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Installs Flux during Talos bootstrap (via inline manifest)&lt;/li&gt;
&lt;li&gt;Configures it to sync from the applications-homelab repository&lt;/li&gt;
&lt;li&gt;Starts deploying apps immediately after the cluster boots&lt;/li&gt;
&lt;/ol&gt;
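&lt;p&gt;Under the hood this amounts to rendering something like the following Flux objects (a sketch; the exact manifests are generated by the module, and &lt;code&gt;destination_namespace&lt;/code&gt; is assumed to map to &lt;code&gt;targetNamespace&lt;/code&gt;):&lt;/p&gt;
&lt;pre class=&#34;chroma&#34;&gt;&lt;code class=&#34;language-yaml&#34;&gt;apiVersion: source.toolkit.fluxcd.io/v1
kind: GitRepository
metadata:
  name: applications
  namespace: flux-system
spec:
  url: https://github.com/your-org/applications.git
  ref:
    branch: main
  interval: 1m
---
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: applications
  namespace: flux-system
spec:
  sourceRef:
    kind: GitRepository
    name: applications
  path: ./src/k8s/prod
  targetNamespace: homelab
  prune: true
  interval: 10m
&lt;/code&gt;&lt;/pre&gt;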
&lt;pre class=&#34;mermaid&#34;&gt;
  sequenceDiagram
    participant TF as Terraform
    participant Talos as Talos
    participant Flux as Flux
    participant GH as GitHub
    participant K8s as Kubernetes
    
    TF-&amp;gt;&amp;gt;Talos: Apply machine config
    Talos-&amp;gt;&amp;gt;Talos: Bootstrap control plane
    Talos-&amp;gt;&amp;gt;Flux: Install Flux CRDs
    Flux-&amp;gt;&amp;gt;GH: Clone applications-homelab
    GH--&amp;gt;&amp;gt;Flux: Return repo contents
    Flux-&amp;gt;&amp;gt;K8s: Deploy applications
&lt;/pre&gt;

&lt;h2 id=&#34;cilium-integration&#34;&gt;Cilium Integration
&lt;/h2&gt;&lt;p&gt;For advanced networking, the Talos default CNI (Flannel) can be replaced with Cilium:&lt;/p&gt;
&lt;pre class=&#34;chroma&#34;&gt;&lt;code class=&#34;language-hcl&#34;&gt;configuration = {
  cluster = {
    # Disable Talos-managed CNI
    options = {
      disable_default_cni = true
      disable_kube_proxy = true
    }
    
    # Configure Cilium via helm values
    helm_values_override = {
      cilium = {
        operator = { replicas = 1 }
      }
    }
  }
}&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;The module uses the Helm provider to template the Cilium manifest:&lt;/p&gt;
&lt;pre class=&#34;chroma&#34;&gt;&lt;code class=&#34;language-hcl&#34;&gt;data &amp;#34;helm_template&amp;#34; &amp;#34;cilium&amp;#34; {
  name       = &amp;#34;cilium&amp;#34;
  repository = &amp;#34;https://helm.cilium.io&amp;#34;
  chart      = &amp;#34;cilium&amp;#34;
  # Must be a Cilium chart release, not the Talos version
  # (cluster.cilium.version is an illustrative input name)
  version    = var.configuration.cluster.cilium.version
  namespace  = &amp;#34;cilium&amp;#34;
  values     = [yamlencode(var.configuration.cluster.helm_values_override)]
}&lt;/code&gt;&lt;/pre&gt;&lt;h2 id=&#34;node-sizing&#34;&gt;Node Sizing
&lt;/h2&gt;&lt;p&gt;The &lt;code&gt;node_size_configuration&lt;/code&gt; block keeps definitions DRY:&lt;/p&gt;
&lt;pre class=&#34;chroma&#34;&gt;&lt;code class=&#34;language-hcl&#34;&gt;node_size_configuration = {
  control_plane = {
    cpu     = 4
    memory  = 8192  # MB
    os_disk = 128   # GB
  }
  worker = {
    cpu       = 10
    memory    = 49152  # MB
    os_disk   = 128    # GB
    data_disk = 512    # GB, extra data disk for PVs
  }
}&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;My prod-k8s cluster:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;3 control plane nodes: 4 vCPU, 8GB RAM, 128GB disk&lt;/li&gt;
&lt;li&gt;3 worker nodes: 10 vCPU, 48GB RAM, 128GB OS + 512GB data&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id=&#34;host-pool-scheduling&#34;&gt;Host Pool Scheduling
&lt;/h2&gt;&lt;p&gt;VMs are distributed across Proxmox nodes via modulo arithmetic:&lt;/p&gt;
&lt;pre class=&#34;chroma&#34;&gt;&lt;code class=&#34;language-hcl&#34;&gt;# In nodes.tf
# for_each keys are strings, so convert before the modulo
node_name = var.configuration.control_plane_nodes.host_pool[
  tonumber(each.key) % length(var.configuration.control_plane_nodes.host_pool)
]&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;With 3 hosts in the pool and 6 worker indices:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Worker 0 → alpha (0 % 3)&lt;/li&gt;
&lt;li&gt;Worker 1 → charlie (1 % 3)&lt;/li&gt;
&lt;li&gt;Worker 2 → foxtrot (2 % 3)&lt;/li&gt;
&lt;li&gt;Worker 3 → alpha (3 % 3)&lt;/li&gt;
&lt;li&gt;Worker 4 → charlie (4 % 3)&lt;/li&gt;
&lt;li&gt;Worker 5 → foxtrot (5 % 3)&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;This ensures even distribution of VMs across the Proxmox hosts.&lt;/p&gt;
&lt;h2 id=&#34;outputs&#34;&gt;Outputs
&lt;/h2&gt;&lt;p&gt;The module returns cluster credentials for external use:&lt;/p&gt;
&lt;pre class=&#34;chroma&#34;&gt;&lt;code class=&#34;language-hcl&#34;&gt;output &amp;#34;cluster_credentials&amp;#34; {
  value = {
    kubeconfig  = talos_cluster_kubeconfig.this.kubeconfig_raw
    talosconfig = data.talos_client_configuration.this.talos_config

    # Config files are also written locally when debug = true
    talosconfig_path = local.talosconfig_path
    kubeconfig_path  = local.kubeconfig_path
  }
}&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;Credentials are automatically stored in Bitwarden:&lt;/p&gt;
&lt;pre class=&#34;chroma&#34;&gt;&lt;code class=&#34;language-hcl&#34;&gt;resource &amp;#34;bitwarden-secrets_secret&amp;#34; &amp;#34;kubernetes_kubeconfig&amp;#34; {
  key   = &amp;#34;${local.cluster_name}-kubeconfig&amp;#34;
  value = module.kubernetes[0].cluster_credentials.kubeconfig
}

resource &amp;#34;bitwarden-secrets_secret&amp;#34; &amp;#34;kubernetes_talosconfig&amp;#34; {
  key   = &amp;#34;${local.cluster_name}-talosconfig&amp;#34;
  value = module.kubernetes[0].cluster_credentials.talosconfig
}&lt;/code&gt;&lt;/pre&gt;&lt;h2 id=&#34;my-production-configuration&#34;&gt;My Production Configuration
&lt;/h2&gt;&lt;p&gt;Here&amp;rsquo;s the actual production YAML configuration:&lt;/p&gt;
&lt;pre class=&#34;chroma&#34;&gt;&lt;code class=&#34;language-yaml&#34;&gt;# configurations/kubernetes/prod-k8s.yaml
cluster:
  name: prod-k8s
  datastore:
    id: nas
    node: alpha
  talos:
    version: v1.12.4
    installer_mirror: harbor.example.com/talos
    iso_mirror: https://proxy.example.com/
  kubernetes_version: v1.35.0
  registry_mirrors:
    ghcr.io: { endpoints: [https://harbor.example.com/v2/gh], override_path: true }
    registry.k8s.io: { endpoints: [https://harbor.example.com/v2/k8s], override_path: true }
    docker.io: { endpoints: [https://harbor.example.com/v2/dh], override_path: true }
    quay.io: { endpoints: [https://harbor.example.com/v2/qi], override_path: true }
    factory.talos.dev: { endpoints: [https://harbor.example.com/v2/talos], override_path: true }
  options:
    disable_default_cni: true
    disable_kube_proxy: true
    disable_scheduling_on_control_plane: true
  gitops:
    provider: flux
    bootstrap:
      repo_url: https://github.com/your-org/applications.git
      path: src/k8s/prod
      destination_namespace: homelab

host_pool:
  alpha: { datastore_id: local-lvm }
  charlie: { datastore_id: local-lvm }
  foxtrot: { datastore_id: local-lvm }

control_plane_nodes:
  nodes: [...]  # 3 control planes
  host_pool: [alpha, charlie, foxtrot]
  vip: { enabled: true, address: 192.168.62.20 }

worker_nodes:
  nodes: [...]  # 3 workers
  host_pool: [alpha, charlie, foxtrot]

node_size_configuration:
  control_plane: { cpu: 4, memory: 8192, os_disk: 128 }
  worker: { cpu: 10, memory: 49152, os_disk: 128, data_disk: 512 }&lt;/code&gt;&lt;/pre&gt;&lt;h2 id=&#34;whats-next&#34;&gt;What&amp;rsquo;s Next
&lt;/h2&gt;&lt;p&gt;Current areas of exploration:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Multi-cluster federation&lt;/strong&gt; — connecting Talos clusters for workload distribution&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Nested Talos&lt;/strong&gt; — running Talos inside Proxmox for testing&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Observability&lt;/strong&gt; — centralized logging with Loki and Grafana&lt;/li&gt;
&lt;/ol&gt;
&lt;h2 id=&#34;what-most-people-get-wrong&#34;&gt;What Most People Get Wrong
&lt;/h2&gt;&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;&amp;ldquo;Talos upgrades break clusters&amp;rdquo;&lt;/strong&gt; — With proper machine configs and registry mirrors, upgrades are rolling. The immutability is a feature, not a bug.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;&amp;ldquo;Air-gapped is impossible&amp;rdquo;&lt;/strong&gt; — Talos&amp;rsquo; registry mirror config + image factory handles this. Your nodes don&amp;rsquo;t need public internet access.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;&amp;ldquo;No shell means no debugging&amp;rdquo;&lt;/strong&gt; — Talos still runs the kubelet; what it drops is SSH and a shell. Built-in &lt;code&gt;talosctl logs&lt;/code&gt; and &lt;code&gt;talosctl dashboard&lt;/code&gt; cover node-level inspection. It&amp;rsquo;s different from SSH-based debugging, not less capable.&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;h2 id=&#34;when-to-use--when-not-to-use&#34;&gt;When to Use / When NOT to Use
&lt;/h2&gt;&lt;table&gt;
  &lt;thead&gt;
      &lt;tr&gt;
          &lt;th&gt;Use Talos&lt;/th&gt;
          &lt;th&gt;Stick with kubeadm&lt;/th&gt;
      &lt;/tr&gt;
  &lt;/thead&gt;
  &lt;tbody&gt;
      &lt;tr&gt;
          &lt;td&gt;Want declarative infrastructure&lt;/td&gt;
          &lt;td&gt;Need full kubelet control&lt;/td&gt;
      &lt;/tr&gt;
      &lt;tr&gt;
          &lt;td&gt;Air-gapped environments&lt;/td&gt;
          &lt;td&gt;Custom init systems required&lt;/td&gt;
      &lt;/tr&gt;
      &lt;tr&gt;
          &lt;td&gt;Single apply to cluster&lt;/td&gt;
          &lt;td&gt;Manual certificate management needed&lt;/td&gt;
      &lt;/tr&gt;
  &lt;/tbody&gt;
&lt;/table&gt;
&lt;p&gt;The foundation is solid — every cluster can be versioned, reviewed, and rolled back.&lt;/p&gt;
</description>
        </item>
        
    </channel>
</rss>
