Create a Kind Kubernetes Cluster
with Cilium L2 Load Balancing
using Terraform

September 18, 2025

Preface

Welcome to the inaugural post of our new blog series dedicated to a deep dive into the world of modern infrastructure. In this series, we will explore the intricate and powerful landscape of technologies like Kubernetes and Istio, moving beyond the surface-level tutorials to uncover the nuanced technical details that drive these systems. Whether you are a seasoned DevOps engineer, a system architect, or a developer keen on understanding the backbone of your applications, this series will provide valuable insights, practical guidance, and a comprehensive understanding of the tools that are shaping the future of cloud-native computing. Join us as we embark on this journey to demystify complex concepts and empower you with the knowledge to build, manage, and scale resilient and efficient infrastructure.

Introduction

If you're working with a technology stack that uses Kubernetes as its core platform, you know how crucial it is to have a reliable and reproducible way to manage your clusters. This blog post is the first step in a series of discussions, and we'll start by building a foundation that you can use again and again. We'll walk through a simple, yet effective, process for creating a Kubernetes cluster on your local machine using Terraform. This setup won't just be for a one-time experiment; it's designed to be the consistent base for all our future conversations and explorations whenever we need a Kubernetes environment.

Kubernetes Cluster Setup

The base Kubernetes cluster will be provisioned with Terraform, using Kind as the cluster runtime and Cilium as the CNI, with Cilium L2 announcements providing load balancing.

Prerequisites

Tools required:
  1. tfswitch (to install and switch between Terraform versions)
  2. Terraform CLI

Kind Cluster

00-providers.tf
terraform {
  # List of Terraform versions can be found at https://releases.hashicorp.com/terraform/
  required_version = "~> 1.13.2"

  required_providers {
    kind = {
      # tehcyx/kind provider can be found at https://registry.terraform.io/providers/tehcyx/kind/
      source  = "tehcyx/kind"
      version = "~> 0.9.0"
    }
  }
}

provider "kind" {}

01-kind-cluster.tf
resource "kind_cluster" "kind" {
  name           = var.cluster_name
  wait_for_ready = true

  kind_config {
    kind        = "Cluster"
    api_version = "kind.x-k8s.io/v1alpha4"

    networking {
      disable_default_cni = true
      kube_proxy_mode     = "none"
      api_server_port     = 6443
    }

    node {
      role  = "control-plane"
      image = var.kindest_image

      kubeadm_config_patches = [
        <<-EOT
        kind: KubeletConfiguration
        serverTLSBootstrap: true
        EOT
        ,
        <<-EOT
        kind: InitConfiguration
        nodeRegistration:
          kubeletExtraArgs:
            node-labels: "ingress-ready=true"
        EOT
      ]
    }

    dynamic "node" {
      for_each = var.worker_nodes
      content {
        role  = node.value
        image = var.kindest_image
      }
    }
  }

  provisioner "local-exec" {
    command = "for kubeletcsr in $(kubectl -n kube-system get csr | grep kubernetes.io/kubelet-serving | awk '{ print $1 }'); do kubectl certificate approve $kubeletcsr; done"
  }
}
  • Set disable_default_cni to true so Cilium can be installed as the CNI
  • Set kube_proxy_mode to "none" so Cilium can act as the kube-proxy replacement
  • Set serverTLSBootstrap to true to work around kubelet serving certificates not containing IP SANs; the local-exec provisioner then approves the resulting kubelet CSRs. Reference: https://www.zeng.dev/post/2023-kubeadm-enable-kubelet-serving-certs/
  • Add the node-labels: "ingress-ready=true" kubeadm patch on the control-plane node for Cilium L2 load balancing
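Once applied, the cluster's connection details can be surfaced as Terraform outputs. A minimal sketch, assuming the `endpoint` and `kubeconfig_path` attributes documented for the tehcyx/kind provider:

```hcl
# Convenience outputs (attribute names assumed from the tehcyx/kind provider docs)
output "api_endpoint" {
  description = "Kind API server endpoint"
  value       = kind_cluster.kind.endpoint
}

output "kubeconfig_path" {
  description = "Path to the generated kubeconfig"
  value       = kind_cluster.kind.kubeconfig_path
}
```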

99-variables.tf
variable "cluster_name" {
  description = "Kind cluster name"
  type        = string
  default     = "kind"
}

variable "worker_nodes" {
  description = "Type of Kind nodes"
  type        = list(string)
  default     = ["worker", "worker", "worker"]
}

variable "kindest_image" {
  # kindest/node container image can be found at https://hub.docker.com/r/kindest/node/
  description = "kindest/node image"
  type        = string
  default     = "kindest/node:v1.34.0@sha256:7416a61b42b1662ca6ca89f02028ac133a309a2a30ba309614e8ec94d976dc5a"
}
  • Feel free to change the number of worker nodes by adding or removing "worker" entries in the worker_nodes list
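For example, a lighter single-worker cluster could be requested by overriding the defaults in a terraform.tfvars file (a hypothetical override, not part of the original setup):

```hcl
# terraform.tfvars — example override for a smaller cluster
cluster_name = "kind"
worker_nodes = ["worker"]
```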

Cilium

00-providers.tf
terraform {
  # List of Terraform versions can be found at https://releases.hashicorp.com/terraform/
  required_version = "~> 1.13.2"

  required_providers {
    # Hashicorp Kubernetes provider can be found at https://registry.terraform.io/providers/hashicorp/kubernetes
    kubernetes = {
      source  = "hashicorp/kubernetes"
      version = "~> 2.38.0"
    }

    # Hashicorp Helm provider can be found at https://registry.terraform.io/providers/hashicorp/helm
    helm = {
      source  = "hashicorp/helm"
      version = "~> 3.0.2"
    }
  }
}

provider "kubernetes" {
  config_path    = "~/.kube/config"
  config_context = var.cluster_context
}

provider "helm" {
  kubernetes = {
    config_path    = "~/.kube/config"
    config_context = var.cluster_context
  }
}

01-cilium.tf
resource "helm_release" "cilium" {
  name       = var.cilium_helm.chart
  repository = var.cilium_helm.url
  chart      = var.cilium_helm.chart
  version    = var.cilium_helm.version
  namespace  = var.cilium_namespace
  timeout    = 1800

  values = [
    file("./values.yaml")
  ]
}

values.yaml
k8sServiceHost: kind-control-plane
k8sServicePort: 6443
kubeProxyReplacement: "true"
hostServices:
  enabled: false
externalIPs:
  enabled: true
nodePort:
  enabled: true
hostPort:
  enabled: true
image:
  pullPolicy: IfNotPresent
operator:
  replicas: 1
cni:
  exclusive: false
ipam:
  mode: "kubernetes"
autoDirectNodeRoutes: true
devices: "eth0"
routingMode: "native"
ipv4NativeRoutingCIDR: "10.244.0.0/16"
socketLB:
  hostNamespaceOnly: true
l2announcements:
  enabled: true
  leaseDuration: "3s"
  leaseRenewDeadline: "1s"
  leaseRetryPeriod: "500ms"
hubble:
  enabled: false
  relay:
    enabled: false
  ui:
    enabled: false
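If you prefer keeping everything in Terraform, the same settings could instead be passed inline with yamlencode rather than a separate values file. A partial sketch (only a few of the values above are shown; this is an alternative, not a drop-in addition to the helm_release already defined):

```hcl
# Alternative to values.yaml: inline Helm values via yamlencode (partial sketch)
resource "helm_release" "cilium_inline" {
  name       = "cilium"
  repository = "https://helm.cilium.io/"
  chart      = "cilium"
  version    = "1.18.1"
  namespace  = "kube-system"

  values = [yamlencode({
    k8sServiceHost       = "kind-control-plane"
    k8sServicePort       = 6443
    kubeProxyReplacement = "true"
    l2announcements      = { enabled = true }
  })]
}
```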

99-variables.tf
variable "cluster_context" {
  description = "Kubernetes cluster to install Cilium"
  type        = string
  default     = "kind-kind"
}

# Cilium helm chart versions can be found at https://github.com/cilium/charts
variable "cilium_helm" {
  description = "Cilium helm chart"
  type        = map(string)
  default     = {
    type    = "helm"
    url     = "https://helm.cilium.io/"
    chart   = "cilium"
    version = "1.18.1"
  }
}

variable "cilium_namespace" {
  description = "Kubernetes namespace to install Cilium"
  type        = string
  default     = "kube-system"
}

Cilium L2 Load Balancing

00-providers.tf
terraform {
  # List of Terraform versions can be found at https://releases.hashicorp.com/terraform/
  required_version = "~> 1.13.2"

  required_providers {
    # Hashicorp Kubernetes provider can be found at https://registry.terraform.io/providers/hashicorp/kubernetes
    kubernetes = {
      source  = "hashicorp/kubernetes"
      version = "~> 2.38.0"
    }
  }
}

provider "kubernetes" {
  config_path    = "~/.kube/config"
  config_context = var.cluster_context
}

01-cilium-lb.tf
resource "kubernetes_manifest" "cilium_lb_pool" {
  manifest = {
    apiVersion = "cilium.io/v2alpha1"
    kind       = "CiliumLoadBalancerIPPool"
    metadata = {
      name = "lb-pool-1"
    }
    spec = {
      blocks = [
        {
          cidr = "172.18.250.0/24"
        }
      ]
    }
  }
}

resource "kubernetes_manifest" "cilium_l2_announcement_policy" {
  manifest = {
    apiVersion = "cilium.io/v2alpha1"
    kind       = "CiliumL2AnnouncementPolicy"
    metadata = {
      name = "announcement-policy-1"
    }
    spec = {
      externalIPs     = false
      loadBalancerIPs = true
      interfaces      = ["^eth[0-9]+"]
      nodeSelector = {
        matchExpressions = [
          {
            key      = "node-role.kubernetes.io/control-plane"
            operator = "DoesNotExist"
          }
        ]
      }
    }
  }
}

99-variables.tf
variable "cluster_context" {
  description = "Kubernetes cluster to install Cilium L2 configurations"
  type        = string
  default     = "kind-kind"
}
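To check that the pool and announcement policy work, a throwaway LoadBalancer Service can be applied; it should be assigned an address from 172.18.250.0/24. A minimal sketch (the "lb-smoke-test" name and selector are hypothetical, and no backing Deployment is defined here, so the Service gets an IP but no endpoints):

```hcl
# Hypothetical smoke test: a LoadBalancer Service that should receive an
# address from lb-pool-1 (172.18.250.0/24) once the L2 policy is active.
resource "kubernetes_service_v1" "lb_smoke_test" {
  metadata {
    name      = "lb-smoke-test"
    namespace = "default"
  }
  spec {
    type     = "LoadBalancer"
    selector = { app = "lb-smoke-test" }
    port {
      port        = 80
      target_port = 80
    }
  }
}
```

After applying, the assigned external IP should appear under the Service's status (e.g. via kubectl get svc lb-smoke-test).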