· 6 min read ·
Security · DevOps · CI/CD · Infrastructure

Kubernetes v1.33-1.35: The Security Features That Actually Matter for Production

Bound ServiceAccount tokens with node binding, recursive read-only mounts, finer-grained authorization via label and field selectors, anonymous request restrictions, and what is coming in v1.35 - the Kubernetes security improvements worth shipping in 2026.


Kubernetes releases three versions per year. Most security improvements land across multiple releases, are buried in CHANGELOG entries, and only reach “ship this to production” status when they graduate from alpha to stable. The CNCF published a definitive security feature roundup in December 2025 covering what graduated to stable and what is landing in the 2026 release cycle. Here is what matters for production clusters and why.

What Graduated to Stable in v1.33

Bound ServiceAccount Token Improvements

ServiceAccount tokens in older Kubernetes versions were long-lived JWTs without expiry and without binding to a specific node or pod. If a token was stolen - via a misconfigured application, a compromised container, or a leaked log file - it remained valid indefinitely and could be used from any cluster or node.

v1.33 completes the bound token work with two additions:

Unique token IDs. Each issued token now has a unique identifier stored in the token’s jti (JWT ID) claim. The API server can track issued tokens and, critically, revoke specific tokens without affecting others issued to the same service account. Previously, the only revocation option was deleting and recreating the service account.

Node binding. Tokens can now be bound to a specific node, not just a specific pod. A token that is node-bound is only valid when presented from that node. An attacker who exfiltrates a node-bound token and attempts to use it from another system - even another node in the same cluster - receives a rejection at authentication.

For workloads that run on fixed infrastructure (not batch jobs that move between nodes), enabling node binding substantially reduces the blast radius of credential theft.
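For context, this is how pods already consume bound, expiring tokens today - a projected serviceAccountToken volume (the audience, expiry, and names below are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  serviceAccountName: my-service-account
  containers:
    - name: app
      image: registry.example.com/app:1.0
      volumeMounts:
        - name: api-token
          mountPath: /var/run/secrets/tokens
          readOnly: true
  volumes:
    - name: api-token
      projected:
        sources:
          - serviceAccountToken:
              audience: https://internal-api.example.com
              expirationSeconds: 600   # kubelet rotates the token before expiry
              path: token
```

The kubelet requests this token bound to the pod and rotates it automatically; node binding extends the same issuance path.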

Node binding happens at token issuance, not on the ServiceAccount object itself. For tokens requested out of band, kubectl exposes it directly (the node name is illustrative):

kubectl create token my-service-account \
  --bound-object-kind Node \
  --bound-object-name worker-node-1 \
  --duration 1h

For tokens projected into pods, the kubelet handles the token request, so no application changes are required.

Recursive Read-Only Mounts

v1.33 graduates recursive read-only mounts to stable: a volume can be marked so that the mount itself and all sub-mounts within it are read-only, enforced at the kernel level rather than at the container runtime layer.

The security relevance: a read-only volume mount without recursiveReadOnly: Enabled can be subverted if the mounted directory contains sub-mounts that are writable. An exploit targeting write access to a “read-only” configuration volume could leverage a writable sub-mount to modify configuration or inject code.

volumeMounts:
  - name: config
    mountPath: /etc/app/config
    readOnly: true
    recursiveReadOnly: Enabled  # v1.33+

This is a one-line change in a pod spec that closes a write-path exploitation vector on any workload that mounts configuration or secret volumes.
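In context, a minimal pod spec (names illustrative); note that recursiveReadOnly requires readOnly: true on the same mount:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
    - name: app
      image: registry.example.com/app:1.0
      volumeMounts:
        - name: config
          mountPath: /etc/app/config
          readOnly: true              # prerequisite for recursiveReadOnly
          recursiveReadOnly: Enabled  # kubelet rejects the pod if the runtime cannot enforce it
  volumes:
    - name: config
      configMap:
        name: app-config
```

Enabled hard-fails rather than silently degrading; the IfPossible value instead applies the recursive read-only mode on a best-effort basis.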

What Graduated to Stable in v1.34

Finer-Grained Authorization via Selectors

The default RBAC model allows a principal to list or watch all resources of a given type. A principal with list on pods can enumerate all pods in the namespace, including their environment variables, image tags, and resource requests. This is a significant information disclosure in multi-tenant environments and a useful reconnaissance primitive for attackers with limited namespace access.

v1.34 adds field and label selector support to list, watch, and deletecollection authorization. The request's selectors are passed through as authorization attributes, so webhook authorizers and CEL-based authorization policies can grant a scoped view instead of all-or-nothing list access. (The in-tree RBAC authorizer does not evaluate selectors; the capability surfaces in the SubjectAccessReview API and the webhook/CEL authorization layer.)

A SubjectAccessReview for a selector-scoped list request looks like this (the principal and selector values are illustrative):

apiVersion: authorization.k8s.io/v1
kind: SubjectAccessReview
spec:
  user: system:serviceaccount:ci:deployer
  resourceAttributes:
    resource: pods
    verb: list
    namespace: prod
    # Stable in v1.34: selectors as authorization attributes
    fieldSelector:
      requirements:
        - key: spec.nodeName
          operator: In
          values: ["worker-node-1"]
    labelSelector:
      requirements:
        - key: app
          operator: In
          values: ["my-specific-app"]

A CI system that needs to watch pods in a deployment namespace no longer needs unrestricted list access to all pods - its authorizer can grant a selector-scoped view. An operator managing a specific application's pods does not see the adjacent team's pods in the same namespace.

This is a significant improvement to least-privilege authorization that was previously impossible without namespace-level isolation.

Anonymous Request Restrictions

v1.34 tightens what unauthenticated requests can reach. Anonymous access can now be restricted, via the API server's structured authentication configuration, to an explicit allowlist - typically the /healthz, /readyz, and /livez endpoints. All other API paths then require authentication.

Previously the choice was all-or-nothing: either disable anonymous authentication entirely (breaking unauthenticated health probes) or accept that the system:anonymous user had limited but non-trivial access, where misconfigured RBAC could inadvertently grant it more. The v1.34 restriction removes the ambiguity: beyond the allowlisted health endpoints, no unauthenticated request reaches your API.

This closes a class of Kubernetes misconfigurations that have appeared in breach reports where clusters with misconfigured RBAC exposed pod metadata, secret names, or configuration data to unauthenticated internet traffic.
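A sketch of the corresponding restriction in the API server's structured authentication configuration, passed via the --authentication-config flag (the exact apiVersion depends on your cluster version):

```yaml
apiVersion: apiserver.config.k8s.io/v1
kind: AuthenticationConfiguration
anonymous:
  enabled: true
  # Anonymous requests are permitted only for the listed paths;
  # every other path requires authentication.
  conditions:
    - path: /healthz
    - path: /readyz
    - path: /livez
```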

What Is Coming in v1.35

Pod Certificates for mTLS (Beta)

KEP-4317 enables the Kubernetes API to issue X.509 certificates per pod for workload-to-workload communication. Each pod gets a certificate from the cluster’s CA, rotated automatically by the kubelet, valid for the pod’s lifetime.

This is the building block for mTLS between services without a separate service mesh. Current service mesh implementations (Istio, Linkerd) handle certificate issuance as a sidecar concern. Pod certificates would make workload identity a first-class Kubernetes primitive - available to workloads that cannot or do not run sidecars.

For teams evaluating SPIFFE/SPIRE: Pod certificates in v1.35 use the SPIFFE SVID format (spiffe://cluster.local/ns/namespace/sa/serviceaccount), meaning they are interoperable with SPIFFE federation and can be used with AWS STS SVID exchange.

Kubelet Serving Certificate Validation Hardening (Alpha)

KEP-4872 addresses a long-standing weakness: kubelets serve their API over TLS, but the API server has historically not verified the kubelet’s serving certificate against a trusted CA in all configurations. In some configurations, an attacker performing a man-in-the-middle between the API server and a kubelet could intercept communications.

v1.35 alpha begins enforcing kubelet certificate validation by default, with a path to stable across subsequent releases. This is a breaking change for clusters running with self-signed kubelet certificates - they will need to migrate to cluster CA-signed certificates before the stable graduation.

The right time to assess your cluster’s kubelet certificate setup is now, before this graduates to stable and enforcement becomes non-optional.
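One migration path is to have kubelets request serving certificates from the cluster CA instead of self-signing, via the kubelet configuration (a sketch; your cluster must approve the resulting CSRs):

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# Request a serving certificate through the cluster's CSR API
# instead of generating a self-signed certificate at startup.
serverTLSBootstrap: true
```

Certificates for the kubernetes.io/kubelet-serving signer are not auto-approved by default, so plan for a CSR-approving controller or an explicit kubectl certificate approve step.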

The Practical Upgrade Path

For production clusters in 2026:

Enable immediately: The v1.33 and v1.34 stable features above are default-enabled or require only small pod spec changes. Anonymous request restrictions take a one-time API server authentication configuration on v1.34+ clusters and no workload changes.

Adopt on new workloads: Recursive read-only mounts and node-bound tokens are opt-in per pod spec. Add them to new workload definitions as part of your standard security baseline.

Audit authorization for selector opportunities: The v1.34 label and field selector support is only useful if your authorization layer - webhook or CEL policies - is updated to take advantage of it. Run kubectl auth can-i --list --as=<service-account> for your CI and operator service accounts to identify over-permissive list/watch grants.

Assess kubelet certificates now: Before KEP-4872 graduates to stable, verify that your kubelets are using cluster CA-signed certificates. kubectl get csr surfaces pending certificate signing requests; the kubelet configuration’s tlsCertFile and tlsPrivateKeyFile should reference CA-signed certificates, not self-signed ones.

The Kubernetes security improvements in v1.33-1.35 are not dramatic announcements. They are the unglamorous work of closing gaps that the threat model identified years ago and that the engineering complexity of Kubernetes made difficult to close cleanly. They are worth shipping.
