What’s New in Kubernetes 1.31: AI, ML, Security Enhancements & More
Over the past ten years, K8s has evolved from a simple container orchestration tool into the de facto standard for managing cloud-native applications. As we explore Kubernetes 1.31, it’s essential to focus on the most impactful changes that will shape how we manage and secure our clusters in the coming years.
On the whole, Kubernetes 1.31 brings security and modernization improvements (e.g. AppArmor support going GA), AI/ML enhancements (OCI image volumes, better GPU management), greater cloud neutrality, and more simplicity.
Let’s dive into the key changes, what they look like, and why they matter.
Top 3 Highlights of Kubernetes 1.31
1. Enhanced Security with AppArmor (GA)
Kubernetes 1.31 continues to harden cluster security. One of the headline changes is the General Availability (GA) of AppArmor support.
AppArmor is a Linux security module that restricts what programs can do by applying per-program profiles, limiting the system resources each program can access. With AppArmor support now GA, you can protect your containers by setting the appArmorProfile.type field in the container’s securityContext. This replaces the previous annotation-based control with a more streamlined, first-class API.
Why It Matters: In today’s security landscape, protecting containerized workloads is more crucial than ever — particularly for enterprises handling sensitive information or in highly regulated industries where compliance is key. With AppArmor GA, Kubernetes now offers a stable and supported method for implementing robust security policies across your cluster.
Here’s an example of the configuration:
apiVersion: v1
kind: Pod
metadata:
  name: secure-pod
spec:
  containers:
  - name: my-container
    image: my-image
    securityContext:
      appArmorProfile:
        type: RuntimeDefault
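If you have loaded a custom AppArmor profile on your nodes, you can reference it with the Localhost type instead. A minimal sketch, assuming a profile named k8s-apparmor-example-deny-write has already been loaded on the node:
securityContext:
  appArmorProfile:
    type: Localhost
    localhostProfile: k8s-apparmor-example-deny-write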
2. Improved Networking with nftables Backend for kube-proxy (Beta)
In Kubernetes 1.31, the addition of the nftables backend for kube-proxy represents a notable advancement in networking capabilities. This new backend offers better performance and scalability than the traditional iptables framework. nftables has a more streamlined and efficient rule management system, which is particularly beneficial for complex network configurations and high-traffic scenarios.
Why It Matters: For organizations running large-scale clusters with high network traffic, the shift to nftables means improved performance and reduced latency. This is particularly important for companies that rely on Kubernetes for mission-critical applications where network efficiency can directly impact the user experience.
By integrating nftables, Kubernetes clusters are better prepared for future developments in Linux networking. This upgrade simplifies network rule management and provides a robust infrastructure for service discovery and load balancing.
To set up kube-proxy with nftables, set the mode in its configuration file:
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "nftables"
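If you launch kube-proxy with command-line flags rather than a configuration file, the equivalent is:
kube-proxy --proxy-mode=nftables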
3. Support for Multiple Service CIDRs (Beta)
One of the challenges in large Kubernetes deployments is managing IP address exhaustion. Kubernetes 1.31 introduces support for multiple Service CIDRs, which allows clusters to handle services across different IP address ranges. This feature provides greater flexibility and scalability, ensuring that even the largest clusters can manage network resources efficiently.
While this feature is currently in beta and disabled by default, it represents a significant step forward in Kubernetes’ ability to support large, complex environments.
Why It Matters: For enterprises managing large Kubernetes clusters, running out of IP addresses can be a serious issue. The ability to configure multiple Service CIDRs helps prevent this problem, ensuring that your cluster can continue to grow without hitting resource limits.
Here’s how you can add an additional Service CIDR to a cluster. With the beta MultiCIDRServiceAllocator feature gate enabled, you create a ServiceCIDR object (the name below is illustrative):
apiVersion: networking.k8s.io/v1beta1
kind: ServiceCIDR
metadata:
  name: extra-service-cidr
spec:
  cidrs:
  - 10.100.0.0/16
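You can then inspect the configured ranges with:
kubectl get servicecidrs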
Other Important Updates: ML/AI, Security & Performance
Kubernetes release lead Angelos Kolaitis analyzed Kubernetes 1.31 in a recent podcast interview.
“Essentially, it’s taking the complexity and the implementation-specific details out of the code of Kubernetes [… to] reconcile the state of my deployment with what I want it to be, the desired state, and pretty much leave all of the implementation, all of the extra source outside of it.”
Let’s discuss some of the specific changes contributing to this theme of simplicity, as well as enhanced support for modern ML/AI workloads.
Updates for Machine Learning & Artificial Intelligence
New DRA APIs for Better Hardware Management (Alpha)
Kubernetes 1.31 introduces new DRA (Dynamic Resource Allocation) APIs that enhance hardware management by providing more granular control over device resources. These APIs improve resource allocation efficiency and enable better handling of hardware devices, such as GPUs (essential for AI/ML development) and network cards.
Here’s a sketch of a ResourceClaim using the reworked “structured parameters” API that ships in 1.31; the nvidia-gpu device class name is illustrative and depends on which DRA driver you install:
apiVersion: resource.k8s.io/v1alpha3
kind: ResourceClaim
metadata:
  name: ai-workload
spec:
  devices:
    requests:
    - name: gpu
      deviceClassName: nvidia-gpu
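A Pod then consumes the claim by listing it under spec.resourceClaims and referencing it from a container, roughly like this under the 1.31 alpha API:
apiVersion: v1
kind: Pod
metadata:
  name: trainer
spec:
  containers:
  - name: trainer
    image: my-image
    resources:
      claims:
      - name: gpu
  resourceClaims:
  - name: gpu
    resourceClaimName: ai-workload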
Support for Image Volumes (Alpha)
Kubernetes 1.31 introduces alpha support for using OCI (Open Container Initiative) images as native volumes in pods. This feature allows users to mount container images directly as volumes, which is particularly beneficial for AI/ML workloads that need to handle large datasets, for example when developing Large Language Models (LLMs).
To use this feature, you need to enable the ImageVolume feature gate. This functionality provides more flexible storage solutions by allowing applications to work with image data directly within the pod environment.
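Here’s a minimal sketch of a pod mounting an OCI image as a read-only volume (the image reference is illustrative):
apiVersion: v1
kind: Pod
metadata:
  name: image-volume-pod
spec:
  containers:
  - name: app
    image: my-app:latest
    volumeMounts:
    - name: dataset
      mountPath: /data
      readOnly: true
  volumes:
  - name: dataset
    image:
      reference: quay.io/example/dataset:latest
      pullPolicy: IfNotPresent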
Security Updates
Bound Service Account Token Improvements (Beta)
In Kubernetes 1.31, the Bound Service Account Token feature has been promoted to beta. This includes enhancements like automatic token expiration and audience binding, which improve security by limiting token validity to specific purposes and timeframes.
Here’s an example of requesting a token with audience binding and automatic expiration. Note that a TokenRequest is submitted against a ServiceAccount’s token subresource rather than applied as a standalone object:
apiVersion: authentication.k8s.io/v1
kind: TokenRequest
metadata:
  name: my-token-request
  namespace: default
spec:
  audiences:
  - "my-audience"
  expirationSeconds: 3600 # Token expires in 1 hour
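In practice, the simplest way to issue such a token is via kubectl, which calls the same API (the service account name is illustrative):
kubectl create token my-service-account --audience=my-audience --duration=1h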
Finer-Grained Authorization Based on Selectors (Alpha)
New in alpha, authorizers such as webhooks can now take label and field selectors into account, allowing access to list and watch requests to be scoped more precisely than whole-resource permissions allow.
Updates for Performance, Networking & Storage Management
Exposing Device Health Information Through Pod Status (Alpha)
Also in alpha, the health of devices allocated to a pod (for example, GPUs handed out by a device plugin) can now be surfaced through the pod’s status, making it easier to detect and react to failing hardware.
Improved Ingress Connectivity
Kubernetes 1.31 also improves ingress connectivity reliability for kube-proxy, letting cloud load balancers drain connections from terminating nodes gracefully and reducing dropped traffic during rollouts and scale-downs.
Traffic Distribution for Services (Beta)
In Kubernetes 1.31, the Traffic Distribution for Services feature graduates to beta. This feature gives administrators more control over how traffic is routed to services: by setting the trafficDistribution field in the Service specification, you can tell Kubernetes to prefer topologically closer endpoints, reducing cross-zone traffic, latency, and cost.
Here’s an example configuration:
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: my-app
  ports:
  - protocol: TCP
    port: 80
    targetPort: 1234
  trafficDistribution: PreferClose
PreferClose is the only value supported in 1.31; it tells kube-proxy to prefer endpoints in the same topology zone as the client when any are available.
Persistent Volume Last Phase Transition Time (GA)
Kubernetes 1.31 introduces a new feature that tracks the last phase transition time of Persistent Volumes (PVs). This enhancement allows administrators to monitor and troubleshoot storage issues more effectively by providing visibility into when a PV last changed its phase (e.g., from Available to Bound). With this information, it becomes easier to identify and address potential delays or issues in PV lifecycle management, improving overall storage reliability and performance in the cluster.
Here’s an example:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-pv
spec:
  capacity:
    storage: 5Gi
  accessModes:
  - ReadWriteOnce
  hostPath:
    path: /mnt/data
status:
  phase: Bound
  lastPhaseTransitionTime: "2024-08-20T14:30:00Z"
Explanation:
- PersistentVolume: Defines a PV named example-pv with a capacity of 5Gi and a hostPath storage backend.
- status.lastPhaseTransitionTime: Indicates the last time the PV changed phase (here, when it became Bound). The field is set by the control plane and provides insight into the timing of phase changes, aiding in troubleshooting and monitoring.
Changes to Reclaim Policy for PersistentVolumes (Beta)
Kubernetes 1.31 promotes the “always honor PersistentVolume reclaim policy” improvement to beta. Previously, a volume’s reclaim policy (such as Delete) could be skipped if the PV object was removed before its bound PVC; finalizers now ensure the policy is reliably honored, so the underlying storage is cleaned up (or retained) exactly as specified.
Here’s an example:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteOnce
  hostPath:
    path: /mnt/data
  persistentVolumeReclaimPolicy: Delete # now honored even if the PV object is deleted first
Kubernetes VolumeAttributesClass ModifyVolume (Beta)
Kubernetes 1.31 promotes the VolumeAttributesClass API to beta, allowing dynamic modification of volume attributes. This enables users to update volume properties (such as provisioned IOPS) on the fly without needing to recreate or migrate volumes, facilitating easier volume management and more flexible storage configurations.
Here’s an example. Note that driverName is required and the parameter keys are CSI-driver-specific; the AWS EBS driver name below is for illustration:
apiVersion: storage.k8s.io/v1beta1
kind: VolumeAttributesClass
metadata:
  name: fast-io
driverName: ebs.csi.aws.com
parameters:
  provisionedIO: "1000"
  volumeType: "io1"
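To modify a volume in place, point an existing PersistentVolumeClaim at the class via spec.volumeAttributesClassName:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pvc
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 100Gi
  volumeAttributesClassName: fast-io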
Takeaways from version 1.31 of Kubernetes
Kubernetes 1.31 is a testament to the continuous innovation and dedication of the Kubernetes community. With more simplicity, better support for ML/AI workloads, enhanced security, improved ingress connectivity, scalability optimizations, new APIs, and better traffic distribution, this release is set to make managing containerized applications even more efficient and secure.
As we celebrate a decade of Kubernetes, the future looks bright with more exciting developments on the horizon!
nOps is a complete Kubernetes solution: Visibility, Management & Optimization
As teams increasingly adopt Kubernetes, they face challenges in configuring, monitoring and optimizing clusters within complex containerized environments.
Most teams manage these complexities with a combination of manual monitoring, third-party tools, and basic metrics provided by native Kubernetes dashboards — requiring them to switch between different tools and analyze data from multiple sources.
With nOps, comprehensive Kubernetes monitoring and optimization capabilities are all unified into one platform including:
- Critical metrics for pricing optimization, utilization rates, waste optimization down to the pod, node or container level
- Total visibility into hidden fees like extended support, control plane charges, IPv4 addresses, data transfer, etc.
- Actionable insights on how to tune your cluster so you can take action on day 1
This isn’t just another monitoring tool; it’s a powerful all-in-one suite designed to transform how you interact with your Kubernetes environment to optimize cluster performance and costs. Let’s explore the key features.
- Container Cost Allocation: nOps processes massive amounts of your data to automatically unify and allocate your Kubernetes costs in the context of all your other cloud spending.
- Container Insights & Rightsizing: View your cost breakdown, number of clusters, and the utilization of your containers to quickly assess the scale of your clusters, where costs are coming from, and where the waste is.
- Autoscaling Optimization: nOps continually reconfigures your preferred autoscaler (Cluster Autoscaler or Karpenter) to keep your workloads optimized at all times for minimal engineering effort.
- Spot Savings: Automatically run your workloads on the optimal blend of On-Demand, Savings Plans, and Spot instances, with automated instance selection & real-time instance reconsideration.
nOps was recently ranked #1 with five stars in G2’s cloud cost management category, and we optimize $1.5+ billion in cloud spend for our customers.
Join our customers using nOps to understand your cloud costs and leverage automation with complete confidence by booking a demo with one of our AWS experts.