Kubernetes 1.33 'Octarine' Delivers Major Upgrades for Cloud-Native and AI Workloads

The first Kubernetes release of 2025 introduces 64 enhancements — including sidecar containers, user namespaces, and advanced AI workload support — marking a new era of cloud-native "magic."

Sean Michael Kerner, Contributor

April 24, 2025


The first major 2025 update to the open source Kubernetes container orchestration platform is now available, bringing with it some "magic" to help organizations with their deployments.

Kubernetes 1.33 became generally available on April 23 and follows the Kubernetes 1.32 release that debuted at the end of 2024. Code-named "Octarine," Kubernetes 1.33 significantly increases the number of enhancements over its predecessor, and several long-awaited features have graduated to stable status. With 64 enhancements — up from 44 in the previous release — Kubernetes 1.33 delivers improved security, container management, and expanded support for AI workloads.

The name "Octarine" is a reference to the magical eighth color in author Terry Pratchett's fictional Discworld novels; the release's theme reflects the project's expanding capabilities and innovation.

"Octarine is the color of magic, so it's like the mythical eighth color that's only visible to, you know, wizards, witches, and cats," Nina Polshakova, Kubernetes 1.33's release lead, told ITPro Today. "I think it highlights the kind of open source magic Kubernetes enables across the ecosystem."

Key Kubernetes Octarine Features

Among the key new features in the Kubernetes 1.33 release are the following:

  • Job success policy (KEP-3998): Specifies which pod indexes must succeed or how many pods must succeed using the new .spec.successPolicy field.

  • nftables backend for kube-proxy (KEP-3866): Significantly improves performance and scalability for Services implementation within Kubernetes clusters.

  • Topology aware routing with traffic distribution (KEP-4444 and KEP-2433): Optimizes service traffic in multi-zone clusters by prioritizing routing to endpoints within the same zone (a short example follows this list).

  • User namespaces within Linux Pods (KEP-127): An important milestone for mitigating vulnerabilities; the feature is now on by default in beta, and individual pods opt in via the pod.spec.hostUsers field.
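
As a rough illustration of the traffic distribution setting above (the Service name, selector, and ports here are hypothetical; the trafficDistribution field follows the upstream Kubernetes Service API), a Service can ask kube-proxy to prefer endpoints in the client's own zone:

    apiVersion: v1
    kind: Service
    metadata:
      name: web                        # hypothetical Service name
    spec:
      selector:
        app: web                       # hypothetical selector
      ports:
        - port: 80
          targetPort: 8080
      # Prefer routing traffic to topologically "close" (e.g., same-zone)
      # endpoints when healthy ones are available.
      trafficDistribution: PreferClose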

Sidecar Containers Finally Graduate to Stable

One of the most anticipated features making its way to stable in 1.33 is native support for sidecar containers, a pattern widely used in service mesh implementations but previously lacking formal Kubernetes support.


"Sidecar containers are now graduating to stable in 1.33, and that's a very common pattern in Kubernetes, where you have your sidecar container injected next to your application container," Polshakova explained. "It can abstract things like observability, connectivity, and security functionality."

The sidecar pattern has been used for years in projects like Istio, but native support in Kubernetes has been a long time coming. The new stable implementation ensures proper container lifecycle management.

"Now, with the new native sidecar support in 1.33 going to stable, it reduces a lot of friction of sidecar adoption in Kubernetes in general," Polshakova noted. "Kubernetes natively supports making sure your sidecar starts before and terminates after the main container, so that ensures the proper initialization and tear-down for you."

Security Enhancements: User Namespaces Now On by Default

Security improvements feature prominently in Kubernetes 1.33, with user namespaces now enabled by default, though still technically labeled as a beta feature.

This feature has been in development since 2016 and required changes across multiple projects beyond Kubernetes.

"User namespaces allow developers to isolate their user IDs inside their container from those on the host, so that reduces the attack surface if the container is compromised," Polshakova said. "In multi-tenant environments, this is a really big win because in a shared cluster where you have different teams or organizations deploying workloads, you can have user namespaces enforce the strong isolation boundaries between multiple tenants."

Nftables Support Graduates to Stable


Another significant feature graduating to stable is the nftables-based kube-proxy backend, which offers performance improvements over the traditional iptables implementation. For decades, iptables was the standard Linux packet-filtering and firewall technology, but it has since been superseded by nftables.

"Nftables was introduced in 2014 in upstream Linux, and since then, most upstream development kind of moved there," Polshakova said. "They offer some improvement in terms of performance and scalability over iptables. You can do incremental changes to the rule set in nftables, where you can't with iptables."

Polshakova added that this change better aligns the Kubernetes ecosystem with the direction of upstream and modern Linux networking principles.
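
Switching to the new backend is a kube-proxy configuration change rather than a workload change. A minimal sketch of the relevant setting, assuming the standard KubeProxyConfiguration file (how that file is managed varies by distribution):

    apiVersion: kubeproxy.config.k8s.io/v1alpha1
    kind: KubeProxyConfiguration
    # Use the nftables backend instead of the long-standing iptables mode.
    # The same choice can typically be made with the --proxy-mode=nftables flag.
    mode: "nftables"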

Dynamic Resource Allocation Features for AI Workloads

Another notable advancement in Kubernetes 1.33 is the enhancement of dynamic resource allocation (DRA) technology.

DRA is a Kubernetes feature that handles resource allocation beyond traditional CPU and memory, helping allocate specialized hardware such as GPUs, TPUs, and FPGAs.

Polshakova noted that the DRA features reflect the community's excitement about new workload types and indicate how Kubernetes is expanding to support more complex computational needs, especially in AI. The features matter because they enable more flexible hardware resource management, allowing organizations to run increasingly sophisticated AI and machine learning workloads more efficiently within Kubernetes clusters.

"This is the first release where we had six new DRA features land," she said. "A lot of them are alpha and beta, so they're not stable, but they do indicate that we are now handling more new workload types for AI."

Another AI-related enhancement is the new job success policy feature, which allows greater flexibility in determining when a job has successfully completed.

"Current behavior means that you need all indexes in the job to succeed to mark that job as completed," Polshakova explained. "Now the difference is users can specify which pod indexes have to succeed, and that's useful for PyTorch workloads specifically."

About the Author


Sean Michael Kerner is an IT consultant, technology enthusiast and tinkerer. He consults to industry and media organizations on technology issues.

https://www.linkedin.com/in/seanmkerner/

