Amazon EKS Upgrade Journey From 1.32 to 1.33: “Octarine”
We are now welcoming the “Octarine” release. Process and considerations while upgrading the EKS control plane to version 1.33.
Overview
Another quarter, another EKS upgrade. This time, we’re saying goodbye to 1.32 and waving hello to Kubernetes 1.33, a.k.a. Octarine, a mystical, magical name that comes straight from Terry Pratchett’s Discworld universe. Fitting for a release that adds both powerful and peculiar features.
“Octarine is the mythical eighth colour, visible only to those attuned to the arcane: wizards, witches, and, of course, cats. And occasionally, someone who’s stared at iptables rules for too long.”
Here’s how our upgrade journey went, what broke, what didn’t, and what made us smile like proud cluster parents. 😄
If you are looking at
- upgrading EKS from 1.31 to 1.32 check out this story
- upgrading EKS from 1.30 to 1.31 check out this story
- upgrading EKS from 1.29 to 1.30 check out this story
- upgrading EKS from 1.28 to 1.29 check out this story
- upgrading EKS from 1.27 to 1.28 check out this story
- upgrading EKS from 1.26 to 1.27 check out this story
- upgrading EKS from 1.25 to 1.26 check out this story
The official announcement of EKS support for Kubernetes 1.33 can be found on the AWS What’s New page at https://aws.amazon.com/new/
EKS-Specific Considerations
Deprecation of Amazon Linux 2 AMIs
Starting with Kubernetes 1.33, Amazon EKS no longer provides pre-built optimized Amazon Linux 2 AMIs. We transitioned to Amazon Linux 2023 AMIs to ensure continued support and compatibility.
🔒 Anonymous Authentication Changes
In EKS 1.32 and above, anonymous authentication is restricted to specific health check endpoints (/healthz, /livez, /readyz). We reviewed our RBAC policies to accommodate this change and prevent unauthorised access (see the AWS documentation for details).
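To see the new behaviour in practice, here is a quick check. This is a sketch that assumes the example-eks-cluster name used later in this post and a publicly reachable API endpoint:
# Fetch the API server endpoint of the cluster
ENDPOINT=$(aws eks describe-cluster --name example-eks-cluster \
  --query cluster.endpoint --output text)

# Anonymous requests to the allowed health check endpoints still succeed
curl -sk "$ENDPOINT/healthz"
curl -sk "$ENDPOINT/readyz"

# Any other anonymous request is rejected with 401 Unauthorized
curl -sk "$ENDPOINT/api/v1/namespaces"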
Kubernetes v1.33: Octarine, A Deeper Dive
Release Overview
Kubernetes v1.33 comprises 64 enhancements, categorised as follows:
- 18 enhancements have graduated to Stable.
- 20 enhancements are entering Beta. (only those enabled by default are available in EKS 1.33)
- 24 enhancements have entered Alpha. (not enabled in EKS)
- 2 features have been deprecated or withdrawn.
Native Sidecar Containers (GA)
Sidecar containers are now a stable feature in Kubernetes. This enhancement allows auxiliary containers (like log shippers or proxies) to have a defined lifecycle relative to the main application container, ensuring they start before and terminate after the main container.
Kubernetes implements sidecars as a special class of init containers with restartPolicy: Always, ensuring that sidecars start before application containers, remain running throughout the pod's lifecycle, and terminate automatically after the main containers exit.
Additionally, sidecars can utilise probes (startup, readiness, liveness) to signal their operational state, and their Out-Of-Memory (OOM) score adjustments are aligned with primary containers to prevent premature termination under memory pressure.
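As a concrete illustration, here is a minimal sketch of a pod with a log-shipping sidecar (the pod name, images, and mount paths are illustrative):
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: app-with-sidecar
spec:
  initContainers:
    # restartPolicy: Always is what turns this init container into a sidecar:
    # it starts before the app container, keeps running for the whole pod
    # lifecycle, and is terminated automatically after the app exits.
    - name: log-shipper
      image: fluent/fluent-bit:3.2
      restartPolicy: Always
      volumeMounts:
        - name: logs
          mountPath: /var/log/app
  containers:
    - name: app
      image: nginx:1.27
      volumeMounts:
        - name: logs
          mountPath: /var/log/app
  volumes:
    - name: logs
      emptyDir: {}
EOF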
In-Place Pod Resizing (Beta)
Workloads can be defined using APIs like Deployment, StatefulSet, etc. These describe the template for the Pods that should run, including memory and CPU resources, as well as the number of Pod replicas. Workloads can be scaled horizontally by updating the replica count, or vertically by updating the resources required in the Pod's container(s). Before this enhancement, container resources defined in a Pod's spec were immutable, and updating any of these details within a Pod template would trigger Pod replacement.
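With the feature on by default in v1.33, you can resize a running pod through the new resize subresource (requires kubectl v1.32+; the pod and container names below are hypothetical):
# Bump the CPU of a running container without recreating the pod
kubectl patch pod my-app --subresource resize --patch \
  '{"spec":{"containers":[{"name":"app","resources":{"requests":{"cpu":"800m"},"limits":{"cpu":"800m"}}}]}}'

# Verify the resources the container is actually running with
kubectl get pod my-app -o jsonpath='{.status.containerStatuses[0].resources}'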
Bound Service Account Tokens (GA)
The promotion of bound service account token volumes to GA enhances security by ensuring tokens are audience-bound and time-limited, mitigating risks associated with token leakage.
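If a workload needs a token for an external audience, the projected volume API makes the binding explicit. A minimal sketch (the vault audience is purely illustrative):
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: token-demo
spec:
  containers:
    - name: app
      image: nginx:1.27
      volumeMounts:
        - name: api-token
          mountPath: /var/run/secrets/tokens
          readOnly: true
  volumes:
    - name: api-token
      projected:
        sources:
          - serviceAccountToken:
              path: token
              audience: vault          # token is rejected by any other audience
              expirationSeconds: 3600  # kubelet rotates it before expiry
EOF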
Volume Populators (GA)
Volume populators have reached GA status, allowing for the pre-population of PersistentVolumeClaims (PVCs) with data from various sources, streamlining data initialisation processes.
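In practice, you point a PVC's dataSourceRef at a custom resource understood by a populator controller installed in the cluster. A sketch, where the S3Import kind and its API group are entirely hypothetical placeholders:
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: prepopulated-pvc
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 10Gi
  # Requires a volume populator controller watching this kind; the
  # apiGroup/kind/name are placeholders for whatever populator you run.
  dataSourceRef:
    apiGroup: populators.example.com
    kind: S3Import
    name: my-dataset
EOF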
Dynamic Resource Allocation (Beta)
The standardized reporting of network interface data via DRA, introduced in v1.32, has graduated to beta in v1.33. This enables more native Kubernetes network integrations, simplifying the development and management of networking devices. This was covered previously in the v1.32 release announcement blog.
Additional Notable Enhancements
- Streaming List Responses: Improves API server performance by streaming large list responses, reducing memory usage and latency.
- Image Volumes (Beta): Allows mounting container images as volumes, enabling read-only access to image contents without extracting them to the filesystem.
- Fine-Grained SupplementalGroups Control (Beta): Introduces supplementalGroupsPolicy for more precise control over supplemental groups in containers, enhancing security, especially in volume access (see the sketch after this list).
- HorizontalPodAutoscaler Configurable Tolerance (Alpha): Adds the ability to configure the tolerance for scaling decisions, allowing more precise control over HPA behaviour.
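Here is a minimal sketch of supplementalGroupsPolicy in use; being beta, it assumes the feature gate is enabled and the node's container runtime supports it:
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: strict-groups-demo
spec:
  securityContext:
    supplementalGroups: [4000]
    # Strict: only groups declared in the pod spec are attached; group
    # memberships defined in /etc/group inside the image are ignored.
    supplementalGroupsPolicy: Strict
  containers:
    - name: app
      image: busybox:1.37
      command: ["sh", "-c", "id && sleep 3600"]
EOF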
Upgrade your EKS with Terraform
To upgrade Amazon Elastic Kubernetes Service (EKS) from version 1.32 to 1.33 using Terraform, you need to follow a structured approach. The upgrade process involves upgrading the EKS control plane, worker nodes, and associated components like AWS load balancers, CoreDNS, and kube-proxy.
Upgrade Prerequisites
- Terraform Version: Ensure you have the latest version of Terraform installed that supports the AWS provider.
- AWS CLI: Update the AWS CLI if needed, as it will be used during the process.
- Backup: Always backup critical configurations, Kubernetes resources, and ensure that worker nodes can be recreated if needed.
Review the EKS Version Support and Release Notes
- Check Compatibility: Ensure that all components in your Kubernetes cluster are compatible with the new version (1.33). Review the EKS release notes to see the changes and breaking issues between versions 1.32 and 1.33.
- Upgrade Plan: Ensure all third-party services and Kubernetes add-ons are updated to work with the new version.
The upgrade checks required by AWS also need to pass; EKS surfaces these as cluster insights.
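You can review them from the CLI, for example:
# List upgrade insights for the cluster; everything should be PASSING
aws eks list-insights --cluster-name example-eks-cluster

# Inspect a specific finding in detail
aws eks describe-insight --cluster-name example-eks-cluster --id <insight-id>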
Important: if you use Karpenter, you need at least Karpenter 1.5 (https://karpenter.sh/v1.5/) before the upgrade. This is the first release that supports Kubernetes 1.33.
Just like with all my upgrades, I use Terraform as it is fast, efficient, and simplifies my life. I used the following providers for the upgrade (the version pins below are illustrative; use recent releases that you have validated):
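terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = ">= 5.95" # any recent release that knows about EKS 1.33
    }
    kubernetes = {
      source  = "hashicorp/kubernetes"
      version = ">= 2.36"
    }
  }
}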
This time, the upgrade of the control plane took around 8 minutes, which is slightly slower compared to the last upgrade. I experienced zero issues after performing it, and I didn’t notice any unavailability of the API server itself, which did happen in previous upgrades.
I immediately upgraded cluster-critical worker nodes (three of them), which took around 13 minutes to join the upgraded EKS cluster. This step depends on how many worker nodes you have and how many pods need to be drained from the old nodes, so timings may differ in your environment if you have more nodes to rotate.
In general, the full upgrade process (control plane + worker nodes) took around 21 minutes. Good time, I would say.
I use Terraform to deploy and upgrade my EKS clusters. Here is an example of the EKS cluster resource.
resource "aws_eks_cluster" "cluster" {
name = "example-eks-cluster"
role_arn = aws_iam_role.cluster.arn
version = "1.33" # Update this from 1.32 to 1.33
...
}
For the worker nodes, I used the official AMI with ID ami-0aa35ba0e694953cf. Note that AMI IDs are region-specific; this one comes from London (eu-west-2). I didn’t notice any issues after rotating all nodes. Nodes are running the following version: v1.33.0-eks-802817d.
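Rather than hard-coding a per-region AMI ID, you can resolve the current EKS-optimized Amazon Linux 2023 image from the public SSM parameter, for example:
# Latest EKS-optimized AL2023 AMI for Kubernetes 1.33 in the provider's region
data "aws_ssm_parameter" "eks_al2023_ami" {
  name = "/aws/service/eks/optimized-ami/1.33/amazon-linux-2023/x86_64/standard/recommended/image_id"
}

# Reference data.aws_ssm_parameter.eks_al2023_ami.value as the image_id
# in your launch template.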
The templates I use for creating EKS clusters with Terraform can be found in my GitHub repository at https://github.com/marcincuber/eks
Upgrading Managed EKS Add-ons
In this case, the change is trivial and works fine: simply update the version of the add-on. In my case, from this release, I utilise kube-proxy, CoreDNS, ebs-csi-driver, metrics-server, kube-state-metrics, etc. You can see it all in the repo mentioned in the paragraph above.
Terraform resources for add-ons
resource "aws_eks_addon" "kube_proxy" {
cluster_name = aws_eks_cluster.cluster[0].name
addon_name = "kube-proxy"
addon_version = "v1.33.0-eksbuild.2"
resolve_conflicts = "OVERWRITE"
}
resource "aws_eks_addon" "core_dns" {
cluster_name = aws_eks_cluster.cluster[0].name
addon_name = "coredns"
addon_version = "v1.12.1-eksbuild.2"
resolve_conflicts = "OVERWRITE"
}
After upgrading EKS control-plane
Remember to upgrade core Deployments and DaemonSets that are recommended for EKS 1.33.
- CoreDNS — v1.12.1-eksbuild.2
- Kube-proxy — v1.33.0-eksbuild.2
- VPC CNI — v1.19.5-eksbuild.3
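The VPC CNI can be managed as an EKS add-on in the same way as kube-proxy and CoreDNS above, e.g.:
resource "aws_eks_addon" "vpc_cni" {
  cluster_name  = aws_eks_cluster.cluster.name
  addon_name    = "vpc-cni"
  addon_version = "v1.19.5-eksbuild.3"

  resolve_conflicts_on_update = "OVERWRITE"
}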
Those versions are just a recommendation from AWS. You should look at upgrading all your components to match the Kubernetes 1.33 version. They could include:
- Load Balancer Controller
- EBS CSI Driver
- calico-node
- Cluster Autoscaler or Karpenter
- External Secrets Operator
- Kube State Metrics
- Metrics Server
- csi-secrets-store
- calico-typha and calico-typha-horizontal-autoscaler
- Reloader
- Keda (event driven autoscaler)
- Nvidia device plugin (used while utilising GPUs)
Validate the Upgrade yourself
- Check Control Plane Version: Use the AWS CLI or AWS Management Console to verify the control plane version.
aws eks describe-cluster --name example-eks-cluster --query cluster.version
- Check Node Group Versions: Ensure the worker nodes are running the correct version:
kubectl get nodes
- Check Add-on Versions: Verify that CoreDNS and kube-proxy have been upgraded:
kubectl get pods -n kube-system -o wide | grep coredns
kubectl get daemonset -n kube-system kube-proxy -o wide
Post-Upgrade Checks
- Ensure all your workloads are running correctly after the upgrade.
- Test applications and services deployed in the cluster.
- Check logs for any errors and resolve issues as needed.
Final Result
$ kubectl version
Client Version: v1.32.1
Kustomize Version: v5.5.0
Server Version: v1.33.1-eks-7308294
I like to stay up to date with my CLIs, so make sure you upgrade your kubectl to match your Kubernetes cluster version.
Summary and Conclusions
Another quick upgrade of the EKS cluster. In 8 minutes, the task of upgrading the control plane was completed. I use Terraform to run my cluster and node upgrades, so the GitHub Actions pipeline makes my life super easy.
Yet again, no significant issues. I hope you will have an equally easy job to perform. All workloads worked just fine, and I didn’t have to modify anything.
If you are interested in the entire terraform setup for EKS, you can find it on my GitHub -> https://github.com/marcincuber/eks
I hope this article nicely aggregates all the important information around upgrading EKS to version 1.33 and helps people speed up their tasks.
Long story short, you hate and/or love Kubernetes, but you still use it ;).
Please note that my notes rely on official AWS and Kubernetes sources.
Enjoy Kubernetes!!!
Sponsor Me
Like with any other story I have written on Medium, I performed the tasks documented here. This is my own research, and these are the issues I encountered.
Thanks for reading everybody. Marcin Cuber