Amazon EKS Upgrade Journey From 1.31 to 1.32 - hi to “Penelope”
We are now welcoming the “Penelope” release. Process and considerations while upgrading the EKS control plane to version 1.32.
Overview
Amazon EKS (Elastic Kubernetes Service) continues to evolve, bringing users the cutting-edge features of Kubernetes while maintaining the robustness and flexibility required for running production workloads. With each upgrade, users gain access to improvements, optimisations, and new features, and the release of EKS 1.32 is no exception. As Amazon EKS keeps pace with upstream Kubernetes versions, the upgrade process involves not just new functionality but also potential breaking changes that must be handled with care.
As per the official notes: “If Kubernetes is Ancient Greek for ‘pilot’, this release starts from that origin and reflects on the last 10 years of Kubernetes and accomplishments: each release cycle is a journey, and just like Penelope, in ‘The Odyssey’, weaved for 10 years, each night removing parts of what she had done during the day, so does each release add new features and removes others.”
If you are looking at:
- upgrading EKS from 1.30 to 1.31 check out this story
- upgrading EKS from 1.29 to 1.30 check out this story
- upgrading EKS from 1.28 to 1.29 check out this story
- upgrading EKS from 1.27 to 1.28 check out this story
- upgrading EKS from 1.26 to 1.27 check out this story
- upgrading EKS from 1.25 to 1.26 check out this story
The official release of EKS which supports Kubernetes 1.32 can be found at https://aws.amazon.com/about-aws/whats-new/2025/01/amazon-eks-eks-distro-kubernetes-version-1-32/
EKS 1.32 improvements
Anonymous authentication changes
Starting with Amazon EKS 1.32, anonymous authentication is restricted to the following API server health check endpoints:
/healthz
/livez
/readyz
Requests to any other endpoint made by the system:unauthenticated user will receive a 401 Unauthorized HTTP response. This security enhancement helps prevent unintended cluster access that could occur due to misconfigured RBAC policies.
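You can verify the new behaviour yourself. A minimal sketch, assuming your cluster’s public API endpoint is reachable and reusing the example-eks-cluster name from later in this article:
# Fetch the cluster's API server endpoint.
API=$(aws eks describe-cluster --name example-eks-cluster --query cluster.endpoint --output text)
# Health check endpoints still accept anonymous requests.
curl -sk -o /dev/null -w "%{http_code}\n" "$API/healthz"   # prints 200
# Any other endpoint now rejects anonymous requests.
curl -sk -o /dev/null -w "%{http_code}\n" "$API/api"       # prints 401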
Graduations to Stable
Here is the list of the features that became stable (GA) in 1.32 release.
- The Memory Manager feature has graduated to GA. It provides more efficient and predictable memory allocation for containerized applications, which is particularly beneficial for workloads with specific memory requirements.
- PersistentVolumeClaims (PVCs) created by StatefulSets can now be cleaned up automatically. When you set the StatefulSet’s persistentVolumeClaimRetentionPolicy, PVCs that are no longer needed are deleted on StatefulSet deletion or scale-down, while data still persists through StatefulSet updates and node maintenance operations. This simplifies storage management and helps prevent orphaned PVCs in your cluster. It is an extremely useful feature, especially when it comes to cost savings (a minimal example follows this list).
- Custom Resource Field Selector functionality has been introduced, allowing developers to add field selectors to custom resources. This feature provides the same filtering capabilities available for built-in Kubernetes objects to custom resources, enabling more precise and efficient resource filtering and promoting better API design practices.
- Structured Authorization Configuration
- Bound service account token improvements
- Retry Generate Name
- Make Kubernetes aware of the LoadBalancer behaviour
- Field status.hostIPs added for Pods
- Custom profile in kubectl debug
- Support to size memory backed volumes
- Improved multi-numa alignment in Topology Manager
- Add job creation timestamp to job annotations
- Add Pod Index Label for StatefulSets and Indexed Jobs
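To illustrate the StatefulSet PVC auto-cleanup mentioned above, here is a minimal, hypothetical manifest; the names and sizes are made up for the example. The behaviour is driven entirely by the persistentVolumeClaimRetentionPolicy field:
# Hypothetical StatefulSet showing the now-GA PVC retention policy:
# PVCs created from volumeClaimTemplates are removed when the
# StatefulSet is deleted or scaled down, but survive rolling updates.
cat <<'EOF' | kubectl apply -f -
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  serviceName: web
  replicas: 2
  persistentVolumeClaimRetentionPolicy:
    whenDeleted: Delete   # delete PVCs when the StatefulSet is deleted
    whenScaled: Delete    # delete PVCs when scaling replicas down
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: nginx
          image: nginx:1.27
          volumeMounts:
            - name: data
              mountPath: /usr/share/nginx/html
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 1Gi
EOF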
Deprecations and removals
EKS nodes and Amazon Linux 2 (important)
Kubernetes version 1.32 is the last version for which Amazon EKS will release Amazon Linux 2 (AL2) AMIs. From v1.33 onwards, Amazon EKS will continue to release Amazon Linux 2023 (AL2023) and Bottlerocket-based AMIs.
Withdrawal of the old DRA implementation
Enhancement #3063 introduced Dynamic Resource Allocation (DRA) in Kubernetes 1.26. In Kubernetes v1.32, this approach to DRA is significantly changed; see KEP #4381 for the up-to-date base functionality.
API removals
There is one API removal in Kubernetes v1.32:
- The flowcontrol.apiserver.k8s.io/v1beta3 API version of FlowSchema and PriorityLevelConfiguration has been removed. To prepare for this, edit your existing manifests and rewrite client software to use the flowcontrol.apiserver.k8s.io/v1 API version, available since v1.29. All existing persisted objects are accessible via the new API; the commands below show how to confirm this on your cluster. Notable changes in flowcontrol.apiserver.k8s.io/v1 include that the PriorityLevelConfiguration spec.limited.nominalConcurrencyShares field only defaults to 30 when unspecified, and an explicit value of 0 is not changed to 30.
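A quick way to check the state of this API group on your upgraded cluster (standard kubectl, nothing cluster-specific assumed):
# Confirm which flowcontrol API versions the cluster serves.
kubectl api-versions | grep flowcontrol
# Existing persisted objects remain accessible via the v1 API.
kubectl get flowschemas.v1.flowcontrol.apiserver.k8s.io
kubectl get prioritylevelconfigurations.v1.flowcontrol.apiserver.k8s.io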
Upgrade your EKS with terraform
To upgrade Amazon Elastic Kubernetes Service (EKS) from version 1.31 to 1.32 using Terraform, you need to follow a structured approach. The upgrade process involves upgrading the EKS control plane, worker nodes, and associated components like AWS load balancers, CoreDNS, and kube-proxy.
Karpenter nodes - IMPORTANT
If you are using Karpenter for node provisioning, make sure to upgrade your Karpenter controller to at least version 1.1.2, as previous versions will throw the errors reported in this GitHub issue.
In my cluster I am running Karpenter 1.2.0 and everything runs smoothly with EKS 1.32.
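If you installed Karpenter with Helm from the public OCI registry, the upgrade is a single command; this is a sketch, so adjust the namespace and values to match your installation:
# Upgrade the Karpenter controller in place, keeping existing values.
helm upgrade karpenter oci://public.ecr.aws/karpenter/karpenter \
  --namespace kube-system \
  --version 1.2.0 \
  --reuse-values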
Upgrade Prerequisites
- Terraform Version: Ensure you are running a recent Terraform release that supports the required AWS provider version.
- AWS CLI: Update the AWS CLI if needed, as it will be used during the process.
- Backup: Always backup critical configurations, Kubernetes resources, and ensure that worker nodes can be recreated if needed.
Review the EKS Version Support and Release Notes
- Check Compatibility: Ensure that all components in your Kubernetes cluster are compatible with the new version (1.32). Review the EKS release notes to see the changes and breaking issues between versions 1.31 and 1.32.
- Upgrade Plan: Ensure all third-party services and Kubernetes add-ons are updated to work with the new version.
Just like with all my upgrades, I use Terraform as it is fast, efficient, and simplifies my life. I used the following providers for the upgrade:
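The exact provider pins live in my repository; as an illustrative skeleton (the version constraints below are assumptions, not the exact pins from my pipeline), it looks like this:
terraform {
  required_version = ">= 1.5.0"

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = ">= 5.80" # any recent v5 release supports EKS 1.32
    }
  }
}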
This time the upgrade of the control plane took less than 7 minutes, faster than any of the previous upgrades. I experienced zero issues after performing the upgrade; I don’t think I even noticed any unavailability of the API server itself, which did happen during previous upgrades. AWS is doing a great job at reducing the time it takes to upgrade the EKS control plane. This upgrade was a minute faster than the previous one, which shows good improvement.
I immediately upgraded the cluster-critical worker nodes (three of them), which took around 11 minutes to join the upgraded EKS cluster. This step depends on how many worker nodes you have and how many pods need to be drained from the old nodes, so timings may differ in your environment if you have more nodes to rotate.
In general, the full upgrade process (control plane + worker nodes) took around 18 minutes. Good time, I would say.
I use Terraform to deploy and upgrade my EKS clusters. Here is an example of the EKS cluster resource.
resource "aws_eks_cluster" "cluster" {
name = "example-eks-cluster"
role_arn = aws_iam_role.cluster.arn
version = "1.32" # Update this from 1.31 to 1.32
...
}
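The control plane upgrade is triggered purely by bumping the version attribute, so the usual plan/apply flow is all that is needed:
terraform init -upgrade            # pick up current provider releases
terraform plan -out=eks-1-32.plan  # review: only the version should change
terraform apply eks-1-32.plan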
For the worker nodes, I used the official AMI ID ami-0c9702554bfb1dc0a. Note that this AMI is specific to the London region, eu-west-2. I didn’t notice any issues after rotating all nodes. Nodes are running the following version: v1.32.0-eks-aeac579.
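Rather than hard-coding an AMI ID, you can resolve the recommended image for your region from the public SSM parameters; eu-west-2 is shown here to match my setup:
# Recommended AL2 EKS-optimised AMI for Kubernetes 1.32.
aws ssm get-parameter \
  --name /aws/service/eks/optimized-ami/1.32/amazon-linux-2/recommended/image_id \
  --region eu-west-2 --query Parameter.Value --output text
# AL2023 equivalent - the path forward once AL2 AMIs stop at 1.32.
aws ssm get-parameter \
  --name /aws/service/eks/optimized-ami/1.32/amazon-linux-2023/x86_64/standard/recommended/image_id \
  --region eu-west-2 --query Parameter.Value --output text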
The templates I use for creating EKS clusters using Terraform can be found in my Github repository reachable at https://github.com/marcincuber/eks
Upgrading Managed EKS Add-ons
In this case, the change is trivial and works fine: simply update the version of the add-on. In my case, from this release I utilise kube-proxy, CoreDNS and the EBS CSI driver.
Terraform resources for add-ons
resource "aws_eks_addon" "kube_proxy" {
cluster_name = aws_eks_cluster.cluster[0].name
addon_name = "kube-proxy"
addon_version = "v1.32.0-eksbuild.2"
resolve_conflicts = "OVERWRITE"
}
resource "aws_eks_addon" "core_dns" {
cluster_name = aws_eks_cluster.cluster[0].name
addon_name = "coredns"
addon_version = "v1.11.4-eksbuild.2"
resolve_conflicts = "OVERWRITE"
}
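For the EBS CSI driver add-on mentioned above, here is a sketch of the same pattern. It uses the aws_eks_addon_version data source to resolve the latest compatible version instead of hard-coding one; the IRSA role reference is a hypothetical name:
data "aws_eks_addon_version" "ebs_csi" {
  addon_name         = "aws-ebs-csi-driver"
  kubernetes_version = aws_eks_cluster.cluster[0].version
  most_recent        = true
}

resource "aws_eks_addon" "ebs_csi" {
  cluster_name  = aws_eks_cluster.cluster[0].name
  addon_name    = "aws-ebs-csi-driver"
  addon_version = data.aws_eks_addon_version.ebs_csi.version

  # service_account_role_arn = aws_iam_role.ebs_csi.arn # if using IRSA
}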
After upgrading EKS control-plane
Remember to upgrade core deployments and daemon sets that are recommended for EKS 1.32.
- CoreDNS — v1.11.4-eksbuild.2
- Kube-proxy — v1.32.0-eksbuild.2
- VPC CNI — v1.19.2-eksbuild.1
The above is just a recommendation from AWS. You should look at upgrading all your components to match the Kubernetes 1.32 version (an example Helm upgrade follows the list). They could include:
- Load Balancer Controller
- EBS CSI Driver
- calico-node
- Cluster Autoscaler or Karpenter
- External Secrets Operator
- Kube State Metrics
- Metrics Server
- csi-secrets-store
- calico-typha and calico-typha-horizontal-autoscaler
- Reloader
- Keda (event driven autoscaler)
- Nvidia device plugin (used while utilising GPUs)
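Most of these components ship as Helm charts, so the upgrade pattern is the same for each. As an example, here is Metrics Server using its upstream chart repository; swap in the chart and values for each component you actually run:
helm repo add metrics-server https://kubernetes-sigs.github.io/metrics-server/
helm repo update
# --install makes the command idempotent across clusters.
helm upgrade --install metrics-server metrics-server/metrics-server \
  --namespace kube-system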
Validate the Upgrade Yourself
- Check Control Plane Version: Use the AWS CLI or AWS Management Console to verify the control plane version.
aws eks describe-cluster --name example-eks-cluster --query cluster.version
- Check Node Group Versions: Ensure the worker nodes are running the correct version:
kubectl get nodes
- Check Add-on Versions: Verify that CoreDNS and kube-proxy have been upgraded:
kubectl get pods -n kube-system -o wide | grep coredns
kubectl get daemonset -n kube-system kube-proxy -o wide
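You can also read the managed add-on versions straight from the EKS API, which avoids parsing pod image tags:
aws eks describe-addon --cluster-name example-eks-cluster \
  --addon-name coredns --query addon.addonVersion --output text
aws eks describe-addon --cluster-name example-eks-cluster \
  --addon-name kube-proxy --query addon.addonVersion --output text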
Post-Upgrade Checks
- Ensure all your workloads are running correctly after the upgrade.
- Test applications and services deployed in the cluster.
- Check logs for any errors and resolve issues as needed; a quick sweep like the sketch below helps.
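Both commands are standard kubectl and assume nothing about your workloads:
# Surface anything that is not Running or Completed.
kubectl get pods -A --field-selector=status.phase!=Running,status.phase!=Succeeded
# Recent cluster events, newest last.
kubectl get events -A --sort-by=.lastTimestamp | tail -20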
Final Result
> $ kubectl version
Client Version: v1.32.1
Kustomize Version: v5.5.0
Server Version: v1.32.0-eks-5ca49cb
I like to stay up to date with my CLIs, so make sure you upgrade your kubectl to match your Kubernetes cluster version.
Summary and Conclusions
An even quicker upgrade of the EKS cluster than ever before: the control plane upgrade completed in under 8 minutes. I use Terraform to run my cluster and node upgrades, so the GitHub Actions pipeline makes my life super easy.
Yet again, no significant issues; I hope your upgrade goes just as smoothly. All workloads worked just fine, and I didn’t have to modify anything.
If you are interested in the entire terraform setup for EKS, you can find it on my GitHub -> https://github.com/marcincuber/eks
I hope this article nicely aggregates all the important information around upgrading EKS to version 1.32 and helps people speed up their tasks.
Long story short, you either hate or love Kubernetes, but you still use it ;).
Please note that my notes rely on official AWS and Kubernetes sources.
Enjoy Kubernetes!!!
Sponsor Me
As with every other story I have written on Medium, I performed the tasks documented here; this article reflects my own research and the issues I encountered.
Thanks for reading everybody. Marcin Cuber