Amazon EKS Upgrade Journey From 1.28 to 1.29 - say hello to “Mandala”

Marcin Cuber
7 min read · Jan 24, 2024


We now welcome the “Mandala” release. This story covers the process and considerations for upgrading the EKS control plane to version 1.29.

Overview

Welcome to this cosmic journey with Kubernetes v1.29!

As noted by the Kubernetes maintainers, this release is inspired by the beautiful art form that is the Mandala, a symbol of the universe in its perfection. Our tight-knit universe of around 40 Release Team members, backed by hundreds of community contributors, has worked tirelessly to turn challenges into joy for millions worldwide.

Again, I would like to thank the community for their great support and the implementations they deliver. In many companies, specialists are not valued and are simply ignored, which isn’t right. Hence, I would like to acknowledge them publicly.

In the spirit of Mandala’s transformative symbolism, Kubernetes v1.29 celebrates the Kubernetes project’s evolution. Like stars in the Kubernetes universe, each contributor, user, and supporter lights the way. Together, we create a universe of possibilities, one release at a time.

Previous Stories and Upgrades

If you are looking at:

  • upgrading EKS from 1.27 to 1.28, check out this story
  • upgrading EKS from 1.26 to 1.27, check out this story
  • upgrading EKS from 1.25 to 1.26, check out this story
  • upgrading EKS from 1.24 to 1.25, check out this story
  • upgrading EKS from 1.23 to 1.24, check out this story
  • upgrading EKS from 1.22 to 1.23, check out this story

Prerequisites to upgrade

Before upgrading to Kubernetes v1.29 in Amazon EKS, there are some important tasks you need to complete. These can be easily viewed in the EKS console under “Upgrade Insights”.
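These findings come from the cluster insights feature that AWS launched in late 2023. If you prefer the CLI, here is a minimal sketch; the cluster name and region are placeholders, and this assumes a recent AWS CLI that includes the list-insights command:

# List upgrade-readiness insights for the cluster
aws eks list-insights --cluster-name my-cluster --region eu-west-1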

Actions to take:

  • Update the API version of FlowSchema and PriorityLevelConfiguration.
    The deprecated flowcontrol.apiserver.k8s.io/v1beta2 API version of FlowSchema and PriorityLevelConfiguration is no longer served in Kubernetes v1.29. If you have manifests or client software that use the deprecated beta API group, you should change them before you upgrade to v1.29. To learn more, see the deprecated API migration guide (and the quick check below).
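A quick way to confirm nothing still relies on the old group version is to request it explicitly. This is a minimal check with standard kubectl; on a v1.29 API server both calls should fail with “the server could not find the requested resource”, confirming v1beta2 is no longer served:

# Request the objects via the deprecated group version explicitly
kubectl get flowschemas.v1beta2.flowcontrol.apiserver.k8s.io
kubectl get prioritylevelconfigurations.v1beta2.flowcontrol.apiserver.k8s.io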

Kubernetes 1.29 - changes in this release

As always, if you want to find a complete list of changes and updates in Kubernetes version 1.29, check out the Kubernetes changelog. Below you can find a couple of enhancements that are worth mentioning in the v1.29 release.

  • Advanced pod management reached beta status in Kubernetes v1.29: #753 has graduated to beta and the SidecarContainers feature gate is enabled by default. This feature allows an init container to keep running until the pod terminates, effectively turning it into a sidecar container. It solves the problem of managing long-running auxiliary processes that need to run alongside the main containers in a pod. For example, if a pod has a main application container and a logging container that collects and forwards logs from the main application, the logging container can be defined as a sidecar. The logging container then keeps running and collecting logs for as long as the main application container runs, providing continuous log collection and forwarding (see the sketch after this list).
  • #2799 has graduated to beta and the LegacyServiceAccountTokenCleanUp feature gate is enabled by default. This feature allows automatic cleanup of unused legacy service account tokens that are secret-based. Specifically, it labels legacy auto-generated secret-based tokens as invalid if they have not been used for a long time (1 year by default), and automatically removes them if use is not attempted for a long time after being marked as invalid (1 additional year by default). To check whether legacy token tracking is enabled in your cluster, run the following command:
kubectl get cm kube-apiserver-legacy-service-account-token-tracking -n kube-system
  • #3299 has graduated to stable and the KMSv2 and KMSv2KDF feature gates are enabled by default in Kubernetes v1.29. However, it’s important to note that KMSv2 is currently not supported in Amazon EKS.
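To make the sidecar behaviour concrete, below is a hypothetical pod spec (all names are made up for illustration) applied via a heredoc. The restartPolicy: Always on the init container is what the SidecarContainers feature gate enables; without it, the init container would have to exit before the main container could start:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: sidecar-demo
spec:
  initContainers:
    - name: log-forwarder
      image: busybox:1.36
      # This field turns the init container into a sidecar that keeps
      # running for the lifetime of the pod instead of blocking startup.
      restartPolicy: Always
      command: ["sh", "-c", "tail -F /var/log/app.log"]
      volumeMounts:
        - name: app-logs
          mountPath: /var/log
  containers:
    - name: app
      image: busybox:1.36
      command: ["sh", "-c", "while true; do date >> /var/log/app.log; sleep 5; done"]
      volumeMounts:
        - name: app-logs
          mountPath: /var/log
  volumes:
    - name: app-logs
      emptyDir: {}
EOF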

More interesting items can be found via the links provided below.

Removed API versions and features

Nowadays, it’s not uncommon for Kubernetes Application Programming Interface (API) versions and features to be deprecated or removed when a new version of Kubernetes is released. When this happens, it’s imperative that you update all manifests and controllers to the newer versions and features listed in this section before upgrading to v1.29. Below are the top call-outs in the v1.29 release. For a complete list, refer to all Deprecations and removals in Kubernetes v1.29.

Deprecations

  • The .status.kubeProxyVersion field for Node objects is now deprecated, and the Kubernetes project is proposing to remove it in a future release. The field is not accurate: it has historically been managed by the kubelet, which does not actually know the kube-proxy version, or even whether kube-proxy is running. If you’ve been using this field in client software, stop; the information isn’t reliable, and the field is now deprecated.
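The field surfaces under each node’s nodeInfo, so you can see what clients would currently read. A quick check with standard kubectl, assuming nothing beyond cluster access:

# Print the deprecated kubeProxyVersion reported by each node
kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.nodeInfo.kubeProxyVersion}{"\n"}{end}'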

Upgrade your EKS cluster with Terraform

I used the following providers for the upgrade:

This time the upgrade of the control plane took around 8 minutes. I would say this is super fast, and I experienced zero issues afterwards. I don’t think I even noticed any unavailability of the API server itself, which did happen during previous upgrades. AWS is doing a great job at reducing the time it takes to upgrade the EKS control plane.

I immediately upgraded the worker nodes, which took around 15 minutes to join the upgraded EKS cluster. This time depends on how many worker nodes you have and how many pods need to be drained from the old nodes.

In general, the full upgrade process (control plane + worker nodes) took around 22 minutes. A really good time, I would say.

I personally use Terraform to deploy and upgrade my EKS clusters. Here is an example of the EKS cluster resource.

resource "aws_eks_cluster" "cluster" {
enabled_cluster_log_types = ["audit"]
name = local.name_prefix
role_arn = aws_iam_role.cluster.arn
version = "1.29"

vpc_config {
subnet_ids = flatten([module.vpc.public_subnets, module.vpc.private_subnets])
security_group_ids = []
endpoint_private_access = "true"
endpoint_public_access = "true"
}

encryption_config {
resources = ["secrets"]
provider {
key_arn = module.kms-eks.key_arn
}
}

tags = var.tags
}
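With the version bumped to "1.29" in the resource above, the upgrade itself is an ordinary Terraform run. A minimal sketch of the flow, using only standard Terraform CLI commands and nothing project-specific:

terraform init -upgrade   # refresh provider plugins
terraform plan            # should show an in-place update of the cluster version to "1.29"
terraform apply           # in my case the control-plane upgrade completed in ~8 minutes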

For the worker nodes, I used the official AMI with ID ami-0317d520659f7a502. I didn’t notice any issues after rotating all the nodes. Nodes are running the following version: v1.29.0-eks-5e0fdde.
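To confirm the rotated nodes joined with the expected kubelet build, a standard kubectl check is enough:

# The VERSION column should show v1.29.0-eks-5e0fdde on every node
kubectl get nodes -o wide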

The templates I use for creating EKS clusters with Terraform can be found in my GitHub repository: https://github.com/marcincuber/eks

Upgrading Managed EKS Add-ons

In this case the change is trivial and works fine: simply update the version of each add-on. In my case, from this release I utilise kube-proxy, CoreDNS and the EBS CSI driver.

Terraform resources for add-ons

resource "aws_eks_addon" "kube_proxy" {
cluster_name = aws_eks_cluster.cluster[0].name
addon_name = "kube-proxy"
addon_version = "v1.29.0-eksbuild.2"
resolve_conflicts = "OVERWRITE"
}
resource "aws_eks_addon" "core_dns" {
cluster_name = aws_eks_cluster.cluster[0].name
addon_name = "coredns"
addon_version = "v1.11.1-eksbuild.4"
resolve_conflicts = "OVERWRITE"
}
resource "aws_eks_addon" "aws_ebs_csi_driver" {
cluster_name = aws_eks_cluster.cluster[0].name
addon_name = "aws-ebs-csi-driver"
addon_version = "v1.26.1-eksbuild.1"
resolve_conflicts = "OVERWRITE"
}
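Before pinning the versions above, you can ask EKS which add-on builds exist for 1.29, and afterwards confirm what is actually running. A minimal sketch using the standard AWS CLI; the cluster name is a placeholder:

# Show the add-on builds available for Kubernetes 1.29
aws eks describe-addon-versions --addon-name kube-proxy \
  --kubernetes-version 1.29 \
  --query 'addons[0].addonVersions[].addonVersion'

# Confirm the version and status of the add-on after the update
aws eks describe-addon --cluster-name my-cluster --addon-name kube-proxy \
  --query 'addon.{version: addonVersion, status: status}'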

After upgrading the EKS control plane

Remember to upgrade the core Deployments and DaemonSets recommended for EKS 1.29.

  1. CoreDNS — v1.11.1-eksbuild.4
  2. Kube-proxy — 1.29.0-eksbuild.1
  3. VPC CNI — 1.16.0-eksbuild.1
  4. aws-ebs-csi-driver — v1.26.1-eksbuild.1

The above is just a recommendation from AWS. You should look at upgrading all your components to match Kubernetes version 1.29. These could include:

  1. load balancer controller
  2. calico-node
  3. cluster-autoscaler or Karpenter
  4. external secrets operator
  5. kube-state-metrics
  6. metrics-server
  7. csi-secrets-store
  8. calico-typha and calico-typha-horizontal-autoscaler
  9. reloader
  10. keda (event driven autoscaler)
  11. nvidia device plugin (used while utilising GPUs)
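A simple way to inventory what is actually running, and therefore what needs a compatibility check against 1.29, is to list the unique container images across all namespaces:

# Unique container images currently deployed in the cluster
kubectl get pods -A -o jsonpath="{.items[*].spec.containers[*].image}" | tr ' ' '\n' | sort -u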

Summary and Conclusions

An even quicker upgrade of the EKS cluster than ever before: the control-plane upgrade completed in 8 minutes. I use Terraform to run my cluster and node upgrades, so the pipeline made my life super easy.

Yet again, no significant issues. I hope you will have the same easy job. All workloads worked just fine, and I didn’t really have to modify anything.

If you are interested in the entire Terraform setup for EKS, you can find it on my GitHub -> https://github.com/marcincuber/eks

I hope this article nicely aggregates all the important information about upgrading EKS to version 1.29 and helps people speed up the task.

Long story short: you hate and/or love Kubernetes, but you still use it ;).

Please note that my notes rely on official AWS and Kubernetes sources.

Enjoy Kubernetes!!!

Sponsor Me

As with any other story I have written on Medium, I performed the tasks documented here. This is my own research, including the issues I encountered.

Thanks for reading everybody. Marcin Cuber


Written by Marcin Cuber

Principal Cloud Engineer, AWS Community Builder and Solutions Architect
