Amazon EKS Upgrade Journey From 1.29 to 1.30: say hello to cute “Uwubernetes”

Marcin Cuber
8 min read · May 24, 2024


We are now welcoming the “Uwubernetes” release. This article covers the process and considerations for upgrading the EKS control plane to version 1.30.


Welcome to AWS EKS 1.30. This is the Kubernetes v1.30 release which makes your clusters cuter!

Kubernetes is built and released by thousands of people from all over the world and all walks of life. Most contributors are not being paid to do this; we build it for fun, to solve a problem, to learn something, or for the simple love of the community. Many of us found our homes, our friends, and our careers here. The Release Team is honoured to be a part of the continued growth of Kubernetes.

Again, I would like to thank the community for their great support and the implementations they offer. In many companies, specialists are not valued and are simply ignored, which isn’t right. Hence, I would like to point this out in public.

Kubernetes v1.30: Uwubernetes, the cutest release to date. The name is a portmanteau of “kubernetes” and “UwU,” an emoticon used to indicate happiness or cuteness.

Previous Stories and Upgrades

If you are looking at

  • upgrading EKS from 1.28 to 1.29 check out this story
  • upgrading EKS from 1.27 to 1.28 check out this story
  • upgrading EKS from 1.26 to 1.27 check out this story
  • upgrading EKS from 1.25 to 1.26 check out this story
  • upgrading EKS from 1.24 to 1.25 check out this story
  • upgrading EKS from 1.23 to 1.24 check out this story

Prerequisites to upgrade

Before upgrading to Kubernetes v1.30 in Amazon EKS, there are some important tasks you need to complete. These can be easily viewed in the EKS console under “Upgrade Insights”. In my case, there was nothing I needed to complete, as I always like to keep my clusters up to date.

Kubernetes 1.30: changes in this release

As always, if you want to find a complete list of changes and updates in Kubernetes version 1.30, check out the Kubernetes changelog. Below you can find a couple of enhancements that are worth mentioning in the v1.30 release. For a complete list go here.

Amazon EKS 1.30 specific changes

  • Starting with EKS 1.30, any newly created managed node groups will automatically default to using Amazon Linux 2023 (AL2023) as the node operating system. If you are interested, you can read more in Comparing AL2 and AL2023 in the Amazon Linux User Guide.
  • topology.k8s.aws/zone-id is the new label added to worker nodes. You can use Availability Zone IDs (AZ IDs) to determine the location of resources in one account relative to the resources in another account.
  • Amazon EKS no longer includes the default annotation on the gp2 StorageClass resource applied to newly created clusters. This has no impact if you are referencing this storage class by name. You must take action if you were relying on having a default StorageClass in the cluster. You should reference the StorageClass by the name gp2. Alternatively, you can deploy the Amazon EBS recommended default storage class by setting the defaultStorageClass.enabled parameter to true when installing v1.31.0 or later of the aws-ebs-csi-driver add-on. Ideally, just use gp3 volumes, make gp3 the default, and never use gp2 again.
  • The minimum required IAM policy for the Amazon EKS cluster IAM role has changed. The action ec2:DescribeAvailabilityZones is required. For more information, see Amazon EKS cluster IAM role.
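To illustrate the StorageClass point above, here is a minimal Terraform sketch of installing the aws-ebs-csi-driver add-on with the recommended default StorageClass enabled via configuration_values. The resource name and cluster reference are illustrative assumptions, not the exact setup from my repository:

```hcl
# Sketch: install the EBS CSI driver add-on and let it deploy the
# recommended default StorageClass (requires aws-ebs-csi-driver v1.31.0+).
resource "aws_eks_addon" "ebs_csi_default_sc" {
  cluster_name  = aws_eks_cluster.cluster.name # assumes an existing cluster resource
  addon_name    = "aws-ebs-csi-driver"
  addon_version = "v1.31.1-eksbuild.1"

  # Restores a default StorageClass in the cluster now that EKS no longer
  # annotates gp2 as default on new clusters.
  configuration_values = jsonencode({
    defaultStorageClass = {
      enabled = true
    }
  })
}
```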

Improvements that graduated to stable in Kubernetes v1.30

  • Robust VolumeManager reconstruction after kubelet restart. This is a volume manager refactoring that allows the kubelet to populate additional information about how existing volumes are mounted during kubelet startup. It does not bring any changes for users or cluster administrators. It uses the feature process and the NewVolumeManagerReconstruction feature gate to be able to fall back to the previous behaviour in case something goes wrong.
  • Prevent unauthorised volume mode conversion during volume restore. In this release the control plane always prevents unauthorised changes to volume modes when restoring a snapshot into a PersistentVolume. As a cluster administrator, you’ll need to grant permissions to the appropriate identity principals (for example, ServiceAccounts representing a storage integration) if you need to allow that kind of change at restore time.
  • Pod Scheduling Readiness. This now stable feature lets Kubernetes avoid trying to schedule a Pod that has been defined when the cluster doesn’t yet have the resources provisioned to allow actually binding it to a node. It also brings in custom control over whether a Pod can be allowed to schedule, and lets you implement quota mechanisms, security controls, and more. In Kubernetes v1.30, by specifying a Pod’s .spec.schedulingGates, you can control when a Pod is ready to be considered for scheduling.
  • Min domains in PodTopologySpread. The minDomains parameter for PodTopologySpread constraints graduates to stable this release, which allows you to define the minimum number of domains. This feature is designed to be used with Cluster Autoscaler. I personally use Karpenter, so it is probably not a game changer in my situation.
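To show what a scheduling gate looks like in practice, here is a sketch of a gated Pod expressed with the hashicorp/kubernetes provider’s kubernetes_manifest resource, staying in Terraform like the rest of this article. The gate name, Pod name, and image are my own hypothetical examples:

```hcl
# Sketch: a Pod held back from scheduling until its gate is removed.
# While spec.schedulingGates is non-empty, the scheduler ignores this Pod;
# an external controller is expected to remove the gate when ready.
resource "kubernetes_manifest" "gated_pod" {
  manifest = {
    apiVersion = "v1"
    kind       = "Pod"
    metadata = {
      name      = "gated-pod" # hypothetical name
      namespace = "default"
    }
    spec = {
      schedulingGates = [
        { name = "example.com/quota-check" } # hypothetical gate
      ]
      containers = [
        {
          name  = "app"
          image = "public.ecr.aws/docker/library/nginx:stable"
        }
      ]
    }
  }
}
```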

The full list of the 17 enhancements promoted to stable can be found in the Kubernetes v1.30 release notes.

Upgrade your EKS with Terraform

Just like with all my upgrades, I use Terraform as it is fast, efficient and simplifies my life. I used the following providers for the upgrade:
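As a minimal sketch of such a provider configuration (the exact version constraints below are my assumptions, not the pins from my repository):

```hcl
terraform {
  required_version = ">= 1.5.0" # assumption: any recent Terraform release

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = ">= 5.40" # assumption: a provider release that supports EKS 1.30
    }
  }
}
```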

This time the upgrade of the control plane took around ~8 minutes. I would say this is super fast, and I experienced zero issues afterwards. I don’t think I even noticed any unavailability of the API server itself, which did happen in previous upgrades. AWS is doing a great job at reducing the time it takes to upgrade the EKS control plane. It was almost identical to the EKS 1.29 upgrade, which took 8m24s, so the EKS 1.30 upgrade was 4 seconds slower, for reference.

I immediately upgraded the worker nodes, which took around ~14 minutes to join the upgraded EKS cluster. This time depends on how many worker nodes you have and how many pods need to be drained from the old nodes.

In general, the full upgrade process (control plane + worker nodes) took around ~22 minutes. A really good time, I would say.

I personally use Terraform to deploy and upgrade my EKS clusters. Here is an example of the EKS cluster resource.

resource "aws_eks_cluster" "cluster" {
  enabled_cluster_log_types = ["audit"]
  name                      = local.name_prefix
  role_arn                  = aws_iam_role.cluster.arn
  version                   = "1.30"

  vpc_config {
    subnet_ids              = flatten([module.vpc.public_subnets, module.vpc.private_subnets])
    security_group_ids      = []
    endpoint_private_access = true
    endpoint_public_access  = true
  }

  encryption_config {
    resources = ["secrets"]
    provider {
      key_arn = module.kms-eks.key_arn
    }
  }

  access_config {
    authentication_mode                         = "API_AND_CONFIG_MAP"
    bootstrap_cluster_creator_admin_permissions = false
  }

  tags = var.tags
}

For the worker nodes I used the official AMI with ID ami-0e6a4f108467d0c54. This AMI may be specific to the London region (eu-west-2). I didn’t notice any issues after rotating all nodes. Nodes are running the following version: v1.30.0-eks-036c24b.

So the initial EKS 1.30 release uses the first 1.30.0 build and not the latest 1.30.1.
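For reference, a managed node group pinned to 1.30 can be declared along these lines. This is a sketch only: the resource names, IAM role reference, and sizing are illustrative assumptions, not my actual configuration:

```hcl
resource "aws_eks_node_group" "workers" {
  cluster_name    = aws_eks_cluster.cluster.name
  node_group_name = "workers-1-30"            # illustrative name
  node_role_arn   = aws_iam_role.workers.arn  # assumes an existing node IAM role
  subnet_ids      = module.vpc.private_subnets
  version         = "1.30"

  # AL2023 is the default OS for new managed node groups on EKS 1.30.
  ami_type = "AL2023_x86_64_STANDARD"

  scaling_config {
    desired_size = 3
    min_size     = 3
    max_size     = 6
  }

  # Limit how many nodes are taken offline at once during a rolling upgrade.
  update_config {
    max_unavailable = 1
  }
}
```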

Templates I use for creating EKS clusters with Terraform can be found in my GitHub repository reachable under

Upgrading Managed EKS Add-ons

In this case the change is trivial and works fine: simply update the version of each add-on. In my case, from this release I utilise kube-proxy, CoreDNS and the EBS CSI driver.

Terraform resources for add-ons

resource "aws_eks_addon" "kube_proxy" {
  cluster_name      = aws_eks_cluster.cluster.name
  addon_name        = "kube-proxy"
  addon_version     = "v1.30.0-eksbuild.3"
  resolve_conflicts = "OVERWRITE"
}

resource "aws_eks_addon" "core_dns" {
  cluster_name      = aws_eks_cluster.cluster.name
  addon_name        = "coredns"
  addon_version     = "v1.11.1-eksbuild.9"
  resolve_conflicts = "OVERWRITE"
}

resource "aws_eks_addon" "aws_ebs_csi_driver" {
  cluster_name      = aws_eks_cluster.cluster.name
  addon_name        = "aws-ebs-csi-driver"
  addon_version     = "v1.31.1-eksbuild.1"
  resolve_conflicts = "OVERWRITE"
}

After upgrading EKS control-plane

Remember to upgrade core deployments and daemon sets that are recommended for EKS 1.30.

  1. CoreDNS — v1.11.1-eksbuild.9
  2. Kube-proxy — 1.30.0-eksbuild.3
  3. VPC CNI — 1.18.1-eksbuild.3
  4. aws-ebs-csi-driver — v1.31.1-eksbuild.1

The above is just a recommendation from AWS. You should look at upgrading all your components to match the 1.30 Kubernetes version. These could include:

  1. Load Balancer Controller
  2. calico-node
  3. Cluster Autoscaler or Karpenter
  4. External Secrets Operator
  5. Kube State Metrics
  6. Metrics Server
  7. csi-secrets-store
  8. calico-typha and calico-typha-horizontal-autoscaler
  9. Reloader
  10. Keda (event driven autoscaler)
  11. nvidia device plugin (used while utilising GPUs)

Final Result

> $ kubectl version
Client Version: v1.30.1
Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
Server Version: v1.30.0-eks-036c24b

I like to stay up to date with my CLIs, so make sure you upgrade your kubectl to match your Kubernetes cluster version.

Summary and Conclusions

An even quicker upgrade of the EKS cluster than ever before. In 8 minutes the control plane upgrade was completed. I use Terraform to run my cluster and node upgrades, so the GitHub Actions pipeline makes my life super easy.

Yet again, no significant issues. I hope you will have the same easy experience. All workloads worked just fine, and I didn’t really have to modify anything.

If you are interested in the entire Terraform setup for EKS, you can find it on my GitHub ->

I hope this article nicely aggregates all the important information around upgrading EKS to version 1.30 and helps people speed up their task.

Long story short, you hate and/or love Kubernetes, but you still use it ;).

Please note that my notes rely on official AWS and Kubernetes sources.

Enjoy Kubernetes!!!

Sponsor Me

Like with any other story written by me on Medium, I performed the tasks documented here; this is my own research and the issues I encountered.

Thanks for reading everybody.

Marcin Cuber

Principal Cloud Engineer, AWS Community Builder and Solutions Architect