Amazon EKS Upgrade Journey From 1.21 to 1.22

Marcin Cuber
7 min read · Apr 5, 2022


We are “Reaching New Peaks”. Process and considerations while upgrading EKS control-plane to version 1.22.

Overview

EKS 1.22: Reaching New Peaks

Welcome to the AWS EKS 1.22 upgrade guide. In version 1.20, Kubernetes deprecated Dockershim, the component that allowed Kubernetes to use Docker as a container runtime. Docker is still fully functional in 1.22; it is scheduled for removal in 1.23. The initial plan was to remove it in version 1.22 of EKS, but that didn't happen.

EKS 1.22 is a long-awaited release that people were impatiently waiting for; you can see this in the GitHub issue itself. Nevertheless, we are here and we are “Reaching New Peaks.” The theme for the release is described as follows: “in spite of the pandemic and burnout, Kubernetes 1.22 had the highest number of enhancements in any release.” This release brings a significant number of API changes, a Kubernetes release cadence change, and many other updates, which we will cover in this post.

With AWS's latest release of EKS 1.22, we should make use of containerd as the runtime. The latest Amazon Linux 2 EKS optimized AMIs come with containerd support built in. The runtime can be selected for EKS nodes with the flag --container-runtime containerd, which is passed to your nodes through EC2 user data.
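For example, a minimal user data sketch for a self-managed node, assuming the Amazon Linux 2 EKS optimized AMI (the cluster name staging is a placeholder):

#!/bin/bash
# Join the node to the cluster and select containerd instead of Docker.
/etc/eks/bootstrap.sh staging --container-runtime containerd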

If you are looking at

  • upgrading EKS from 1.20 to 1.21 then check the previous story
  • upgrading EKS from 1.19 to 1.20 check out this story
  • upgrading EKS from 1.18 to 1.19 check out this story
  • upgrading EKS from 1.17 to 1.18 check out this story
  • upgrading EKS from 1.16 to 1.17 check out this story
  • upgrading EKS from 1.15 to 1.16 then check out this story

Kubernetes 1.22 features and removals

API removals for Kubernetes v1.22

The v1.22 release stopped serving the API versions listed below. These are all beta APIs that were previously deprecated in favour of newer, more stable API versions. I have listed the stable API version next to each deprecated one.

  • Beta versions of the ValidatingWebhookConfiguration and MutatingWebhookConfiguration API (the admissionregistration.k8s.io/v1beta1 API versions). GA version that must be used: admissionregistration.k8s.io/v1
  • The beta CustomResourceDefinition API (apiextensions.k8s.io/v1beta1). GA version that must be used: apiextensions.k8s.io/v1
  • The beta APIService API (apiregistration.k8s.io/v1beta1). GA version that must be used: apiregistration.k8s.io/v1
  • The beta TokenReview API (authentication.k8s.io/v1beta1). GA version that must be used: authentication.k8s.io/v1
  • Beta API versions of SubjectAccessReview, LocalSubjectAccessReview, SelfSubjectAccessReview (API versions from authorization.k8s.io/v1beta1). GA version that must be used: authorization.k8s.io/v1
  • The beta CertificateSigningRequest API (certificates.k8s.io/v1beta1). GA version that must be used: certificates.k8s.io/v1
  • The beta Lease API (coordination.k8s.io/v1beta1). GA version that must be used: coordination.k8s.io/v1
  • All beta Ingress APIs (the extensions/v1beta1 and networking.k8s.io/v1beta1 API versions). GA version that must be used: networking.k8s.io/v1
  • Beta versions of the RBAC APIs (rbac.authorization.k8s.io/v1beta1). GA version that must be used: rbac.authorization.k8s.io/v1
  • The beta PriorityClass API (scheduling.k8s.io/v1beta1). GA version that must be used: scheduling.k8s.io/v1
  • The beta CSIDriver, CSINode, StorageClass and VolumeAttachment APIs (storage.k8s.io/v1beta1). GA version that must be used: storage.k8s.io/v1

I believe this is the full list of changes you need to handle before upgrading to EKS 1.22. If you are looking for more information, the Kubernetes documentation covers these API removals for v1.22 and explains how each of these APIs changes between beta and stable.
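To illustrate the kind of change involved, here is a hypothetical Ingress migrated from the beta API to networking.k8s.io/v1. Note the new required pathType field and the restructured backend (example and example-svc are placeholder names).

Before (networking.k8s.io/v1beta1):

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: example
spec:
  rules:
    - host: example.com
      http:
        paths:
          - path: /
            backend:
              serviceName: example-svc
              servicePort: 80

After (networking.k8s.io/v1):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example
spec:
  rules:
    - host: example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: example-svc
                port:
                  number: 80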

kubectl convert

There is a kubectl plugin that provides the kubectl convert subcommand. It's an official plugin that you can download as part of Kubernetes; see Download Kubernetes for more details.
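For instance, on Linux amd64 you can install it next to kubectl (a sketch; adjust the OS and architecture for your platform):

# Download the kubectl-convert binary for the latest stable release.
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl-convert"
# Make it executable and move it onto your PATH.
chmod +x kubectl-convert
sudo mv kubectl-convert /usr/local/bin/kubectl-convert
# Verify the plugin is picked up by kubectl.
kubectl convert --help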

You can use kubectl convert to update manifest files to a different API version. For example, if you have a manifest in source control that uses the beta Ingress API, you can check that definition out and run kubectl convert -f <manifest> --output-version <group>/<version>.

For example, to convert an older Ingress definition to networking.k8s.io/v1, you can run:

kubectl convert -f ./legacy-ingress.yaml --output-version networking.k8s.io/v1
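The converted manifest is printed to stdout, so you will typically redirect it into a new file and commit that back to source control:

kubectl convert -f ./legacy-ingress.yaml --output-version networking.k8s.io/v1 > ./ingress-v1.yaml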

This is yet another simplification of the upgrade path that works to our advantage.

Upgrade your EKS with terraform

This time the upgrade of the control plane took around 10 minutes and didn't cause any issues. I immediately upgraded the worker nodes, which took around 2 minutes to join the upgraded EKS cluster.

aws_eks_cluster.cluster: Still modifying... [id=staging, 9m30s elapsed]
aws_eks_cluster.cluster: Still modifying... [id=staging, 9m40s elapsed]
aws_eks_cluster.cluster: Still modifying... [id=staging, 9m50s elapsed]
aws_eks_cluster.cluster: Modifications complete after 9m51s [id=staging]

I personally use Terraform to deploy and upgrade my EKS clusters. Here is an example of the EKS cluster resource.

resource "aws_eks_cluster" "cluster" {
enabled_cluster_log_types = ["audit"]
name = local.name_prefix
role_arn = aws_iam_role.cluster.arn
version = "1.22"

vpc_config {
subnet_ids = flatten([module.vpc.public_subnets, module.vpc.private_subnets])
security_group_ids = []
endpoint_private_access = "true"
endpoint_public_access = "true"
}
encryption_config {
resources = ["secrets"]
provider {
key_arn = module.kms-eks.key_arn
}
}
tags = var.tags
}
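Bumping the version attribute is the only change required; the usual plan-and-apply workflow then performs the in-place control-plane upgrade:

# Review the planned version change before applying it.
terraform plan
terraform apply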

For the worker nodes I used the official AMI with id ami-0f614d5610a44f63f. I didn't notice any issues after rotating all nodes.
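If you manage your own node groups, a minimal launch template sketch could look like the following (the resource name, instance type and cluster name are placeholders; the AMI id is the one above):

resource "aws_launch_template" "worker" {
  name_prefix   = "eks-worker-"
  image_id      = "ami-0f614d5610a44f63f" # EKS optimized AMI for Kubernetes 1.22
  instance_type = "m5.large"              # placeholder instance type

  # Join the cluster and select containerd, as described earlier.
  user_data = base64encode(<<-EOT
    #!/bin/bash
    /etc/eks/bootstrap.sh staging --container-runtime containerd
  EOT
  )
}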

The templates I use for creating EKS clusters with Terraform can be found in my GitHub repository at https://github.com/marcincuber/eks/tree/master/terraform-aws

Upgrading Managed EKS Add-ons

In this case the change is trivial and works fine: simply update the version of each add-on. In my case, I utilise kube-proxy, CoreDNS and the EBS CSI driver.

Terraform resources for add-ons

resource "aws_eks_addon" "kube_proxy" {
cluster_name = aws_eks_cluster.cluster[0].name
addon_name = "kube-proxy"
addon_version = "v1.22.6-eksbuild.1"
resolve_conflicts = "OVERWRITE"
}
resource "aws_eks_addon" "core_dns" {
cluster_name = aws_eks_cluster.cluster[0].name
addon_name = "coredns"
addon_version = "v1.8.7-eksbuild.1"
resolve_conflicts = "OVERWRITE"
}
resource "aws_eks_addon" "aws_ebs_csi_driver" {
cluster_name = aws_eks_cluster.cluster[0].name
addon_name = "aws-ebs-csi-driver"
addon_version = "v1.5.2-eksbuild.1"
resolve_conflicts = "OVERWRITE"
}
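If you are unsure which add-on versions are available for a given cluster version, you can query the EKS API first; a sketch using the AWS CLI:

# List all CoreDNS add-on versions compatible with Kubernetes 1.22.
aws eks describe-addon-versions \
  --kubernetes-version 1.22 \
  --addon-name coredns \
  --query 'addons[].addonVersions[].addonVersion'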

After upgrading EKS control-plane

Remember to upgrade core deployments and daemon sets that are recommended for EKS 1.22.

  1. CoreDNS — 1.8.8 (I know! The EKS add-on only supports version 1.8.7 :))
  2. Kube-proxy — v1.22.6-eksbuild.1
  3. VPC CNI — 1.10.2-eksbuild.1
  4. aws-ebs-csi-driver- v1.5.2-eksbuild.1

The above is just a recommendation from AWS. You should look at upgrading all your components to match the 1.22 Kubernetes version. These could include the following (a quick verification sketch follows the list):

  1. calico-node
  2. cluster-autoscaler
  3. kube-state-metrics
  4. metrics-server
  5. csi-secrets-store
  6. calico-typha and calico-typha-horizontal-autoscaler
  7. reloader
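Once everything is rolled out, you can verify what is actually running; a quick sketch for the core components:

# Print the images (and therefore versions) currently deployed in kube-system.
kubectl -n kube-system get daemonset kube-proxy -o jsonpath='{.spec.template.spec.containers[0].image}{"\n"}'
kubectl -n kube-system get deployment coredns -o jsonpath='{.spec.template.spec.containers[0].image}{"\n"}'
kubectl -n kube-system get daemonset aws-node -o jsonpath='{.spec.template.spec.containers[0].image}{"\n"}'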

Looking ahead

There’s a setting that’s relevant if you use web-hook authentication checks. A future Kubernetes release will switch to sending TokenReview objects to web-hooks using the authentication.k8s.io/v1 API by default. At the moment, the default is to send authentication.k8s.io/v1beta1 TokenReviews to web-hooks, and that's still the default for Kubernetes v1.22. However, you can switch over to the stable API right now if you want: add --authentication-token-webhook-version=v1 to the command line options for the kube-apiserver, and check that web-hooks for authentication still work how you expected.

Once you’re happy it works OK, you can leave the --authentication-token-webhook-version=v1 option set across your control plane.

The v1.25 release that’s planned for next year will stop serving beta versions of several Kubernetes APIs that are stable right now and have been for some time. The same v1.25 release will remove PodSecurityPolicy, which is deprecated and won’t graduate to stable. See PodSecurityPolicy Deprecation: Past, Present, and Future for more information.

The official list of API removals planned for Kubernetes 1.25 is below; a sketch for checking your own cluster follows the list:

  • The beta CronJob API (batch/v1beta1)
  • The beta EndpointSlice API (discovery.k8s.io/v1beta1)
  • The beta PodDisruptionBudget API (policy/v1beta1)
  • The beta PodSecurityPolicy API (policy/v1beta1)
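If you want to see whether any objects are still served at those beta versions in your cluster, you can request them explicitly; a quick sketch:

# Each command asks the API server for the beta version directly.
kubectl get cronjobs.v1beta1.batch --all-namespaces
kubectl get poddisruptionbudgets.v1beta1.policy --all-namespaces
kubectl get podsecuritypolicies.v1beta1.policy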

Summary and Conclusions

A mega surprise with this release: the cluster upgrade only took around 10 minutes. That is a huge reduction over previous upgrades, roughly 30 minutes faster!

I have to say this was a pleasant and relatively fast upgrade. Yet again, no significant issues: all workloads worked just fine and I didn't really have to modify anything. I hope your upgrade goes just as smoothly.

If you are interested in the entire Terraform setup for EKS, you can find it on my GitHub -> https://github.com/marcincuber/eks/tree/master/terraform-aws

I hope this article nicely aggregates all the important information around upgrading EKS to version 1.22 and helps people speed up their task.

Long story short, whether you hate or love Kubernetes, you still use it ;).

Enjoy Kubernetes!!!

Sponsor Me

As with every story I write on Medium, I performed the tasks documented here. This is my own research, including the issues I encountered.

Thanks for reading everybody. Marcin Cuber
