Amazon EKS Upgrade Journey From 1.20 to 1.21

Marcin Cuber
Aug 2, 2021


Welcome to Containerd and goodbye to Dockerd. The process and considerations for upgrading the EKS control plane to version 1.21.

Overview

Welcome to the AWS EKS 1.21 upgrade guide. In version 1.20, Kubernetes deprecated Dockershim, the component that allowed Kubernetes to use Docker as a container runtime. Docker is still fully functional, but we should be migrating away from it as soon as possible. Essentially, Kubernetes users need to migrate to a different container runtime before support is removed in a future Kubernetes release.

With AWS's latest release of EKS 1.21 we can finally make use of containerd as a runtime. The latest Amazon Linux 2 EKS optimized AMIs come with containerd support built in. The default runtime for 1.21 is still Docker, and you can opt in to the containerd runtime by adding the --container-runtime containerd option to your user data.
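In its simplest form the opt-in is just an extra flag on the EKS bootstrap script. A minimal sketch, assuming a cluster named my-cluster (my full user data is shown later in this post):

/etc/eks/bootstrap.sh my-cluster --container-runtime containerd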

If you are looking at

  • upgrading EKS from 1.19 to 1.20 then check out the previous story
  • upgrading EKS from 1.18 to 1.19 then check out this story
  • upgrading EKS from 1.17 to 1.18 then check out this story
  • upgrading EKS from 1.16 to 1.17 then check out this story
  • upgrading EKS from 1.15 to 1.16 then check out this story

It is important to note that the Kubernetes project has recently switched to a cadence of three releases a year instead of four. This change, coupled with the continued maturation of the project, is going to lead to much larger, feature-packed releases. For that reason we are expecting EKS to support a lot more features and to deliver them much faster.

Kubernetes 1.21 features

Highlights

  • Dual-stack networking support (IPv4 and IPv6 addresses) on pods, services, and nodes has reached beta status. However, Amazon EKS and the Amazon VPC CNI do not currently support dual-stack networking. For now, the AWS team is focusing on enabling IPv6 support in a single-interface configuration so pods can take advantage of IPv6 addressing but still route to IPv4 endpoints.
  • The Amazon EKS Optimized Amazon Linux 2 AMI now contains a bootstrap flag to enable the containerd runtime as an alternative to Docker. This flag lets you prepare for the removal of Docker as a supported runtime in an upcoming Kubernetes release.
  • Managed node groups now support the Cluster Autoscaler priority expander.
  • Managed node groups in EKS 1.21 name their Auto Scaling groups in the form eks-<managed-node-group-name>-<uuid>. This means you can now take advantage of the Cluster Autoscaler priority expander with your managed node groups. A common use case is to prefer scaling spot node groups over on-demand ones; a sketch of the required configuration is shown just below.
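As a minimal sketch of that configuration, assuming a managed node group called spot-workers (the name and regex below are hypothetical, and the Cluster Autoscaler itself has to be started with --expander=priority for this ConfigMap to be used):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-autoscaler-priority-expander
  namespace: kube-system
data:
  priorities: |-
    10:
      - .*
    50:
      - eks-spot-workers-.*
EOF

Auto Scaling groups matching the higher-priority regex (the spot managed node group here) are scaled first, while everything else falls back to the catch-all entry.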

The following Kubernetes features are now supported in Kubernetes 1.21 Amazon EKS clusters:

  • Better and improved CronJobs! CronJobs (previously ScheduledJobs) have now graduated to stable status. This allows users to perform regularly scheduled actions such as backups and report generation. The TTLAfterFinished option was enabled in EKS 1.20 and the feature graduates to beta in 1.21 upstream. This means that if you have a CronJob that creates pods, you can set ttlSecondsAfterFinished on the Job template so that finished Jobs and their pods are deleted and you don’t have completed pods left in etcd. This is especially useful for AWS Fargate users, because each pod gets its own Fargate node and with this setting they are automatically cleaned up after the job completes. A minimal example follows this list.
  • Immutable Secrets and ConfigMaps have now graduated to stable status. A new immutable field has been added to these objects to reject changes, which protects the cluster from updates that would unintentionally break applications. Because these resources are immutable, the kubelet does not watch or poll for changes, reducing kube-apiserver load and improving scalability and performance. Immutability is also useful when rolling out new application versions that require ConfigMap changes: by using immutable ConfigMaps and Secrets you can guarantee they are not changed and that each deployment version has a config that matches, which can prevent outages and errors. It does, however, currently require manual tracking of which applications use which version of each immutable ConfigMap and Secret. A short example follows this list.
  • Graceful Node Shutdown has now graduated to beta status. This allows the kubelet to be aware of node shutdown and to gracefully terminate that node’s pods. Prior to this feature, when a node shut down, its pods did not follow the expected termination lifecycle, which introduced workload problems. Now the kubelet can detect imminent system shutdown through systemd and inform running pods so they terminate gracefully. This is helpful for node termination requests that originate outside of Kubernetes (e.g. someone running sudo poweroff) and may not go through the full cordon and drain lifecycle via a lifecycle hook. Managed node groups already have lifecycle hooks for node terminations that are requested through the Kubernetes API, such as Cluster Autoscaler scale-down events.
  • Pods with multiple containers can now use the kubectl.kubernetes.io/default-container annotation to have a container preselected for kubectl commands.
  • PodSecurityPolicy has been deprecated. PodSecurityPolicy will be functional for several more releases, following Kubernetes deprecation guidelines. To learn more, read PodSecurityPolicy Deprecation: Past, Present, and Future and the AWS blog. I am not sure how to feel about this but we shall wait and see what is going to replace PodSecurityPolicy.
  • TopologyKeys deprecation. TopologyKeys was an alpha feature in Kubernetes and was never available in EKS. It is being replaced with a new alpha feature called Topology Aware Hints in 1.21, which will be available in EKS in a future release once the feature graduates out of alpha status. TopologyKeys is a good example of why EKS does not enable alpha features in the API server: alpha features are not guaranteed to be part of the API long term and don’t have long deprecation cycles like stable or beta features.
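To illustrate the CronJob and TTL-after-finished combination mentioned above, here is a minimal sketch; the name, schedule and image are made up for illustration:

kubectl apply -f - <<'EOF'
apiVersion: batch/v1
kind: CronJob
metadata:
  name: nightly-report
spec:
  schedule: "0 2 * * *"
  jobTemplate:
    spec:
      ttlSecondsAfterFinished: 300   # finished Jobs and their pods are removed after 5 minutes
      template:
        spec:
          restartPolicy: OnFailure
          containers:
            - name: report
              image: busybox:1.33
              command: ["sh", "-c", "echo generating report"]
EOF

And a sketch of an immutable ConfigMap (again, the name and data are hypothetical):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config-v2
data:
  LOG_LEVEL: info
immutable: true   # any further change to data will be rejected by the API server
EOF

To change the configuration you create a new ConfigMap, for example app-config-v3, and point the Deployment at it.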

For more information about Kubernetes 1.21, see the official release announcement; for the complete Kubernetes 1.21 changelog, see https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.21.md.

Containerd and goodbye to Dockerd

Docker as an underlying runtime has been deprecated in favor of runtimes that use the Container Runtime Interface (CRI) created for Kubernetes. Docker-produced images will continue to work in your cluster with all runtimes, as they always have.

If you’re an end-user of Kubernetes, not a whole lot will be changing for you. This doesn’t mean the death of Docker, and it doesn’t mean you can’t, or shouldn’t, use Docker as a development tool anymore. Docker is still a useful tool for building containers, and the images that result from running docker build can still run in your Kubernetes cluster.

Enabling containerd with AWS EKS worker nodes

All that is required to switch from dockerd to containerd is the following bootstrap flag:

--container-runtime containerd

In my case, the user data used by my EKS worker nodes looks as follows:

UserData:
  Fn::Base64:
    !Sub |
      #!/bin/bash
      set -o xtrace
      exec > >(tee /var/log/user-data.log|logger -t user-data -s 2>/dev/console) 2>&1
      # conditionally restrict pod access to the EC2 instance metadata service
      if [[ "${RestrictMetadata}" == "no" ]];
      then
        yum install -y iptables-services
        iptables --insert FORWARD 1 --in-interface eni+ --destination 169.254.169.254/32 --jump DROP
        iptables-save | tee /etc/sysconfig/iptables
        systemctl enable --now iptables
      fi
      instance_id=$(curl -s http://169.254.169.254/latest/meta-data/instance-id)
      export AWS_DEFAULT_REGION=${AWS::Region}
      # detect whether the instance is spot or on-demand, then bootstrap with containerd enabled
      ilc=$(aws ec2 describe-instances --instance-ids $instance_id --query 'Reservations[0].Instances[0].InstanceLifecycle' --output text)
      if [ "$ilc" == "spot" ]; then
        /etc/eks/bootstrap.sh ${ClusterName} ${BootstrapArgumentsForSpot} --container-runtime containerd
      else
        /etc/eks/bootstrap.sh ${ClusterName} ${BootstrapArgumentsForOnDemand} --container-runtime containerd
      fi
      /opt/aws/bin/cfn-signal --exit-code $? \
        --stack ${AWS::StackName} \
        --resource NodeGroup \
        --region ${AWS::Region}

It is a relatively simple change and it works without any issues. Your worker nodes will get rotated and the new ones will start up with containerd enabled.
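Once the new nodes have joined, you can confirm which runtime each node reports; with containerd enabled, the CONTAINER-RUNTIME column should show containerd:// rather than docker://:

kubectl get nodes -o wide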

If you would like to see the full template, please head to my EKS GitHub repo.

Upgrade your EKS cluster with Terraform

This time the upgrade of the control plane took around 42 minutes and didn’t cause any issues. I noticed that the control plane wasn’t available immediately, so upgraded worker nodes took around 2 minutes to join the upgraded EKS cluster.
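A quick way to confirm that the control plane is running the new version once the upgrade finishes (the cluster name below is a placeholder):

aws eks describe-cluster --name my-cluster --query 'cluster.version' --output text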

I personally use Terraform to deploy and upgrade my EKS clusters. Here is an example of the EKS cluster resource.

resource "aws_eks_cluster" "cluster" {
enabled_cluster_log_types = ["audit"]
name = local.name_prefix
role_arn = aws_iam_role.cluster.arn
version = "1.21"

vpc_config {
subnet_ids = flatten([module.vpc.public_subnets, module.vpc.private_subnets])
security_group_ids = []
endpoint_private_access = "true"
endpoint_public_access = "true"
} encryption_config {
resources = ["secrets"]
provider {
key_arn = module.kms-eks.key_arn
}
} tags = var.tags
}

The templates I use for creating EKS clusters with Terraform can be found in my GitHub repository at https://github.com/marcincuber/eks/tree/master/terraform-aws

Upgrading Managed EKS Add-ons

In this case the change is trivial and works fine: simply update the version of the add-on. In my case, I only utilise the kube-proxy and CoreDNS add-ons.

Terraform resources for add-ons

resource "aws_eks_addon" "kube_proxy" {
cluster_name = aws_eks_cluster.cluster[0].name
addon_name = "kube-proxy"
addon_version = "v1.21.2-eksbuild.2"
resolve_conflicts = "OVERWRITE"
}
resource "aws_eks_addon" "core_dns" {
cluster_name = aws_eks_cluster.cluster[0].name
addon_name = "coredns"
addon_version = "v1.8.4-eksbuild.1"
resolve_conflicts = "OVERWRITE"
}
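If you don’t manage add-ons with Terraform, the equivalent updates can be applied through the AWS CLI. A sketch, assuming a cluster called my-cluster:

aws eks update-addon --cluster-name my-cluster --addon-name kube-proxy \
  --addon-version v1.21.2-eksbuild.2 --resolve-conflicts OVERWRITE

aws eks update-addon --cluster-name my-cluster --addon-name coredns \
  --addon-version v1.8.4-eksbuild.1 --resolve-conflicts OVERWRITE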

After upgrading the EKS control plane

Remember to upgrade core deployments and daemon sets that are recommended for EKS 1.21.

  1. CoreDNS — 1.8.4
  2. Kube-proxy — 1.21.2-eksbuild.2
  3. VPC CNI — 1.9.x-eksbuild.y
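A quick way to check which versions of these components are currently running (assuming the default resource names in kube-system):

kubectl -n kube-system get daemonset kube-proxy -o jsonpath='{.spec.template.spec.containers[0].image}'; echo
kubectl -n kube-system get deployment coredns -o jsonpath='{.spec.template.spec.containers[0].image}'; echo
kubectl -n kube-system get daemonset aws-node -o jsonpath='{.spec.template.spec.containers[0].image}'; echo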

These recommended versions are just AWS’s baseline. You should look at upgrading all your components to match the 1.21 Kubernetes version. They could include:

  1. calico-node
  2. cluster-autoscaler
  3. kube-state-metrics
  4. metrics-server
  5. csi-secrets-store
  6. calico-typha and calico-typha-horizontal-autoscaler
  7. reloader

AWS China issue

As of writing this article, the AWS China Beijing region doesn’t seem to work with new worker nodes that have the containerd runtime enabled. AWS documentation is again lacking here, and the China region caused me problems. Note that you can still upgrade to EKS 1.21, but enabling the containerd runtime will result in new nodes not joining the cluster.

Containerd system problem

[root@ip-10-70-26-148 ec2-user]# systemctl status containerd -l
● containerd.service - containerd container runtime
   Loaded: loaded (/usr/lib/systemd/system/containerd.service; enabled; vendor preset: disabled)
   Active: active (running) since Wed 2021-08-04 07:18:44 UTC; 6min ago
     Docs: https://containerd.io
 Main PID: 3417 (containerd)
    Tasks: 15
   Memory: 65.8M
   CGroup: /system.slice/containerd.service
           └─3417 /usr/bin/containerd
Aug 04 07:23:48 ip-10-70-26-148.cn-north-1.compute.internal containerd[3417]: time="2021-08-04T07:23:48.977351355Z" level=info msg="trying next host" error="failed to do request: Head \"https://k8s.gcr.io/v2/pause/manifests/3.2\ ": dial tcp 64.233.189.82:443: i/o timeout" host=k8s.gcr.io
Aug 04 07:23:48 ip-10-70-26-148.cn-north-1.compute.internal containerd[3417]: time="2021-08-04T07:23:48.978743674Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-z9tbm,Uid:38582567-a5a0-469f-9e7e-40de961862f1,Namespace:kube-system,Attempt:0,} failed, error" error="failed to get sandbox image \"k8s.gcr.io/pause:3.2\": failed to pull image \"k8s.gcr.io/pause:3.2\": failed to pull and unpack image \"k8s.gcr.io/pause:3.2\": failed to resolve reference \"k8s.gcr.io/pause:3.2\": failed to do request: Head \"https://k8s.gcr.io/v2/pause/manifests/3.2\ ": dial tcp 64.233.189.82:443: i/o timeout"
Aug 04 07:23:53 ip-10-70-26-148.cn-north-1.compute.internal containerd[3417]: time="2021-08-04T07:23:53.975036298Z" level=info msg="RunPodsandbox for &PodSandboxMetadata{Name:aws-node-5bghn,Uid:a598b5bf-4a79-42ec-99a6-f677a5ba488a,Namespace:kube-system,Attempt:0,}"
Aug 04 07:23:59 ip-10-70-26-148.cn-north-1.compute.internal containerd[3417]: time="2021-08-04T07:23:59.975112057Z" level=info msg="RunPodsandbox for &PodSandboxMetadata{Name:kube-proxy-z9tbm,Uid:38582567-a5a0-469f-9e7e-40de961862f1,Namespace:kube-system,Attempt:0,}"

It looks like the containerd service is not working as expected. The cause appears to be that the node is trying to pull the pause (sandbox) container image from Google’s container registry (k8s.gcr.io), and it is a well-known fact that Google services are generally blocked in China.
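You can confirm this on an affected worker node by checking which sandbox (pause) image containerd has been configured to use; the path below is the standard containerd configuration location on the EKS optimized AMI:

# on a worker node: show the sandbox (pause) image containerd will try to pull
grep sandbox_image /etc/containerd/config.toml

If the result points at k8s.gcr.io rather than a regional ECR repository, new nodes will fail exactly as shown in the log above.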

Simply avoid switching your runtime to containerd in AWS China. EKS is not ready for that switch in the Beijing region.

Summary and Conclusions

I have to say that this was a nice, pleasant and relatively fast upgrade. Yet again, there were no significant issues. Outside of the China region, I had absolutely zero issues making the switch from dockerd to containerd, and I hope you will have the same easy job to perform. All workloads worked just fine and I didn’t really have to modify anything.

If you are interested in the entire terraform setup for EKS, you can find it on my GitHub -> https://github.com/marcincuber/eks/tree/master/terraform-aws

I hope this article nicely aggregates all the important information around upgrading EKS to version 1.21 and helps people speed up the task.

Long story short, you hate and/or you love Kubernetes but you still use it ;).

Enjoy Kubernetes!!!

Sponsor Me

As with any other story I have written on Medium, I performed the tasks documented here. This is my own research, and these are the issues I encountered.

Thanks for reading everybody. Marcin Cuber
