Amazon EKS with OIDC provider, IAM Roles for Kubernetes service accounts

Marcin Cuber
9 min read · Sep 14, 2019


Find out how to configure an OIDC provider with EKS and how to create IAM roles for service accounts. Full source code is available on GitHub.

Overview

With the latest releases of EKS (1.13 and 1.14), the AWS Kubernetes control plane comes with support for IAM roles for service accounts. This feature allows us to associate an IAM role with a Kubernetes service account. We can now provision temporary credentials and grant AWS permissions to the containers in any pod that uses that service account. Furthermore, we no longer need to provide extended permissions to the worker node IAM role just so that pods on that node can call AWS APIs.

Before this feature was available, we had to make use of solutions such as kiam or kube2iam. You can read more about kube2iam in my other story.

In this story, you will find out how to configure EKS, an OpenID Connect (OIDC) provider, IAM roles and service accounts using Terraform. You will also find out about issues I encountered during my setup. For my implementation I am using EKS 1.14 (eks.1), Terraform 0.12.12, and Terraform AWS provider 2.28.1.

Advantages

The IAM roles for service accounts feature provides the following benefits:

  • Least privilege: By using the IAM roles for service accounts feature, you no longer need to provide extended permissions to the worker node IAM role so that pods on that node can call AWS APIs. You can scope IAM permissions to a service account, and only pods that use that service account have access to those permissions.
  • Credential isolation: A container can only retrieve credentials for the IAM role that is associated with the service account to which it belongs. A container never has access to credentials that are intended for another container that belongs to another pod.
  • Auditability: Access and event logging is available through CloudTrail to help ensure retrospective auditing.

IAM Roles for Service Accounts Technical Overview

AWS IAM supports federated identities using OIDC. This feature allows us to authenticate AWS API calls with supported identity providers and receive a valid OIDC JSON web token (JWT). You can pass this token to the AWS STS AssumeRoleWithWebIdentity API operation and receive temporary IAM role credentials. Such credentials can be used to communicate with services like Amazon S3 and DynamoDB.

Kubernetes has long used service accounts as its own internal identity system. Pods can authenticate with the Kubernetes API server using an auto-mounted token (which was a non-OIDC JWT) that only the Kubernetes API server could validate. These legacy service account tokens do not expire, and rotating the signing key is a difficult process. In Kubernetes version 1.12, support was added for a new ProjectedServiceAccountToken feature, which is an OIDC JSON web token that also contains the service account identity, and supports a configurable audience.
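To make this concrete, the payload of such a projected token is just base64-encoded JSON carrying the issuer, audience, and service-account subject. The sketch below fabricates a sample payload (the issuer URL and claim values are made-up placeholders, not a real token) and decodes it the same way you would inspect the middle segment of a real token:

```shell
# Fabricated payload of a projected service account token (placeholder values only)
PAYLOAD=$(printf '%s' '{"iss":"https://oidc.eks.eu-west-1.amazonaws.com/id/EXAMPLE","aud":"sts.amazonaws.com","sub":"system:serviceaccount:kube-system:aws-node"}' | base64 | tr -d '\n')

# A real JWT has three dot-separated segments (header.payload.signature);
# decoding the middle segment of the mounted token reveals claims like these:
echo "$PAYLOAD" | base64 -d
```

Against a live pod, the equivalent would be decoding the second segment of the token mounted at /var/run/secrets/eks.amazonaws.com/serviceaccount/token (note that real tokens use unpadded base64url, so a plain base64 -d may need padding added first).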

Amazon EKS now hosts a public OIDC discovery endpoint per cluster containing the signing keys for the ProjectedServiceAccountToken JSON web tokens, so external systems, like IAM, can validate and accept the OIDC tokens issued by Kubernetes.

Implementation and configuration

Using Terraform, I created the EKS cluster and OIDC provider:

### Backend and provider config for reference
terraform {
  required_version = "~> 0.12.12"

  backend "remote" {
    hostname = "app.terraform.io"
  }
}

provider "aws" {
  assume_role {
    role_arn     = var.assume_role_arn
    session_name = "EKS_deployment_session_${var.tags["Environment"]}"
  }

  version = "~> 2.28.1"
  region  = var.region
}

### EKS cluster config
resource "aws_eks_cluster" "cluster" {
  enabled_cluster_log_types = ["api", "audit", "authenticator", "controllerManager", "scheduler"]
  name                      = "${var.tags["ServiceType"]}-${var.tags["Environment"]}"
  role_arn                  = aws_iam_role.cluster.arn
  version                   = "1.14"

  vpc_config {
    subnet_ids              = flatten([module.vpc.public_subnets, module.vpc.private_subnets])
    security_group_ids      = [aws_security_group.cluster.id]
    endpoint_private_access = true
    endpoint_public_access  = true
  }
}

### OIDC config
resource "aws_iam_openid_connect_provider" "cluster" {
  client_id_list  = ["sts.amazonaws.com"]
  thumbprint_list = []
  url             = aws_eks_cluster.cluster.identity.0.oidc.0.issuer
}

You may be surprised, but the above two resources are sufficient to configure your EKS cluster and OIDC provider.

This is where I spent hours trying to solve an issue with OIDC, as my service accounts couldn’t get valid credentials. The reason it wasn’t working was the missing thumbprint for the root CA. If you create the OIDC provider for your EKS cluster using the AWS console, the thumbprint gets added automatically. However, when you create it using Terraform, the list is empty.

[UPDATED 28/08/20] There are three ways of obtaining the thumbprint and adding it to your aws_iam_openid_connect_provider resource. Solution number three is the only one fully supported by Terraform, achieved by using the TLS Terraform provider.

  1. Terraform external source of data
### External cli kubergrunt
data "external" "thumb" {
  program = ["kubergrunt", "eks", "oidc-thumbprint", "--issuer-url", aws_eks_cluster.cluster.identity.0.oidc.0.issuer]
}

### OIDC config
resource "aws_iam_openid_connect_provider" "cluster" {
  client_id_list  = ["sts.amazonaws.com"]
  thumbprint_list = [data.external.thumb.result.thumbprint]
  url             = aws_eks_cluster.cluster.identity.0.oidc.0.issuer
}

or without using the kubergrunt binary:

Create a file called oidc-thumbprint.sh with the following content:

#!/bin/bash
THUMBPRINT=$(echo | openssl s_client -servername oidc.eks.${1}.amazonaws.com -showcerts -connect oidc.eks.${1}.amazonaws.com:443 2>&- | tac | sed -n '/-----END CERTIFICATE-----/,/-----BEGIN CERTIFICATE-----/p; /-----BEGIN CERTIFICATE-----/q' | tac | openssl x509 -fingerprint -noout | sed 's/://g' | awk -F= '{print tolower($2)}')
THUMBPRINT_JSON="{\"thumbprint\": \"${THUMBPRINT}\"}"
echo ${THUMBPRINT_JSON}

Then reference the above script in a Terraform data resource, which will fetch the thumbprint:

data "aws_region" "current" {}

# Fetch OIDC provider thumbprint for root CA
data "external" "thumbprint" {
  program = ["./oidc-thumbprint.sh", data.aws_region.current.name]
}

resource "aws_iam_openid_connect_provider" "cluster" {
  client_id_list  = ["sts.amazonaws.com"]
  thumbprint_list = concat([data.external.thumbprint.result.thumbprint], var.oidc_thumbprint_list)
  url             = aws_eks_cluster.cluster.identity.0.oidc.0.issuer
}

2. Obtain it manually

I actually obtained it manually and then added it to my Terraform template. Here are the instructions on how to get it:

  • Before you can obtain the thumbprint for an OIDC IdP, you need to have the OpenSSL CLI installed.
  • Start with the OIDC IdP’s URL (for example, https://server.example.com); this URL is shown when you click on your EKS cluster in the AWS console. Append /.well-known/openid-configuration to form the URL of the IdP’s configuration document, such as: https://server.example.com/.well-known/openid-configuration
    Open this URL in a web browser, replacing server.example.com with your IdP’s server name.
  • In the response in your browser, find “jwks_uri”. Copy the fully qualified domain name of the URL. Do not include https:// or any path that comes after the top-level domain. What you need may look like:
    server.example.com/sadasdasd/keys
  • Use the OpenSSL command line tool to execute the following command, which uses the example output from above:
openssl s_client -servername server.example.com/sadasdasd/keys -showcerts -connect server.example.com:443/sadasdasd/keys

For some people the following command will also work:

openssl s_client -servername server.example.com -showcerts -connect server.example.com:443
  • In your command window, scroll up until you see a certificate. If you see more than one certificate, find the last certificate that is displayed (at the bottom of the command output). This will be the certificate of the root CA in the certificate authority chain. Copy the certificate (including the -----BEGIN CERTIFICATE----- and -----END CERTIFICATE----- lines) and paste it into a text file. Then save the file with the file name ca_root.crt. Then execute the following:
openssl x509 -in ca_root.crt -fingerprint -noout
# the above line will output something like:
# SHA1 Fingerprint=9E:99:A4:8A:99:60:B1:49:26:BB:7F:3B:02:E2:2D:A2:B0:AB:72:80
# Remove the colon characters (:) from this string to produce the final thumbprint, like this:
echo -n "9E:99:A4:8A:99:60:B1:49:26:BB:7F:3B:02:E2:2D:A2:B0:AB:72:80" | sed 's|:||g'
# Final output
9E99A48A9960B14926BB7F3B02E22DA2B0AB7280
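The fingerprint-and-strip steps can be rehearsed end to end without touching AWS. The sketch below generates a throwaway self-signed certificate (the CN is a placeholder) to stand in for the saved ca_root.crt, then applies the same openssl and sed commands as above:

```shell
# Generate a throwaway self-signed certificate as a stand-in for the root CA
# you would normally save as ca_root.crt (the CN is a placeholder)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj "/CN=server.example.com" \
  -keyout /tmp/key.pem -out /tmp/ca_root.crt 2>/dev/null

# Same commands as above: fingerprint the certificate, then strip the colons
openssl x509 -in /tmp/ca_root.crt -fingerprint -noout
THUMBPRINT=$(openssl x509 -in /tmp/ca_root.crt -fingerprint -noout | sed 's/://g' | awk -F= '{print $2}')
echo "$THUMBPRINT"
```

The final echo prints a 40-character hex SHA1 thumbprint, the same shape as the value you paste into thumbprint_list.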

Now that you have obtained the thumbprint, your OIDC resource will look like this:

### OIDC config
resource "aws_iam_openid_connect_provider" "cluster" {
  client_id_list  = ["sts.amazonaws.com"]
  thumbprint_list = ["9E99A48A9960B14926BB7F3B02E22DA2B0AB7280"]
  url             = aws_eks_cluster.cluster.identity.0.oidc.0.issuer
}

3. TLS provider [UPDATED 28/08/20]

With release 2.2.0 (https://github.com/hashicorp/terraform-provider-tls/releases/tag/v2.2.0) of the TLS Terraform provider, you can now obtain the thumbprint very easily. See below:

data "tls_certificate" "cluster" {
  url = aws_eks_cluster.cluster.identity.0.oidc.0.issuer
}

resource "aws_iam_openid_connect_provider" "cluster" {
  client_id_list  = ["sts.amazonaws.com"]
  thumbprint_list = concat([data.tls_certificate.cluster.certificates.0.sha1_fingerprint], var.oidc_thumbprint_list)
  url             = aws_eks_cluster.cluster.identity.0.oidc.0.issuer
}

Finishing this section, I would like to point out again that the OIDC thumbprint is an IAM requirement and not a feature of EKS. When setting up an IAM provider in the console, the root CA’s thumbprint is automatically selected. This fingerprint is a hash of the serving certificate authority. In Terraform you have to obtain it differently, and the three options are detailed above.

Restricting Access to Amazon EC2 Instance Profile Credentials

Before creating an actual example and updating our service account, we need to disable access to Amazon EC2 instance profile credentials. By default, containers running on your worker nodes are not prevented from accessing the credentials supplied to the worker node’s instance profile through the Amazon EC2 instance metadata server.

To prevent containers in pods from accessing the credential information supplied to the worker node instance profile, run the following iptables commands on your worker nodes (as root), or include them in your instance bootstrap user data script.

# commands block ALL containers from using the instance profile credentials
yum install -y iptables-services
iptables --insert FORWARD 1 --in-interface eni+ --destination 169.254.169.254/32 --jump DROP
iptables-save | tee /etc/sysconfig/iptables
systemctl enable --now iptables

Configure IAM Role and use it for Amazon VPC CNI

The Amazon VPC CNI plugin for Kubernetes is the networking plugin for pod networking in Amazon EKS clusters. The CNI plugin is responsible for allocating VPC IP addresses to Kubernetes nodes and configuring the necessary networking for pods on each node. The plugin requires IAM permissions, provided by the AWS managed policy AmazonEKS_CNI_Policy, to make calls to AWS APIs on your behalf. By default, this policy is attached to your worker node IAM role. Again, in this case I am using Terraform to create the IAM role.

  1. IAM Assume Role Policy

Create oidc_assume_role_policy.json file with the following content:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "${OIDC_ARN}"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "${OIDC_URL}:sub": "system:serviceaccount:${NAMESPACE}:${SA_NAME}"
        }
      }
    }
  ]
}

This is a generic assume role policy that I use for multiple IAM roles.

2. IAM Role configuration

Using Terraform, we are now going to create an IAM role with the above assume role policy and also attach the managed policy called AmazonEKS_CNI_Policy.

resource "aws_iam_role" "aws_node" {
  name = "${var.tags["ServiceType"]}-${var.tags["Environment"]}-aws-node"

  assume_role_policy = templatefile("oidc_assume_role_policy.json", {
    OIDC_ARN  = aws_iam_openid_connect_provider.cluster.arn,
    OIDC_URL  = replace(aws_iam_openid_connect_provider.cluster.url, "https://", ""),
    NAMESPACE = "kube-system",
    SA_NAME   = "aws-node"
  })

  tags = merge(
    var.tags,
    {
      "ServiceAccountName"      = "aws-node"
      "ServiceAccountNameSpace" = "kube-system"
    }
  )

  depends_on = [aws_iam_openid_connect_provider.cluster]
}

resource "aws_iam_role_policy_attachment" "aws_node" {
  role       = aws_iam_role.aws_node.name
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy"
  depends_on = [aws_iam_role.aws_node]
}

In the above template, you can see that I created a new IAM role (assume it is named eks-dev-aws-node) which is going to be attached to a service account called aws-node that exists in the kube-system namespace.

3. Update the VPC CNI service account: aws-node

Currently, we have a service account which doesn’t use IAM roles:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: aws-node
  namespace: kube-system

To use the newly created IAM role, we create a new file (sa_aws_node.yaml) with the following content:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: aws-node
  namespace: kube-system
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::0123456789:role/eks-dev-aws-node

Then you simply update your service account by running:

kubectl apply -f sa_aws_node.yaml
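As an alternative to the YAML file above, the annotation can also be managed from Terraform via the Kubernetes provider. This is only a sketch: it assumes a kubernetes provider is already configured against this cluster, and because aws-node already exists in kube-system, you would need to import it into state before Terraform can manage it.

```hcl
# Sketch only: assumes a kubernetes provider configured against this cluster.
# The existing service account must first be imported, e.g.:
#   terraform import kubernetes_service_account.aws_node kube-system/aws-node
resource "kubernetes_service_account" "aws_node" {
  metadata {
    name      = "aws-node"
    namespace = "kube-system"

    annotations = {
      "eks.amazonaws.com/role-arn" = aws_iam_role.aws_node.arn
    }
  }
}
```

This keeps the role ARN reference in the same Terraform state as the IAM role itself, at the cost of Terraform managing a resource that EKS created.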

Trigger a rollout of the aws-node daemonset to apply the credential environment variables. The mutating webhook does not apply them to pods that are already running.

kubectl rollout restart -n kube-system daemonset.apps/aws-node

Watch the rollout, and wait until the DESIRED count of the daemonset matches the UP-TO-DATE count.

kubectl get -n kube-system daemonset.apps/aws-node --watch

Once all pods are rotated, describe one of the pods and verify that the AWS_WEB_IDENTITY_TOKEN_FILE and AWS_ROLE_ARN environment variables exist.

kubectl exec -n kube-system aws-node-9rgzw env | grep AWS

# Expected output:
AWS_WEB_IDENTITY_TOKEN_FILE=/var/run/secrets/eks.amazonaws.com/serviceaccount/token
AWS_VPC_K8S_CNI_LOGLEVEL=DEBUG
AWS_ROLE_ARN=arn:aws:iam::0123456789:role/eks-dev-aws-node

Now that your service account for the AWS VPC CNI is using an IAM role, you can remove the AmazonEKS_CNI_Policy policy from your worker node IAM role.

Conclusion

IAM roles for service accounts is a relatively new feature, and many Kubernetes addons may not support it yet (info accurate as of 23/10/2019). However, such support will likely be added quickly and released for public use.

Currently, the following addons support IAM roles for service accounts: external-dns, aws-vpc-cni, alb-ingress-controller and cluster-autoscaler.

From my initial experience, it wasn’t a quick task to get working. I hope that putting this guide together will help someone else save some time.

Remember that for this feature to work correctly, you’ll need to use an AWS SDK version that supports IAM roles for service accounts; the minimum supported versions are listed in the AWS documentation.

Stack Overflow issues that have been resolved using this story:

  1. https://stackoverflow.com/questions/58507993/eks-iam-roles-for-services-account-not-working/58519269#58519269

In collaboration with New Relic, a technical blog on EKS featuring my work can be found at https://blog.newrelic.com/product-news/news-uk-content-capabilities-amazon-eks-new-relic/

Sponsor Me

As with every other story I write on Medium, I performed the tasks documented here myself. This is my own research, including the issues I encountered.

Thanks for reading everybody. Marcin Cuber


Marcin Cuber

Principal Cloud Engineer, AWS Community Builder and Solutions Architect