Amazon EKS, IAM roles and kube2iam

Implementation details for kube2iam on Amazon EKS.


Cluster

EKS is a managed Kubernetes solution from Amazon that makes it easy to deploy, manage, and scale containerised applications. EKS is essentially a control plane running across multiple AWS availability zones to eliminate a single point of failure.

kube2iam provides IAM credentials to containers running inside a Kubernetes cluster, based on annotations.

Currently my EKS cluster runs kube2iam, Calico and aws-vpc-cni. I will share the kube2iam setup, which took me a while to get working correctly because of how it interacts with those other components.

Kube2iam

In AWS, service-level isolation is done using IAM roles. IAM roles are either associated with your EC2 instance or can be assumed by another IAM user or IAM role, as long as the trust relationship is correctly configured.

In Kubernetes we are dealing with containers, and in EKS all containers have the same permissions as the worker nodes they run on. In many cases this is not desirable, since it allows containers to perform far more actions than they need, which is not acceptable from a security perspective.

Kube2iam in this case becomes the solution: it redirects traffic destined for the EC2 metadata API from containers to a kube2iam container running on each instance, which calls the AWS API to retrieve temporary credentials for the annotated role and returns them to the caller. All other calls are proxied to the EC2 metadata API unchanged.
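
To make the redirection concrete: with the --iptables=true flag (used in the DaemonSet below), kube2iam installs a NAT rule on each node whose effect is roughly the sketch below. You do not add this by hand; it only illustrates what the flag does, and HOST_IP here stands for the node's IP.

# Sketch of the rule kube2iam installs with --iptables=true:
# pod traffic arriving on eni+ interfaces destined for the EC2
# metadata API is DNAT'ed to the kube2iam pod on port 8181.
iptables -t nat -A PREROUTING \
  -d 169.254.169.254/32 -p tcp --dport 80 \
  -i eni+ \
  -j DNAT --to-destination ${HOST_IP}:8181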

Implementation of kube2iam on EKS

Here I am assuming that you have a working EKS cluster with worker nodes joined.

Start with the EKS worker node configuration. Ensure that the role associated with the worker instances has a policy with the following permissions:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowToAssumeRoles",
      "Effect": "Allow",
      "Action": "sts:AssumeRole",
      "Resource": "*"
    }
  ]
}
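
If you manage the worker role yourself, one way to attach this as an inline policy is with the AWS CLI; the role name and file name below are placeholders for your own values.

# Save the JSON above as assume-roles-policy.json, then attach it
# to the worker node instance role (role name is a placeholder):
aws iam put-role-policy \
  --role-name my-eks-worker-role \
  --policy-name AllowToAssumeRoles \
  --policy-document file://assume-roles-policy.json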

The next part of the configuration is handled by Kubernetes, so you can simply apply the following YAML definition with kubectl:

---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: kube2iam
rules:
  - apiGroups:
      - ""
    resources:
      - namespaces
      - pods
    verbs:
      - get
      - watch
      - list
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kube2iam
subjects:
  - kind: ServiceAccount
    name: kube2iam
    namespace: default
roleRef:
  kind: ClusterRole
  name: kube2iam
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: kube2iam
  namespace: default
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube2iam
  namespace: default
  labels:
    app: kube2iam
spec:
  selector:
    matchLabels:
      name: kube2iam
  updateStrategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        name: kube2iam
    spec:
      serviceAccountName: kube2iam
      hostNetwork: true
      containers:
        - image: jtblin/kube2iam:0.10.7
          imagePullPolicy: Always
          name: kube2iam
          args:
            - "--auto-discover-base-arn"
            - "--auto-discover-default-role=true"
            - "--iptables=true"
            - "--host-ip=$(HOST_IP)"
            - "--node=$(NODE_NAME)"
            - "--host-interface=eni+"
          env:
            - name: HOST_IP
              valueFrom:
                fieldRef:
                  fieldPath: status.podIP
            - name: NODE_NAME
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
          ports:
            - containerPort: 8181
              hostPort: 8181
              name: http
          securityContext:
            privileged: true
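
Assuming you saved the manifest above as kube2iam.yaml (the file name is up to you), apply it and confirm that one kube2iam pod is running per worker node:

# Deploy kube2iam and verify the DaemonSet:
kubectl apply -f kube2iam.yaml
# Expect one pod per worker node in the default namespace:
kubectl get pods -n default -l name=kube2iam -o wide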

The above configuration defines the essential kube2iam resources: ServiceAccount, ClusterRole, ClusterRoleBinding and DaemonSet. All of the resources run in the default namespace, mainly because I don't consider kube2iam a system component that belongs in the kube-system namespace. Additionally, when Cluster Autoscaler configuration comes into play, it is definitely better to run it outside the kube-system namespace.

Configuration details

The kube2iam container is defined as a DaemonSet, so one pod will run on each worker node. We set hostNetwork: true, which means that the applications running inside the pod can directly see the network interfaces of the host machine where the pod was started.

The kube2iam daemon and its iptables rule need to be in place before any other pods that require access to AWS resources. Therefore, if any application pod that needs to assume an IAM role was started before kube2iam, make sure to restart it, as shown below.
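
For example, a deployment that started before kube2iam can be bounced like this (kubectl rollout restart requires kubectl v1.15 or newer; the namespace and deployment names are placeholders):

# Recreate the pods so their AWS SDK calls go through kube2iam:
kubectl -n my-namespace rollout restart deployment/my-app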

As mentioned above, my EKS cluster runs Calico and aws-vpc-cni, so I set the interface with the flag --host-interface=eni+. It is important to set this to eni+ here, as other Kubernetes networking solutions use different interface prefixes, such as cali+, cni0, tun0 or weave.

The last pieces of the kube2iam configuration are dedicated to role/ARN discovery. I use the --auto-discover-base-arn flag, which tells the kube2iam pod to auto-discover the base ARN via the EC2 metadata service. Secondly, I set the --auto-discover-default-role flag so that kube2iam also auto-discovers the IAM role attached to the instance and uses it as the fallback role when the annotation is not set.
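
The base ARN is simply the prefix of your role ARNs, e.g. arn:aws:iam::123456789012:role/, and discovering it lets you annotate pods with a bare role name instead of a full ARN. If you want to see what auto-discovery resolves, you can query the metadata service from a worker node yourself (assuming IMDSv1 is available):

# Prints the name of the IAM role attached to the instance, from
# which kube2iam derives the base ARN and the default role:
curl -s http://169.254.169.254/latest/meta-data/iam/security-credentials/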

kube2iam usage

Now that you have a fully working kube2iam deployment, you can use it to allow your pods to assume IAM roles. To make a role assumable, it must have a trust relationship that allows it to be assumed by the Kubernetes worker role. In the following policy example, make sure to replace the role ARN with the ARN of the role associated with your worker instances:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::123456789012:role/kubernetes-worker-role"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
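
A role with this trust relationship can be created with the AWS CLI; the role name, file name and attached policy below are placeholders for whatever your pod actually needs.

# Save the trust relationship above as trust.json, then create the role:
aws iam create-role \
  --role-name my-app-role \
  --assume-role-policy-document file://trust.json

# Grant the role the permissions your pod needs, e.g. read-only S3:
aws iam attach-role-policy \
  --role-name my-app-role \
  --policy-arn arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess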

Lastly, to configure your pod, deployment, daemonset or cronjob to assume a role, you use an annotation. Simply add an iam.amazonaws.com/role annotation to your pods with the role that you want them to assume. Some useful examples:

# Pod definition
apiVersion: v1
kind: Pod
metadata:
  name: pod-name
  annotations:
    iam.amazonaws.com/role: role-arn
spec:
  containers:
    - image: ...
# Deployment definition
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deployment-name
spec:
  replicas: 10
  template:
    metadata:
      annotations:
        iam.amazonaws.com/role: role-arn
# CronJob definition
apiVersion: batch/v1
kind: CronJob
metadata:
  name: cronjob-name
spec:
  schedule: "00 * * * *"
  jobTemplate:
    spec:
      template:
        metadata:
          annotations:
            iam.amazonaws.com/role: role-arn
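
To verify that a pod has picked up its role, query the metadata API from inside the pod; kube2iam should answer with the annotated role rather than the worker role. This assumes the container image ships curl, and the pod name is a placeholder.

# Should print the annotated role name, proving the interception works:
kubectl exec -it my-pod -- \
  curl -s http://169.254.169.254/latest/meta-data/iam/security-credentials/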

Conclusion

I have shared a working implementation of kube2iam on EKS; hopefully it will help anyone who reads this post. At this point kube2iam is the only feasible solution that allows your pods to assume an IAM role. In general it is a great tool and I am a big fan of the project. More about kube2iam can be found in the project repository: https://github.com/jtblin/kube2iam

As of writing this post, AWS is working on another solution, currently called EKS IAM Roles for Pods. You can read more about it on the GitHub issue.

In collaboration with New Relic, a technical blog post on EKS and my work can be found here: https://blog.newrelic.com/product-news/news-uk-content-capabilities-amazon-eks-new-relic/
