
This is a follow-up to Installing kube2iam in AWS Kubernetes Kops Cluster.
kube2iam allows a Kubernetes cluster in AWS to use a different IAM role for each pod, and prevents pods from accessing the EC2 instance's own IAM role.
Installation
Edit the node IAM role (e.g. EKS-attractive-party-000-D-NodeInstanceRole-XXX) to allow nodes to assume different roles, changing the account id 123456789012 to yours or using "Resource": "*":
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "sts:AssumeRole"
      ],
      "Resource": [
        "arn:aws:iam::123456789012:role/k8s-*"
      ]
    }
  ]
}
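If you prefer the CLI to the console, the policy above can be attached as an inline policy; the role and policy names below are examples, so substitute your own:
# Save the policy above as assume-k8s-roles.json first
aws iam put-role-policy \
  --role-name EKS-attractive-party-000-D-NodeInstanceRole-XXX \
  --policy-name assume-k8s-roles \
  --policy-document file://assume-k8s-roles.json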
Install kube2iam using the Helm chart:
helm install stable/kube2iam --name my-release \
  --namespace kube-system \
  --set=rbac.create=true,\
extraArgs.auto-discover-base-arn=,\
extraArgs.auto-discover-default-role=true,\
host.iptables=true,\
host.interface=eni+
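If the release deployed correctly, a kube2iam agent pod should be running on every node (the selector below assumes the chart's default app label):
kubectl get pods --namespace kube-system -l app=kube2iam -o wide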
Note the eni+ host interface name: it matches the network interfaces created by the Amazon VPC CNI (other CNIs use different prefixes, e.g. cali+ for Calico).
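With host.iptables=true, kube2iam installs a rule on each node that redirects the pods' metadata traffic to its agent. As a rough sketch of what gets set up (per the kube2iam README; $NODE_IP stands for the node's private IP, and 8181 is the chart's default agent port):
iptables \
  --table nat \
  --append PREROUTING \
  --protocol tcp \
  --destination 169.254.169.254 \
  --dport 80 \
  --in-interface eni+ \
  --jump DNAT \
  --to-destination $NODE_IP:8181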
A curl to the metadata server from a new pod should now return kube2iam:
$ curl http://169.254.169.254/latest/meta-data/iam/security-credentials/
kube2iam
Role configuration
Create the roles that the pods can assume. Their names must start with k8s- (see the wildcard we set in the Resource above), and they must contain a trust relationship to the node pool role.
For instance, to allow access to the S3 bucket mybucket from a pod, create a role k8s-s3.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "s3bucketActions",
      "Effect": "Allow",
      "Action": "s3:*",
      "Resource": "arn:aws:s3:::mybucket"
    }
  ]
}
Note that object-level actions such as s3:GetObject also require the arn:aws:s3:::mybucket/* resource; listing the bucket, as we do below, only needs the bucket ARN.
Then edit the trust relationship of the role to allow the node role (the role used by your nodes' Auto Scaling Group) to assume this role:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "",
      "Effect": "Allow",
      "Principal": {
        "Service": "ec2.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    },
    {
      "Sid": "",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::123456789012:role/nodes.example.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
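Assuming the permission policy is saved as k8s-s3-policy.json and the trust relationship as k8s-s3-trust.json (file names are placeholders), the role can also be created from the CLI:
aws iam create-role \
  --role-name k8s-s3 \
  --assume-role-policy-document file://k8s-s3-trust.json
aws iam put-role-policy \
  --role-name k8s-s3 \
  --policy-name s3bucketActions \
  --policy-document file://k8s-s3-policy.json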
Test it by launching a pod with the right annotation:
apiVersion: v1
kind: Pod
metadata:
  name: aws-cli
  labels:
    name: aws-cli
  annotations:
    iam.amazonaws.com/role: k8s-s3
spec:
  containers:
  - image: fstab/aws-cli
    command:
    - "/home/aws/aws/env/bin/aws"
    - "s3"
    - "ls"
    - "mybucket"
    name: aws-cli
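Save the manifest as aws-cli-pod.yaml (any file name works), apply it, and check the logs; if kube2iam handed out the k8s-s3 credentials, the logs should show the bucket listing instead of an access denied error:
kubectl apply -f aws-cli-pod.yaml
kubectl logs aws-cli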
Securing namespaces
kube2iam supports namespace restrictions: users can still launch pods, but they are limited to a predefined set of IAM roles that they can assume per namespace. This is only enforced when kube2iam runs with the --namespace-restrictions flag; see the upgrade command after the example.
apiVersion: v1
kind: Namespace
metadata:
  annotations:
    iam.amazonaws.com/allowed-roles: |
      ["my-custom-path/*"]
  name: default
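With the Helm chart, the flag can be turned on with an upgrade (a sketch, assuming the chart passes extraArgs keys straight through as kube2iam flags):
helm upgrade my-release stable/kube2iam \
  --namespace kube-system \
  --reuse-values \
  --set=extraArgs.namespace-restrictions=true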