Installing kube2iam in AWS Kubernetes Kops Cluster

kube2iam allows a Kubernetes cluster in AWS to use a different IAM role for each pod, and prevents pods from directly accessing the EC2 instance's IAM role through the metadata API.

Installation

Edit your kops cluster with kops edit cluster to allow the nodes to assume different roles, changing the account id 123456789012 to yours:

spec:
  additionalPolicies:
    nodes: |
      [
        {
          "Effect": "Allow",
          "Action": [
            "sts:AssumeRole"
          ],
          "Resource": [
            "arn:aws:iam::123456789012:role/k8s-*"
          ]
        }
      ]
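
After saving, apply the change so kops updates the node role policy. A minimal sketch, assuming your cluster is named example.com; since the policy is attached to the existing node role, the nodes should pick it up without a rolling update:

kops update cluster example.com --yes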

Install kube2iam using the Helm chart:

helm install stable/kube2iam --name my-release \
  --set=extraArgs.base-role-arn=arn:aws:iam::123456789012:role/,extraArgs.default-role=kube2iam-default,host.iptables=true,host.interface=cbr0
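
Before testing, check that the kube2iam DaemonSet is running a pod on every node. The resource and label names below follow the chart's defaults and may differ in your release:

kubectl get daemonset my-release-kube2iam
kubectl get pods -l app=kube2iam -o wide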

A curl to the metadata server from a new pod should now return kube2iam:

$ curl http://169.254.169.254/latest/meta-data/iam/security-credentials/
kube2iam
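
This works because host.iptables=true has kube2iam install a NAT rule on each node that redirects pod traffic for the metadata address to the kube2iam agent (which listens on port 8181 by default). The rule is roughly equivalent to the following, where <node-ip> stands for the node's private IP:

iptables \
  --table nat \
  --append PREROUTING \
  --protocol tcp \
  --destination 169.254.169.254 \
  --dport 80 \
  --in-interface cbr0 \
  --jump DNAT \
  --to-destination <node-ip>:8181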

Role configuration

Create the roles that the pods can assume. Their names must start with k8s- (matching the wildcard we set in the Resource above), and they must have a trust relationship that lets the node pool role assume them.

For instance, to allow access to the S3 bucket mybucket from a pod, create a role k8s-s3. Note that object-level actions such as s3:GetObject apply to the object ARN arn:aws:s3:::mybucket/*, so both resources are listed:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "s3bucketActions",
            "Effect": "Allow",
            "Action": "s3:*",
            "Resource": [
                "arn:aws:s3:::mybucket",
                "arn:aws:s3:::mybucket/*"
            ]
        }
    ]
}

Then edit the trust relationship of the role to allow the node role (the role created by kops for the Auto Scaling Group) to assume this role.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "",
      "Effect": "Allow",
      "Principal": {
        "Service": "ec2.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    },
    {
      "Sid": "",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::123456789012:role/nodes.example.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
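
If you prefer to script this, here is a minimal sketch using the AWS CLI, assuming the permissions policy and the trust policy above are saved as s3-policy.json and trust-policy.json:

# Create the role with the trust relationship that lets the nodes assume it
aws iam create-role --role-name k8s-s3 \
  --assume-role-policy-document file://trust-policy.json

# Attach the S3 permissions as an inline policy
aws iam put-role-policy --role-name k8s-s3 \
  --policy-name s3bucketActions \
  --policy-document file://s3-policy.json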

Test it by launching a pod with the right annotation. Since we set base-role-arn when installing, the annotation value can also be just the role name k8s-s3 rather than the full ARN:

apiVersion: v1
kind: Pod
metadata:
  name: aws-cli
  labels:
    name: aws-cli
  annotations:
    iam.amazonaws.com/role: arn:aws:iam::123456789012:role/k8s-s3
spec:
  containers:
  - image: fstab/aws-cli
    command:
      - "/home/aws/aws/env/bin/aws"
      - "s3"
      - "ls"
      - "some-bucket"
    name: aws-cli
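
Save the manifest (here as aws-cli-pod.yaml), create the pod, and check its logs; a successful run lists the contents of the bucket:

kubectl apply -f aws-cli-pod.yaml
kubectl logs aws-cli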

Securing namespaces

kube2iam supports namespace restrictions, so users can still launch pods but are limited to a predefined set of IAM roles that they are allowed to assume. The iam.amazonaws.com/allowed-roles annotation takes a JSON array of allowed role patterns:

apiVersion: v1
kind: Namespace
metadata:
  annotations:
    iam.amazonaws.com/allowed-roles: |
      ["my-custom-path/*"]
  name: default
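
Note that kube2iam only enforces this annotation when namespace restrictions are enabled on the daemon via its --namespace-restrictions flag. With the Helm chart installed as above, this should amount to something like:

helm upgrade my-release stable/kube2iam --reuse-values \
  --set=extraArgs.namespace-restrictions=true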

A similar setup should also work with AWS EKS.