Installing kube2iam in AWS Kubernetes EKS Cluster


This is a follow-up to Installing kube2iam in AWS Kubernetes Kops Cluster.

kube2iam allows a Kubernetes cluster in AWS to use different IAM roles for each pod, and prevents pods from accessing EC2 instance IAM roles.

Installation

Edit the node IAM role (e.g. EKS-attractive-party-000-D-NodeInstanceRole-XXX) to allow nodes to assume different roles, changing the account ID 123456789012 to yours or using "Resource": "*"
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "sts:AssumeRole"
            ],
            "Resource": [
                "arn:aws:iam::123456789012:role/k8s-*"
            ]
        }
    ]
}
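
If you prefer the command line, something along these lines should attach the statement as an inline policy; it assumes the document above is saved as assume-k8s-roles.json and that the role and policy names (placeholders here) match your cluster.

# Attach the assume-role statement above as an inline policy on the node instance role
aws iam put-role-policy \
  --role-name EKS-attractive-party-000-D-NodeInstanceRole-XXX \
  --policy-name assume-k8s-roles \
  --policy-document file://assume-k8s-roles.json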

Install kube2iam using the helm chart

helm install stable/kube2iam --name my-release \
  --namespace kube-system \
  --set rbac.create=true \
  --set extraArgs.auto-discover-base-arn= \
  --set extraArgs.auto-discover-default-role=true \
  --set host.iptables=true \
  --set host.interface=eni+

Note the eni+ host interface name: it matches the pod interfaces created by the AWS VPC CNI that EKS uses.
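
To confirm the DaemonSet came up on every node, a quick check like the following should work (the app=kube2iam label selector is an assumption based on the chart's defaults):

kubectl get pods -n kube-system -l app=kube2iam -o wide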

A curl to the metadata server from a new pod should return kube2iam

$ curl http://169.254.169.254/latest/meta-data/iam/security-credentials/
kube2iam
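
If you don't have a pod at hand, a throwaway pod works too; curlimages/curl is just one image that ships curl:

kubectl run curl-test --rm -it --restart=Never --image=curlimages/curl -- \
  curl -s http://169.254.169.254/latest/meta-data/iam/security-credentials/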

Role configuration

Create the roles that the pods can assume. They must start with k8s- (see the wildcard we set in the Resource above) and contain a trust relationship to the node pool role.

For instance, to allow access to the S3 bucket mybucket from a pod, create a role k8s-s3.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "s3bucketActions",
            "Effect": "Allow",
            "Action": "s3:*",
            "Resource": "arn:aws:s3:::mybucket",
       }
    ]
}

Then edit the trust relationship of the role to allow the node role (the role used by your nodes' Auto Scaling Group) to assume this role.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "",
      "Effect": "Allow",
      "Principal": {
        "Service": "ec2.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    },
    {
      "Sid": "",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::123456789012:role/nodes.example.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
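
The same role can also be created from the CLI; a rough sketch, assuming the permission policy and trust relationship above are saved as k8s-s3-policy.json and k8s-s3-trust.json:

# Create the role with the trust relationship, then attach the S3 permissions inline
aws iam create-role --role-name k8s-s3 \
  --assume-role-policy-document file://k8s-s3-trust.json
aws iam put-role-policy --role-name k8s-s3 \
  --policy-name s3bucketActions \
  --policy-document file://k8s-s3-policy.json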

Test it by launching a pod with the right annotations

apiVersion: v1
kind: Pod
metadata:
  name: aws-cli
  labels:
    name: aws-cli
  annotations:
    iam.amazonaws.com/role: k8s-s3
spec:
  containers:
  - image: fstab/aws-cli
    command:
      - "/home/aws/aws/env/bin/aws"
      - "s3"
      - "ls"
      - "some-bucket"
    name: aws-cli
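
Save the manifest (aws-cli-pod.yaml is an arbitrary name), apply it, and check the output; if kube2iam hands the pod the k8s-s3 credentials, the bucket listing shows up in the logs:

kubectl apply -f aws-cli-pod.yaml
kubectl logs aws-cli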

Securing namespaces

kube2iam supports namespace restrictions so users can still launch pods but are limited to a predefined set of IAM roles that they can assume.

apiVersion: v1
kind: Namespace
metadata:
  annotations:
    iam.amazonaws.com/allowed-roles: |
      ["my-custom-path/*"]
  name: default
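
The same annotation can be added to an existing namespace with kubectl, for example as below; note that kube2iam only enforces this when its namespace restrictions option (--namespace-restrictions) is enabled:

kubectl annotate namespace default \
  iam.amazonaws.com/allowed-roles='["my-custom-path/*"]'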

Sending Kubernetes Logs to CloudWatch Logs using Fluentd

fluentd can send all the Kubernetes or EKS logs to CloudWatch Logs to provide a centralized and unified view of all the logs from the cluster, both from the nodes and from each container's stdout.

Installation

To send all node and container logs to CloudWatch, first create a CloudWatch log group named kubernetes.

aws logs create-log-group --log-group-name kubernetes

Then install the fluentd-cloudwatch Helm chart. This will send the logs from nodes, containers, etcd, … to CloudWatch as defined in the default fluentd chart config.

helm install --name fluentd incubator/fluentd-cloudwatch \
  --set awsRegion=us-east-1,rbac.create=true

Each node needs permissions to write to CloudWatch Logs, so either add the permission using IAM instance profiles or pass the awsRole value if you are using kube2iam.

helm install --name fluentd incubator/fluentd-cloudwatch \
  --set awsRole=arn:aws:iam::123456789012:role/k8s-logs,awsRegion=us-east-1,rbac.create=true,extraVars[0]="{ name: FLUENT_UID, value: '0' }"

The k8s-logs role policy is configured as follows

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "logs",
            "Effect": "Allow",
            "Action": [
                "logs:CreateLogStream",
                "logs:PutLogEvents",
                "logs:DescribeLogGroups",
                "logs:DescribeLogStreams"
            ],
            "Resource": [
                "arn:aws:logs:*:*:*"
            ]
        }
    ]
}

Now you can go to CloudWatch and find your logs.
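
You can also check from the CLI that log streams are being created in the group created earlier:

aws logs describe-log-streams --log-group-name kubernetes --max-items 5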

Installing kube2iam in AWS Kubernetes Kops Cluster


Update: See the follow-up Installing kube2iam in AWS Kubernetes EKS Cluster.

kube2iam allows a Kubernetes cluster in AWS to use different IAM roles for each pod, and prevents pods from accessing EC2 instance IAM roles.

Installation

Edit your kops cluster with kops edit cluster to allow nodes to assume different roles, changing the account ID 123456789012 to yours.

spec:
  additionalPolicies:
    nodes: |
      [
        {
          "Effect": "Allow",
          "Action": [
            "sts:AssumeRole"
          ],
          "Resource": [
            "arn:aws:iam::123456789012:role/k8s-*"
          ]
        }
      ]
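
After saving, push the change out so the node instance profile actually picks up the new policy; with kops that is roughly the following (${CLUSTER_NAME} is a placeholder for your cluster name):

kops update cluster ${CLUSTER_NAME} --yes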

Install kube2iam using the helm chart

helm install stable/kube2iam --namespace kube-system --name my-release \
  --set extraArgs.base-role-arn=arn:aws:iam::123456789012:role/ \
  --set extraArgs.default-role=kube2iam-default \
  --set host.iptables=true \
  --set host.interface=cbr0 \
  --set rbac.create=true

A curl to the metadata server from a new pod should return kube2iam

$ curl http://169.254.169.254/latest/meta-data/iam/security-credentials/
kube2iam

Role configuration

Create the roles that the pods can assume. They must start with k8s- (see the wildcard we set in the Resource above) and contain a trust relationship to the node pool role.

For instance, to allow access to the S3 bucket mybucket from a pod, create a role k8s-s3.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "s3bucketActions",
            "Effect": "Allow",
            "Action": "s3:*",
            "Resource": "arn:aws:s3:::mybucket",
       }
    ]
}

Then edit the trust relationship of the role to allow the node role (the role created by Kops for the Auto Scaling Group) to assume this role.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "",
      "Effect": "Allow",
      "Principal": {
        "Service": "ec2.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    },
    {
      "Sid": "",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::123456789012:role/nodes.example.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}

Test it by launching a pod with the right annotations

apiVersion: v1
kind: Pod
metadata:
  name: aws-cli
  labels:
    name: aws-cli
  annotations:
    iam.amazonaws.com/role: arn:aws:iam::123456789012:role/k8s-s3
spec:
  containers:
  - image: fstab/aws-cli
    command:
      - "/home/aws/aws/env/bin/aws"
      - "s3"
      - "ls"
      - "some-bucket"
    name: aws-cli

Securing namespaces

kube2iam supports namespace restrictions so users can still launch pods but are limited to a predefined set of IAM roles that they can assume.

apiVersion: v1
kind: Namespace
metadata:
  annotations:
    iam.amazonaws.com/allowed-roles: |
      ["my-custom-path/*"]
  name: default