Building Docker Images without Docker

Kaniko is a project launched by Google that allows building container images from a Dockerfile without Docker or the Docker daemon.

Kaniko can be used inside Kubernetes to build a Docker image and push it to a registry, supporting Docker Registry, Google Container Registry and AWS ECR, as well as any other registry supported by Docker credential helpers.

This solution is still not safe, as containers run as root, but it is way better than mounting the Docker socket and launching containers on the host. For one thing, there are no leaked resources or containers running outside the scheduler.

To launch Kaniko from Jenkins in Kubernetes you just need an agent template that uses the debug Kaniko image (just to have cat and nohup) and a Kubernetes secret with the image registry credentials, as shown in the example pipeline below.

UPDATED: some changes are needed for the latest Kaniko.
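
The registry credentials secret (named regcred in the pod template below) can be created with kubectl. A minimal sketch, where the registry address and credentials are placeholders for your own:

kubectl create secret docker-registry regcred \
  --docker-server=mydockerregistry:5000 \
  --docker-username=myuser \
  --docker-password=mypassword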

/**
 * This pipeline will build and deploy a Docker image with Kaniko
 * https://github.com/GoogleContainerTools/kaniko
 * without needing a Docker host
 *
 * You need to create a jenkins-docker-cfg secret with your docker config
 * as described in
 * https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/#create-a-secret-in-the-cluster-that-holds-your-authorization-token
 */

 def label = "kaniko-${UUID.randomUUID().toString()}"

 podTemplate(name: 'kaniko', label: label, yaml: """
kind: Pod
metadata:
  name: kaniko
spec:
  containers:
  - name: kaniko
    image: gcr.io/kaniko-project/executor:debug
    imagePullPolicy: Always
    command:
    - /busybox/cat
    tty: true
    volumeMounts:
      - name: jenkins-docker-cfg
        mountPath: /root
  volumes:
  - name: jenkins-docker-cfg
    projected:
      sources:
      - secret:
          name: regcred
          items:
            - key: .dockerconfigjson
              path: .docker/config.json
"""
  ) {

   node(label) {
     stage('Build with Kaniko') {
       git 'https://github.com/jenkinsci/docker-jnlp-slave.git'
       container(name: 'kaniko', shell: '/busybox/sh') {
           sh '''#!/busybox/sh
           /kaniko/executor -f `pwd`/Dockerfile -c `pwd` --insecure-skip-tls-verify --destination=mydockerregistry:5000/myorg/myimage
           '''
       }
     }
   }
 }
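
To check that the image was actually pushed you can query the registry API from any machine that can reach it. A quick sketch, assuming the insecure registry from the pipeline above is reachable over plain HTTP:

curl http://mydockerregistry:5000/v2/myorg/myimage/tags/list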

Pros:

  • No need to mount docker socket or have docker binary
  • No stray containers running outside of the scheduler

Cons:

  • Still not secure
  • Does not support the full Dockerfile syntax yet

Skaffold also has support for Kaniko, and can be used in your Jenkins X pipelines, which use Skaffold to abstract the image building.

Installing kube2iam in AWS Kubernetes Kops Cluster

kube2iam allows a Kubernetes cluster in AWS to use different IAM roles for each pod, and prevents pods from accessing the EC2 instance IAM role.

Installation

Edit your kops cluster with kops edit cluster to allow nodes to assume different roles, changing the account id 123456789012 to yours:

spec:
  additionalPolicies:
    nodes: |
      [
        {
          "Effect": "Allow",
          "Action": [
            "sts:AssumeRole"
          ],
          "Resource": [
            "arn:aws:iam::123456789012:role/k8s-*"
          ]
        }
      ]
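
Then apply the change with kops. A sketch, assuming the cluster name is in $NAME and the kops state store is already configured; a rolling update may or may not be needed depending on your kops version:

kops update cluster $NAME --yes
kops rolling-update cluster $NAME --yes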

Install kube2iam using the helm chart

helm install stable/kube2iam --name my-release \
  --set=extraArgs.base-role-arn=arn:aws:iam::123456789012:role/,extraArgs.default-role=kube2iam-default,host.iptables=true,host.interface=cbr0

A curl to the metadata server from a new pod should return kube2iam

$ curl http://169.254.169.254/latest/meta-data/iam/security-credentials/
kube2iam

Role configuration

Create the roles that the pods can assume. They must start with k8s- (see the wildcard we set in the Resource above) and have a trust relationship with the node pool role.

For instance, to allow access to the S3 bucket mybucket from a pod, create a role k8s-s3 with this policy:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "s3bucketActions",
            "Effect": "Allow",
            "Action": "s3:*",
            "Resource": "arn:aws:s3:::mybucket"
        }
    ]
}

Then edit the trust relationship of the role to allow the node role (the role created by Kops for the Auto Scaling Group) to assume this role.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "",
      "Effect": "Allow",
      "Principal": {
        "Service": "ec2.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    },
    {
      "Sid": "",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::123456789012:role/nodes.example.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
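
As a sketch, the role can also be created from the command line with the AWS CLI, assuming the two JSON documents above are saved as k8s-s3-policy.json and k8s-s3-trust.json (example file names):

aws iam create-role --role-name k8s-s3 \
  --assume-role-policy-document file://k8s-s3-trust.json
aws iam put-role-policy --role-name k8s-s3 \
  --policy-name s3bucketActions \
  --policy-document file://k8s-s3-policy.json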

Test it by launching a pod with the right annotations

apiVersion: v1
kind: Pod
metadata:
  name: aws-cli
  labels:
    name: aws-cli
  annotations:
    iam.amazonaws.com/role: arn:aws:iam::123456789012:role/k8s-s3
spec:
  containers:
  - image: fstab/aws-cli
    command:
      - "/home/aws/aws/env/bin/aws"
      - "s3"
      - "ls"
      - "some-bucket"
    name: aws-cli
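
Save the manifest to a file and launch it with kubectl; if the pod assumes the role correctly, the logs show the bucket listing instead of an access denied error (the file name here is just an example):

kubectl apply -f aws-cli-pod.yaml
kubectl logs aws-cli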

Securing namespaces

kube2iam supports namespace restrictions so users can still launch pods but are limited to a predefined set of IAM roles that they can assume.

apiVersion: v1
kind: Namespace
metadata:
  annotations:
    iam.amazonaws.com/allowed-roles: |
      ["my-custom-path/*"]
  name: default
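
Note that namespace restrictions have to be enabled in the kube2iam deployment itself. A sketch using the chart's extraArgs mechanism shown above, assuming the namespace-restrictions flag is passed through unchanged:

helm upgrade my-release stable/kube2iam \
  --set=extraArgs.namespace-restrictions=true --reuse-values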

A similar setup should also work with AWS EKS.

Kubernetes Plugin for Jenkins 1.7.1 Security Release

A minor security issue has been found and fixed in 1.7.1.

  • Do not print credentials in build output or logs. Only affects certain pipeline steps like withDockerRegistry; the sh step is not affected

Another interesting new feature is the support for multiple containers in declarative pipeline #306 JENKINS-48135:

pipeline {
  agent {
    kubernetes {
      label 'mypod'
      defaultContainer 'jnlp'
      yaml """
apiVersion: v1
kind: Pod
metadata:
  labels:
    some-label: some-label-value
spec:
  containers:
  - name: maven
    image: maven:alpine
    command:
    - cat
    tty: true
  - name: busybox
    image: busybox
    command:
    - cat
    tty: true
"""
    }
  }
  stages {
    stage('Run maven') {
      steps {
        container('maven') {
          sh 'mvn -version'
        }
        container('busybox') {
          sh '/bin/busybox'
        }
      }
    }
  }
}

1.7.1

  • Do not print credentials in build output or logs. Only affects certain pipeline steps like withDockerRegistry; the sh step is not affected SECURITY-883

1.7.0

  • Add option to apply caps only on alive pods #252
  • Add idleMinutes to pod template in declarative pipeline #336 JENKINS-51569

1.6.4

  • Use Jackson and Apache HttpComponents Client libraries from API plugins #333 JENKINS-51582

1.6.3

1.6.2

  • Transfer any master proxy related envs that the remoting jar uses to the pod templates with the addMasterProxyEnvVars option #321

1.6.1

  • Some fields are not inherited from parent template (InheritFrom, InstanceCap, SlaveConnectTimeout, IdleMinutes, ActiveDeadlineSeconds, ServiceAccount, CustomWorkspaceVolumeEnabled) #319

1.6.0

  • Support multiple containers in declarative pipeline #306 JENKINS-48135
  • Expose pod configuration via yaml to UI and merge tolerations when inheriting #311
  • Resolve NPE merging yaml when resource requests/limits are not set #310
  • Do not pass arguments to jnlp container #315 JENKINS-50913

1.5.2

1.5.1

You can find the full changelog on GitHub.

Kubernetes Plugin for Jenkins 1.5

15 releases have gone by in the 7 months since 1.0 last September.

There are some interesting new features since 1.0, plus a lot of bugfixes and overall stability improvements. For instance, you can now use yaml to define the Pod that will be used for your job:

def label = "mypod-${UUID.randomUUID().toString()}"
podTemplate(label: label, yaml: """
apiVersion: v1
kind: Pod
metadata:
  labels:
    some-label: some-label-value
spec:
  containers:
  - name: busybox
    image: busybox
    command:
    - cat
    tty: true
"""
) {
    node (label) {
      container('busybox') {
        sh "hostname"
      }
    }
}


You can use the readFile step to load the yaml from a file in your git repo.

  • Allow creating Pod templates from yaml. This allows setting all possible fields of the Kubernetes API using yaml JENKINS-50282 #275
  • Support passing kubeconfig file as credentials using secretFile credentials JENKINS-49817 #294

You can find the full changelog on GitHub.