Jenkins Kubernetes Plugin 0.9 Released

New features released in 0.9 include pipeline support and multiple containers per pod.

So now it is possible to define all the containers used by a job in your Jenkinsfile, for instance building a Maven project and a Go project on the same node, without having to create any specific Docker image \o/

podTemplate(label: 'mypod', containers: [
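    // ttyEnabled: true together with command: 'cat' keeps each container running, waiting for the pipeline to execute steps in it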
    containerTemplate(name: 'maven', image: 'maven:3.3.9-jdk-8-alpine', ttyEnabled: true, command: 'cat'),
    containerTemplate(name: 'golang', image: 'golang:1.6.3-alpine', ttyEnabled: true, command: 'cat')
  ],
  volumes: [secretVolume(secretName: 'shared-secrets', mountPath: '/etc/shared-secrets')]) {

  node ('mypod') {
    stage 'Get a Maven project'
    git 'https://github.com/jenkinsci/kubernetes-plugin.git'
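    // steps wrapped in container(...) execute inside that container of the pod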
    container('maven') {
      stage 'Build a Maven project'
      sh 'mvn clean install'
    }

    stage 'Get a Golang project'
    git url: 'https://github.com/hashicorp/terraform.git'
    container('golang') {
      stage 'Build a Go project'
      sh """
      mkdir -p /go/src/github.com/hashicorp
      ln -s `pwd` /go/src/github.com/hashicorp/terraform
      cd /go/src/github.com/hashicorp/terraform && make core-dev
      """
    }

  }
}
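All the ‘sh’ steps run in the jnlp slave container by default; the new container step is what switches execution into the maven or golang containers of the pod.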

Changelog:

  • Make it possible to define more than one container inside a pod.
  • Add a new podTemplate step which allows defining / overriding a pod template from a pipeline script.
  • Introduce a pipeline step that allows choosing one of the containers of the pod and having all ‘sh’ steps executed there.
  • Allow setting dynamic pod volumes in pipelines.
  • Add support for persistent volume claims (see the sketch after this list).
  • Add support for containerEnvVar in pipelines.
  • [JENKINS-37087] Handle multiple labels per pod correctly.
  • [JENKINS-37087] Iterate over all matching templates.
  • Fix slave description and labels.
  • [JENKINS-38829] Add help text for the Kubernetes server certificate.
  • #59: Allow a blank namespace and reuse whatever is discovered by the client.
  • Ensure instanceCap defaults to unlimited.
  • Add the Jenkins computer name to the container env vars.
  • Split arguments taking quotes into account.
  • Allow the user to enable pseudo-TTY at the container level.
  • Use provided arguments without forcing jnlpmac and name into them; provide placeholders for jnlpmac and name for the user to use. The fallback container uses jnlpmac and name as default arguments.
  • Split volume classes into their own package (#77).
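
As an illustration of the new volume and environment variable options, here is a minimal sketch (the claim name my-claim, the mount path and the values are invented for the example, and the persistent volume claim is assumed to already exist in the cluster):

podTemplate(label: 'mypod', containers: [
    containerTemplate(name: 'maven', image: 'maven:3.3.9-jdk-8-alpine', ttyEnabled: true, command: 'cat',
      // environment variables scoped to this container (names and values invented)
      envVars: [containerEnvVar(key: 'MAVEN_OPTS', value: '-Xmx512m')])
  ],
  // mount an existing PersistentVolumeClaim into the pod containers
  volumes: [persistentVolumeClaim(claimName: 'my-claim', mountPath: '/data')]) {

  node('mypod') {
    container('maven') {
      sh 'echo $MAVEN_OPTS && ls /data'
    }
  }
}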


25 thoughts on “Jenkins Kubernetes Plugin 0.9 Released”

    • Hmm, ok, thanks for the info. All my npm commands seem to be missing, while with the 0.8 plugin they worked properly. Do I need a separate Jenkins slave for the JNLP slave and the worker (which executes the defined build steps)?

  1. Hi Carlos. I just updated the plugin from 0.8 to 0.9 to make use of persistent volume claims. When I choose that option from the dropdown, it is added, but no textboxes appear to enter anything. All the other dropdown options have textboxes to specify the volume information. Am I possibly missing something that would allow that to appear?

  2. I created a secret in Kubernetes via the following command:

    > kubectl create secret generic ssh-key --from-file=ssh-privatekey=/some/user/.ssh/id_rsa

    Below is my podTemplate. If I specify a directory that doesn’t already exist, I get an error saying “Directory doesn’t exists.” If I give an existing path, say /usr/local/share, there’s no error, but nothing is mounted there either. Is this a bug or an issue with my podTemplate? Thanks in advance.

    podTemplate(containers: [
        containerTemplate(
          alwaysPullImage: false,
          args: '${computer.jnlpmac} ${computer.name}',
          command: '',
          envVars: [],
          image: '/some/image',
          name: 'fq-jenkins',
          privileged: false,
          ttyEnabled: true,
          workingDir: '/home/jenkins'
        )],
      inheritFrom: '',
      label: 'fq',
      name: '',
      nodeSelector: '',
      serviceAccount: '',
      volumes: [
        hostPathVolume(
          hostPath: '/usr/bin/docker',
          mountPath: '/usr/bin/docker'
        ),
        hostPathVolume(
          hostPath: '/var/run/docker.sock',
          mountPath: '/var/run/docker.sock'
        ),
        secretVolume(
          mountPath: '/usr/local/share',
          secretName: 'ssh-key'
        )
      ]
    ) {
      node('fq') {
        stage 'Show secrets'
        sh 'ls -ll /usr/local/share/'
      }
    }

  3. Hello Carlos,

    I am attaching a ConfigMap volume to my build image. I noticed the container was stuck in the “ContainerCreating” state. Upon inspection, I saw that the name of the ConfigMap did not match what was entered in the form in Jenkins:

    kubectl describe po --namespace jenkins fq-centos-6-494503a7ec61e

    Volumes:

    volume-3:
    Type: ConfigMap (a volume populated by a ConfigMap)
    Name: volume-3 <-- wrong name

    MountVolume.SetUp failed for volume "kubernetes.io/configmap/2b024c96-c214-11e6-ab98-42010af0004b-volume-3" (spec.Name: "volume-3") pod "2b024c96-c214-11e6-ab98-42010af0004b" (UID: "2b024c96-c214-11e6-ab98-42010af0004b") with: configmaps "volume-3" not found

  4. Hi,

    Is this resolved, or is there a way around this?

    FirstSeen LastSeen Count From SubobjectPath Type Reason Message
    --------- -------- ----- ---- ------------- ---- ------ -------
    2m 2m 1 {default-scheduler } Normal Scheduled Successfully assigned kubernetes-36906361c1934228b1c68073cd1b13ac-19e6147e664 to gke-jenkins-default-pool-272e742d-c9pl
    38s 38s 1 {kubelet gke-jenkins-default-pool-272e742d-c9pl} Warning FailedMount Unable to mount volumes for pod "kubernetes-36906361c1934228b1c68073cd1b13ac-19e6147e664_default(a0fef831-cc43-11e6-bab0-42010a84004e)": timeout expired waiting for volumes to attach/mount for pod "kubernetes-36906361c1934228b1c68073cd1b13ac-19e6147e664"/"default". list of unattached/unmounted volumes=[volume-0]
    38s 38s 1 {kubelet gke-jenkins-default-pool-272e742d-c9pl} Warning FailedSync Error syncing pod, skipping: timeout expired waiting for volumes to attach/mount for pod "kubernetes-36906361c1934228b1c68073cd1b13ac-19e6147e664"/"default". list of unattached/unmounted volumes=[volume-0]
    2m 30s 9 {kubelet gke-jenkins-default-pool-272e742d-c9pl} Warning FailedMount MountVolume.SetUp failed for volume "kubernetes.io/secret/a0fef831-cc43-11e6-bab0-42010a84004e-volume-0" (spec.Name: "volume-0") pod "a0fef831-cc43-11e6-bab0-42010a84004e" (UID: "a0fef831-cc43-11e6-bab0-42010a84004e") with: secrets "shared-secrets" not found

  5. Hi Carlos,

    I am Hong.

    Thanks very much for developing the jenkins-kubernetes-plugin.
    It’s a great addition to Jenkins for Kubernetes.

    I saw that containerEnvVar is supported by Jenkins Kubernetes Plugin 0.9.
    But it seems it does not support keys containing a dot, right?

    Example:
    containerEnvVar(key: 'data.source.url', value: 'somewhere')

    I tried using a backslash to escape the dot, but the Jenkins parser doesn’t accept it.

    Do you have any idea?
    Or does anyone else?

    Thanks very much again.

    Best regards,
    Hong

  6. I have an issue I’m trying to solve with a declarative pipeline.

    I map ConfigMap data to a specific path in the volume, but it doesn’t seem to work.

    Below is my podtemp.yaml file:

    apiVersion: v1
    kind: Pod
    metadata:
      name: slave-pod
      labels:
        name: jenkins-slave-k8s
      namespace: jenkins
    spec:
      containers:
      ## kubectl
      - image: lachlanevenson/k8s-kubectl:v1.10.13
        name: kubectl
        workingDir: /home/jenkins
        volumeMounts:
        - name: docker-sock-volume
          mountPath: /var/run/docker.sock
        - name: kubectl-config-volume
          mountPath: /root
        command:
        - cat
        tty: true
      volumes:
      - name: docker-sock-volume
        hostPath:
          path: /var/run/docker.sock
      - name: kubectl-config-volume
        configMap:
          name: dev-kubeconfig
          items:
          - key: kubectl.config
            path: kubectl.config
    
    Pipeline:

    pipeline {
        agent {
            kubernetes {
                label 'pods-jenkins-slave'
                defaultContainer 'jnlp'
                yamlFile 'podtemp.yaml'
            }
        }
        stages {
            stage("test") {
                steps {
                    container('kubectl') {
                        script {
                            sh "ls -la /root"
                        }
                    }
                }
            }
        }
    }
    

    The pod starts and runs, but the ConfigMap is not mounted in the container and the file is not available.

    I hope you can help.

    Thanks
