Jenkins Kubernetes Plugin 0.10 Released

The 0.10 release is mostly a bugfix release.

Changelog for 0.10:

  • Fixing checkbox serialization by jelly views #110
  • Do not throw exceptions in the test configuration page #107
  • Upgrade to the latest kubernetes-client version. #106
  • feat: make pipeline support instanceCap field #102
  • Instantiating Kubernetes Client with proper config in Container Steps #104
  • Fix NPE when slaves are read from disk #103
  • [JENKINS-39867] Upgrade fabric8 to 1.4.26 #101
  • The pod watcher now checks readiness of the right pod. #97
  • Fix logic for waitUntilContainerIsReady #95
  • instanceCap is not used in pipeline #92
  • Allow nesting of templates for inheritance. #94
  • Wait until all containers are in ready state before starting the slave #93
  • Adding basic retention for idle slaves, using the idleTimeout setting properly #91
  • Improve the inheritFrom functionality to better cover containers and volumes. #84
  • Fix null pointer exceptions. #89
  • fix PvcVolume jelly templates path #90
  • Added tool installations to the pod template. #85
  • fix configmap volume name #87
  • set the serviceAccount when creating new pods #86
  • Read and connect timeout are now correctly used to configure the client. #82
  • Fix nodeSelector in podTemplate #83
  • Use the client’s namespace when deleting a pod (fixes a regression preventing pods to delete). #81
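Two of the changes above surface directly in the pipeline DSL: `instanceCap` support for pipeline pod templates (#102, #92) and template nesting via `inheritFrom` (#94, #84). A minimal sketch of how they might be combined — the label, cap value, and maven image here are illustrative assumptions, not taken from the release notes:

```groovy
// Sketch of the 0.10-era pipeline DSL. The 'build-pod' label, cap of 5,
// and maven image are illustrative assumptions.
podTemplate(label: 'build-pod',
            instanceCap: 5,   // new in 0.10: cap concurrent pods from pipeline (#102, #92)
            containers: [
    containerTemplate(name: 'maven', image: 'maven:3-jdk-8',
                      ttyEnabled: true, command: 'cat')
]) {
    node('build-pod') {
        container('maven') {
            sh 'mvn -version'
        }
    }
}
```

A nested `podTemplate` declared with `inheritFrom: 'build-pod'` would then pick up these containers and volumes, per the improved inheritance in #84 and #94.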

11 thoughts on “Jenkins Kubernetes Plugin 0.10 Released”

  1. Hi Carlos,

    I just tried updating to this plugin, however when executing my pipeline I’m running into the following error message after upgrading (Jenkins 2.19.4). I have since downgraded to 0.9. Have any suggestions as to what to try?

    GitHub has been notified of this commit’s build result

    Failed to mkdirs: /root/workspace/build_path
    at hudson.FilePath.mkdirs(
    at hudson.plugins.git.GitSCM.createClient(
    at hudson.plugins.git.GitSCM.checkout(
    at org.jenkinsci.plugins.workflow.steps.scm.SCMStep.checkout(
    at org.jenkinsci.plugins.workflow.steps.scm.SCMStep$
    at org.jenkinsci.plugins.workflow.steps.scm.SCMStep$
    at org.jenkinsci.plugins.workflow.steps.AbstractSynchronousNonBlockingStepExecution$1$
    at org.jenkinsci.plugins.workflow.steps.AbstractSynchronousNonBlockingStepExecution$
    at java.util.concurrent.Executors$
    at java.util.concurrent.ThreadPoolExecutor.runWorker(
    at java.util.concurrent.ThreadPoolExecutor$
    Finished: FAILURE

  2. Hi Carlos,

    Great plugin! Do you have any advice on how to use docker within the slaves? It seems wrong to mount the docker socket of a kubernetes node :(. Any other alternatives to be able to build images and run tests on them within the jenkins builds?


  3. Hi Carlos,

    Great plugin! Do you have any advice on how to build docker images and run tests on them within the jenkins builds? It seems wrong to mount the docker socket of a kubernetes host to the jenkins slave containers 😦

  4. Hi Carlos,

    Thanks for the plugin! It’s very handy.
    Also, thanks for the article at – it makes good sense.

    However, I have a problem trying to achieve what you did in the article using your Kubernetes plugin (0.10). It seems that even if I define the image to use for slave creation (a Swarm slave image), the plugin creates a JNLP slave anyway 🙂 In this case it also takes the label that I’ve defined for the custom slave container (for example, “myslave”). So when a job bound to the “myslave” label starts, two slaves are created (JNLP and Swarm), and the job uses the JNLP slave, so the Swarm slave is useless.

    IMO the plugin lacks a checkbox like “Container will start the slave process”; when selected, your cloud extension just shouldn’t create anything itself.
    Is this a bug, or am I doing something wrong?
    I can create an issue and a pull request on GitHub if needed.

    Thanks in advance!

      • Hi Carlos,

        I didn’t create the slave myself; the plugin created it for me. The only problem is that it also created a JNLP slave, which I don’t want to be present, and in the plugin config I didn’t ask for it to be spawned.
        I have posted screenshots on imgur (I hope that’s fine) to illustrate what I mean.

        Here’s the plugin config (never mind the IPs; for some reason I can’t get the Jenkins master cluster IPs to resolve in slave containers, so I can’t use either cluster IPs or DNS references at the moment):
        Here’s the config of the job that echoes “Hello world” on a “myslave” labelled slave:
        After I start the job, these two guys appear:
        The first one is the JNLP slave: (it stole the “myslave” label!)
        The second one is the Swarm slave: (without the “myslave” label!)
        The job uses the JNLP slave, while the Swarm slave is useless.
        Am I doing something wrong, or is this a bug? If I create a Replication Controller via kubectl (not via the Kube plugin), only the Swarm slave connects to the master, and no JNLP one is created:

        By saying “You can overwrite it by creating a container called jnlp” – do you mean that I can disable this behaviour by manually creating a container called “jnlp” in Kube?

        I can work around the problem by defining another label in the plugin, and letting my own custom Swarm slave define its labels itself – but I just don’t want those JNLP slaves to appear. For one job instance, two slaves are created by the plugin each time.
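For the JNLP question above, the override Carlos mentions (“you can overwrite it by creating a container called jnlp”) would look roughly like this in a pipeline pod template. Treat this as a sketch of a typical setup: the image name and args are assumptions, not values from this thread:

```groovy
// Sketch: replacing the plugin's default JNLP container by declaring
// one named 'jnlp' yourself. The image and args are assumptions about
// a typical JNLP slave image of this era.
podTemplate(label: 'myslave', containers: [
    containerTemplate(name: 'jnlp',
                      image: 'jenkinsci/jnlp-slave',
                      args: '${computer.jnlpmac} ${computer.name}')
]) {
    node('myslave') {
        sh 'echo Hello world'   // runs in the custom jnlp container
    }
}
```

With a container explicitly named `jnlp`, the plugin should use it for the agent process instead of injecting its own, so only one slave appears per job.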
