The 0.10 release is mostly a bugfix release.
Changelog for 0.10:
- Fixing checkbox serialization in jelly views #110
- Do not throw exceptions in the test configuration page #107
- Upgrade to the latest kubernetes-client version. #106
- feat: make pipeline support instanceCap field #102
- Instantiating Kubernetes Client with proper config in Container Steps #104
- Fix NPE when slaves are read from disk #103
- [JENKINS-39867] Upgrade fabric8 to 1.4.26 #101
- The pod watcher now checks readiness of the right pod. #97
- Fix logic for waitUntilContainerIsReady #95
- instanceCap is not used in pipeline #92
- Allow nesting of templates for inheritance. #94
- Wait until all containers are in ready state before starting the slave #93
- Adding basic retention for idle slaves, using the idleTimeout setting properly #91
- Improve the inheritFrom functionality to better cover containers and volumes. #84
- Fix null pointer exceptions. #89
- fix PvcVolume jelly templates path #90
- Added tool installations to the pod template. #85
- fix configmap volume name #87
- set the serviceAccount when creating new pods #86
- Read and connect timeout are now correctly used to configure the client. #82
- Fix nodeSelector in podTemplate #83
- Use the client’s namespace when deleting a pod (fixes a regression preventing pods from being deleted). #81
I just tried updating to this plugin, but when executing my pipeline I’m running into the following error message after upgrading (Jenkins 2.19.4). I have since downgraded to 0.9. Do you have any suggestions as to what to try?
GitHub has been notified of this commit’s build result
java.io.IOException: Failed to mkdirs: /root/workspace/build_path
Great plugin! Do you have any advice on how to use docker within the slaves? It seems wrong to mount the docker socket of a kubernetes node :(. Are there any other alternatives for building images and running tests on them within the Jenkins builds?
There are no good options yet to do it differently, until Docker-in-Docker works properly.
How to make docker build work inside jenkins slaves? I added /var/run/docker.sock mount but it still complains docker not found. Thanks!
You need the docker client installed in the image too
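For illustration, a minimal sketch of a slave image with the Docker client baked in (the base image and package name are assumptions; adjust them to your distribution):

```dockerfile
# Sketch of a JNLP slave image that adds the Docker client.
# The daemon is NOT run in this image; builds talk to the host
# daemon through the mounted /var/run/docker.sock.
FROM jenkinsci/jnlp-slave
USER root
RUN apt-get update \
 && apt-get install -y docker.io \
 && rm -rf /var/lib/apt/lists/*
USER jenkins
```

Note that the jenkins user may also need permission on the mounted socket (for example, membership in the group that owns it on the node).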
Thanks for the plugin! It’s very handy.
Also, thanks for the article at https://www.infoq.com/articles/scaling-docker-kubernetes-v1 – it makes good sense.
However, I have a problem trying to achieve what you did in the article using your Kubernetes plugin (0.10). Even if I define the image to use for slave creation (a Swarm slave image), the plugin creates a JNLP slave anyway 🙂 In this case it also takes the label that I’ve defined for the custom slave container (for example, “myslave”). So when a job bound to the “myslave” label starts, two slaves are created (JNLP and Swarm), and the job uses the JNLP slave, so the Swarm slave is useless.
IMO the plugin lacks a checkbox like “Container will start the slave process”; when it is selected, your cloud extension just shouldn’t create anything.
Is this a bug, or am I doing something wrong?
I can create an issue and a pull request on GitHub if needed.
Thanks in advance!
Hi, with the kubernetes plugin you don’t need to add swarm slaves; the plugin creates the slave for you. You can overwrite it by creating a container called jnlp.
I didn’t create the slave myself; the plugin created it for me. The only problem is that it also created a JNLP slave, which I don’t want to be present, and in the plugin config I didn’t ask for it to be spawned.
I have posted screenshots on imgur (I hope that’s fine) to illustrate what I mean.
Here’s the plugin config (never mind the IPs; for some reason I can’t get the Jenkins master’s cluster IPs to resolve in slave containers, so I can’t use cluster IPs or DNS references at the moment): http://imgur.com/mXTNEC0.png
Here’s the config of the job that echoes “Hello world” on a “myslave” labelled slave: http://i.imgur.com/9xODiQS.png
After I start the job, these two guys appear: http://i.imgur.com/i3xugiw.png
First one is JNLP slave: http://i.imgur.com/WHr6lYR.png (it stole “myslave” label!)
The second one is Swarm slave: http://i.imgur.com/WojaIZo.png (without “myslave” label!)
Job uses JNLP slave, while Swarm slave is useless.
Am I doing something wrong, or is this a bug? If I create a Replication Controller via kubectl (not via the Kube plugin), only the Swarm slave connects to the master, and no JNLP one is created: http://i.imgur.com/tV5mN3j.png
By saying “You can overwrite it by creating a container called jnlp” – do you mean that I can disable this behaviour by manually creating a container called “jnlp” in Kube?
I can work around the problem by defining another label in the plugin and letting my own custom Swarm slave define its labels itself, but I just don’t want those JNLP slaves to appear. For one job instance, two slaves are created by the plugin each time.
You don’t add swarm slaves with the kubernetes plugin. Just remove it and forget about it
Then customize your jnlp slave if you want: https://github.com/jenkinsci/kubernetes-plugin/#pipeline-support
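For example, a pipeline sketch of that override (the image name is a placeholder; defining a containerTemplate named “jnlp” replaces the default JNLP container instead of adding a second one alongside it):

```groovy
// Sketch only: 'myregistry/custom-jnlp-slave' is a hypothetical image
// that has the JNLP agent installed.
podTemplate(label: 'myslave', containers: [
    containerTemplate(
        name: 'jnlp',                           // the name 'jnlp' overrides the default slave container
        image: 'myregistry/custom-jnlp-slave',  // placeholder custom slave image
        args: '${computer.jnlpmac} ${computer.name}'
    )
]) {
    node('myslave') {
        sh 'echo Hello world'
    }
}
```

With this, only one pod with one slave container is created for the “myslave” label, instead of a JNLP slave plus a separate custom one.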