Running a JVM in a Container Without Getting Killed II

A follow-up to Running a JVM in a Container Without Getting Killed.

Java 10 brings improved container integration. With no extra flags, the JVM will use 1/4 of the container memory for the heap.

$ docker run -m 1GB openjdk:10 java -XshowSettings:vm \
    -version
VM settings:
    Max. Heap Size (Estimated): 247.50M
    Using VM: OpenJDK 64-Bit Server VM

openjdk version "10.0.1" 2018-04-17
OpenJDK Runtime Environment (build 10.0.1+10-Debian-4)
OpenJDK 64-Bit Server VM (build 10.0.1+10-Debian-4, mixed mode)

Java 10 makes the -XX:MaxRAM parameter unnecessary, as the JVM now correctly detects the container memory limit on its own.

You can still use the -XX:MaxRAMFraction=1 option to squeeze all the memory from the container.

$ docker run -m 1GB openjdk:10 java -XshowSettings:vm \
    -XX:MaxRAMFraction=1 -version
OpenJDK 64-Bit Server VM warning: Option MaxRAMFraction was deprecated in version 10.0 and will likely be removed in a future release.
VM settings:
    Max. Heap Size (Estimated): 989.88M
    Using VM: OpenJDK 64-Bit Server VM

openjdk version "10.0.1" 2018-04-17
OpenJDK Runtime Environment (build 10.0.1+10-Debian-4)
OpenJDK 64-Bit Server VM (build 10.0.1+10-Debian-4, mixed mode)

But this can be risky if your container uses off-heap memory, as almost all of the container memory would be allocated to the heap. You would have to either set -XX:MaxRAMFraction=2 and use only 50% of the container memory for the heap, or fall back to an explicit -Xmx.
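As a middle ground you can also cap the heap explicitly with -Xmx, leaving the remainder of the container memory for off-heap use. A minimal sketch, assuming you want around 3/4 of a 1GB container for the heap:

$ docker run -m 1GB openjdk:10 java -XshowSettings:vm \
    -Xmx768m -version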

Building Docker Images without Docker

Kaniko is a project launched by Google that builds container images from a Dockerfile without Docker or the Docker daemon.

Kaniko can be used inside Kubernetes to build a Docker image and push it to a registry, supporting Docker registry, Google Container Registry and AWS ECR, as well as any other registry supported by Docker credential helpers.

This solution is still not safe, as containers run as root, but it is way better than mounting the Docker socket and launching containers on the host. For one, there are no leaked resources or containers running outside the scheduler.

To launch Kaniko from Jenkins in Kubernetes you just need an agent template that uses the debug Kaniko image (just to have cat and nohup available) and a Kubernetes secret with the image registry credentials, as shown in the example pipeline below.

UPDATED: some changes are needed for the latest Kaniko.
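The regcred secret referenced by the pod template can be created with kubectl. A sketch with placeholder credentials, using the registry from the pipeline below:

$ kubectl create secret docker-registry regcred \
    --docker-server=mydockerregistry:5000 \
    --docker-username=myuser \
    --docker-password=mypassword \
    --docker-email=myuser@example.com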

/**
 * This pipeline will build and deploy a Docker image with Kaniko
 * https://github.com/GoogleContainerTools/kaniko
 * without needing a Docker host
 *
 * You need to create a jenkins-docker-cfg secret with your docker config
 * as described in
 * https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/#create-a-secret-in-the-cluster-that-holds-your-authorization-token
 */

def label = "kaniko-${UUID.randomUUID().toString()}"

podTemplate(name: 'kaniko', label: label, yaml: """
kind: Pod
metadata:
  name: kaniko
spec:
  containers:
  - name: kaniko
    image: gcr.io/kaniko-project/executor:debug
    imagePullPolicy: Always
    command:
    - /busybox/cat
    tty: true
    volumeMounts:
      - name: jenkins-docker-cfg
        mountPath: /root
  volumes:
  - name: jenkins-docker-cfg
    projected:
      sources:
      - secret:
          name: regcred
          items:
            - key: .dockerconfigjson
              path: .docker/config.json
"""
) {
  node(label) {
    stage('Build with Kaniko') {
      git 'https://github.com/jenkinsci/docker-jnlp-slave.git'
      container(name: 'kaniko', shell: '/busybox/sh') {
        sh '''#!/busybox/sh
        /kaniko/executor -f `pwd`/Dockerfile -c `pwd` --insecure-skip-tls-verify --destination=mydockerregistry:5000/myorg/myimage
        '''
      }
    }
  }
}

Pros:

  • No need to mount the Docker socket or have the docker binary
  • No stray containers running outside of the scheduler

Cons:

  • Still not secure
  • Does not support the full Dockerfile syntax yet

Skaffold also has support for Kaniko, and can be used in your Jenkins X pipelines, which use Skaffold to abstract the image building.

Speaking Trips on DevOps, Kubernetes, Jenkins

The speaking season for this second half of the year is starting, and you’ll find me talking about DevOps, Kubernetes, Jenkins,… at

If you organize a conference and would like me to give a talk in 2018, you can find me @csanchez.


Running a JVM in a Container Without Getting Killed

No pun intended

JDK 8u131 has backported a nice feature from JDK 9: the ability of the JVM to detect how much memory is available when running inside a Docker container.

I have talked multiple times about the problems of running a JVM inside a container, and how in most cases it will default to a max heap of 1/4 of the host memory, not the container’s.

For example, on my machine:

$ docker run -m 100MB openjdk:8u121 java -XshowSettings:vm -version
VM settings:
    Max. Heap Size (Estimated): 444.50M
    Ergonomics Machine Class: server
    Using VM: OpenJDK 64-Bit Server VM

Wait, WAT? I set a container memory of 100MB and my JVM sets a max heap of 444M? It is very likely going to get my JVM killed by the kernel at some point.

Let’s try JDK 8u131 with the experimental option -XX:+UseCGroupMemoryLimitForHeap:

$ docker run -m 100MB openjdk:8u131 java \
  -XX:+UnlockExperimentalVMOptions \
  -XX:+UseCGroupMemoryLimitForHeap \
  -XshowSettings:vm -version
VM settings:
    Max. Heap Size (Estimated): 44.50M
    Ergonomics Machine Class: server
    Using VM: OpenJDK 64-Bit Server VM

OK, this makes more sense: the JVM was able to detect that the container has only 100MB and set the max heap to 44M.

Let’s try in a bigger container:

$ docker run -m 1GB openjdk:8u131 java \
  -XX:+UnlockExperimentalVMOptions \
  -XX:+UseCGroupMemoryLimitForHeap \
  -XshowSettings:vm -version
VM settings:
    Max. Heap Size (Estimated): 228.00M
    Ergonomics Machine Class: server
    Using VM: OpenJDK 64-Bit Server VM

Mmm, now the container has 1GB but the JVM is only using 228M as max heap. Can we optimize this even more, given that nothing other than the JVM is running in the container? Yes we can!

$ docker run -m 1GB openjdk:8u131 java \
  -XX:+UnlockExperimentalVMOptions \
  -XX:+UseCGroupMemoryLimitForHeap \
  -XX:MaxRAMFraction=1 -XshowSettings:vm -version
VM settings:
    Max. Heap Size (Estimated): 910.50M
    Ergonomics Machine Class: server
    Using VM: OpenJDK 64-Bit Server VM

Using -XX:MaxRAMFraction we are telling the JVM to use available memory / MaxRAMFraction as the max heap. With -XX:MaxRAMFraction=1 we are using almost all the available memory as max heap.
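If you need to leave headroom for off-heap memory you can use a less aggressive fraction. A sketch with -XX:MaxRAMFraction=2, which should report a max heap of roughly half the container memory:

$ docker run -m 1GB openjdk:8u131 java \
  -XX:+UnlockExperimentalVMOptions \
  -XX:+UseCGroupMemoryLimitForHeap \
  -XX:MaxRAMFraction=2 -XshowSettings:vm -version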

UPDATE: follow-up for Java 10+ at Running a JVM in a Container Without Getting Killed II.

Docker Registry with Let’s Encrypt Certificate

A one-liner to run an SSL Docker registry that generates a Let’s Encrypt certificate.

This command will create a registry proxying the Docker Hub, caching the images in a registry volume.

The Let’s Encrypt certificate will be auto-generated and stored in the host dir as letsencrypt.json. You could also use a Docker volume to store it.

In order for the certificate generation to work, the registry needs to be accessible from the internet on port 443. After the certificate is generated that’s no longer needed.

docker run -d -p 443:5000 --name registry \
  -v `pwd`:/etc/docker/registry/ \
  -v registry:/var/lib/registry \
  -e REGISTRY_HTTP_ADDR=0.0.0.0:5000 \
  -e REGISTRY_HTTP_HOST=https://docker.example.com \
  -e REGISTRY_HTTP_TLS_LETSENCRYPT_CACHEFILE=/etc/docker/registry/letsencrypt.json \
  -e REGISTRY_HTTP_TLS_LETSENCRYPT_EMAIL=admin@example.com \
  -e REGISTRY_PROXY_REMOTEURL=https://registry-1.docker.io \
  registry:2

You can also create a config.yml in this dir and run the registry using the file instead of environment variables:

version: 0.1
storage:
  filesystem:
http:
  addr: 0.0.0.0:5000
  host: https://docker.example.com
  tls:
    letsencrypt:
      cachefile: /etc/docker/registry/letsencrypt.json
      email: admin@example.com
proxy:
  remoteurl: https://registry-1.docker.io

Then run:

docker run -d -p 443:5000 --name registry \
  -v `pwd`:/etc/docker/registry/ \
  -v registry:/var/lib/registry \
  registry:2

If you want to use this as a remote repository and not just for proxying, remove the proxy entry from the configuration.
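To verify the proxy end to end you can pull an image through it. A quick test, assuming docker.example.com points at the host running the registry:

$ docker pull docker.example.com/library/alpine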

Running Docker containers as non root

Running containers as root is a bad practice, but many Docker images available in the Docker Hub have the user set to root by default, so what can we do about it?

TL;DR: Use -u 65534 -w /tmp -e _JAVA_OPTIONS=-Duser.home=/tmp for a typical Java image, plus any tool-specific environment variables needed.

Option 1 (build time):

Create a new derived image that creates a new user and switches the default user to that one:


FROM openjdk:8-jdk
# create a non-root user with a home directory and a login shell
RUN useradd --create-home -s /bin/bash user
WORKDIR /home/user
# make it the default user of the image
USER user
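A quick way to check the default user of the derived image; the image tag here is just an example:

$ docker build -t myorg/openjdk-nonroot .
$ docker run --rm myorg/openjdk-nonroot whoami
# should print: user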

This is simple, but forces us to republish all these derived images, creating a maintenance nightmare.

Option 2 (runtime):

Use the docker run -u option to choose which user to run the container as:

docker run -ti --rm -u 1000 openjdk:8-jdk

This may work, but we can hit some issues; let’s see:

$ docker run -ti --rm -u 1000 openjdk:8-jdk git clone https://github.com/jenkinsci/docker
fatal: could not create work tree dir 'docker'.: Permission denied

Well, we obviously don’t have permission to write to the default workdir. Let’s fix it using -w and a dir that is writable, for instance /tmp:

$ docker run -ti --rm -u 1000 -w /tmp openjdk:8-jdk git clone https://github.com/jenkinsci/docker
Cloning into 'docker'...
remote: Counting objects: 1498, done.
remote: Total 1498 (delta 0), reused 0 (delta 0), pack-reused 1498
Receiving objects: 100% (1498/1498), 287.46 KiB | 0 bytes/s, done.
Resolving deltas: 100% (772/772), done.
Checking connectivity... done.
fatal: unable to look up current user in the passwd file: no such user


Git does not like being run as a user that does not exist, so we need to pick one of the existing users.

UPDATE: git 2.6.5+ removes this requirement, so we can run as any user even if it is not in the passwd file. For previous versions, setting the GIT_COMMITTER_NAME and GIT_COMMITTER_EMAIL environment variables also works.
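For those older git versions the same clone works once the committer identity comes from the environment. A sketch with placeholder name and email:

$ docker run -ti --rm -u 1000 -w /tmp \
  -e GIT_COMMITTER_NAME=user -e GIT_COMMITTER_EMAIL=user@example.com \
  openjdk:8-jdk git clone https://github.com/jenkinsci/docker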

$ docker run -ti --rm -u 1000 -w /tmp openjdk:8-jdk cat /etc/passwd
root:x:0:0:root:/root:/bin/bash
daemon:x:1:1:daemon:/usr/sbin:/usr/sbin/nologin
bin:x:2:2:bin:/bin:/usr/sbin/nologin
sys:x:3:3:sys:/dev:/usr/sbin/nologin
sync:x:4:65534:sync:/bin:/bin/sync
games:x:5:60:games:/usr/games:/usr/sbin/nologin
man:x:6:12:man:/var/cache/man:/usr/sbin/nologin
lp:x:7:7:lp:/var/spool/lpd:/usr/sbin/nologin
mail:x:8:8:mail:/var/mail:/usr/sbin/nologin
news:x:9:9:news:/var/spool/news:/usr/sbin/nologin
uucp:x:10:10:uucp:/var/spool/uucp:/usr/sbin/nologin
proxy:x:13:13:proxy:/bin:/usr/sbin/nologin
www-data:x:33:33:www-data:/var/www:/usr/sbin/nologin
backup:x:34:34:backup:/var/backups:/usr/sbin/nologin
list:x:38:38:Mailing List Manager:/var/list:/usr/sbin/nologin
irc:x:39:39:ircd:/var/run/ircd:/usr/sbin/nologin
gnats:x:41:41:Gnats Bug-Reporting System (admin):/var/lib/gnats:/usr/sbin/nologin
nobody:x:65534:65534:nobody:/nonexistent:/usr/sbin/nologin
systemd-timesync:x:100:103:systemd Time Synchronization,,,:/run/systemd:/bin/false
systemd-network:x:101:104:systemd Network Management,,,:/run/systemd/netif:/bin/false
systemd-resolve:x:102:105:systemd Resolver,,,:/run/systemd/resolve:/bin/false
systemd-bus-proxy:x:103:106:systemd Bus Proxy,,,:/run/systemd:/bin/false
messagebus:x:104:108::/var/run/dbus:/bin/false

The user nobody:65534 could be a good candidate, and as it is present in the default Debian and Alpine Docker images, it will also be present in most images in the Hub.

$ docker run -ti --rm -u 65534 -w /tmp openjdk:8-jdk \
git clone https://github.com/jenkinsci/docker
Cloning into 'docker'...
remote: Counting objects: 1498, done.
remote: Total 1498 (delta 0), reused 0 (delta 0), pack-reused 1498
Receiving objects: 100% (1498/1498), 287.46 KiB | 0 bytes/s, done.
Resolving deltas: 100% (772/772), done.
Checking connectivity... done.

OK, that worked! Now let’s try to run something else, like a Maven build:


$ docker run -ti --rm -u 65534 -w /tmp maven:3 \
bash -c "git clone https://github.com/jenkinsci/kubernetes-plugin.git && \
cd kubernetes-plugin && mvn package"
touch: cannot touch ‘/root/.m2/copy_reference_file.log’: Permission denied
Can not write to /root/.m2/copy_reference_file.log. Wrong volume permissions?

This means entering the domain of each tool and checking how to configure it. The Maven Docker image instructs us to use MAVEN_CONFIG and pass -Duser.home, otherwise we would get an error: [ERROR] Could not create local repository at /nonexistent/.m2/repository -> [Help 1]

Here is the working solution:


$ docker run -ti --rm -u 65534 -w /tmp -e MAVEN_CONFIG=/tmp maven:3 \
bash -c "git clone https://github.com/jenkinsci/kubernetes-plugin.git && \
cd kubernetes-plugin && mvn -Duser.home=/tmp package"

Can we generalize this a bit for other Java apps? Yes! By using the env var _JAVA_OPTIONS we can pass the user.home property to any Java app.

$ docker run -ti --rm -u 65534 -w /tmp -e _JAVA_OPTIONS=-Duser.home=/tmp \
-e MAVEN_CONFIG=/tmp maven:3 \
bash -c "git clone https://github.com/jenkinsci/kubernetes-plugin.git && \
cd kubernetes-plugin && mvn package"
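To confirm the JVM picks up the property you can dump the system properties. A quick check; the grep assumes the usual "user.home = /tmp" line in the settings output:

$ docker run --rm -u 65534 -e _JAVA_OPTIONS=-Duser.home=/tmp openjdk:8-jdk \
  java -XshowSettings:properties -version 2>&1 | grep user.home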


Jenkins Kubernetes Plugin 0.10 Released

The 0.10 release is mostly a bugfix release.

Changelog for 0.10:

  • Fixing checkbox serialization by jelly views #110
  • Do not throw exceptions in the test configuration page #107
  • Upgrade to the latest kubernetes-client version. #106
  • feat: make pipeline support instanceCap field #102
  • Instantiating Kubernetes Client with proper config in Container Steps #104
  • Fix NPE when slaves are read from disk #103
  • [JENKINS-39867] Upgrade fabric8 to 1.4.26 #101
  • The pod watcher now checks readiness of the right pod. #97
  • Fix logic for waitUntilContainerIsReady #95
  • instanceCap is not used in pipeline #92
  • Allow nesting of templates for inheritance. #94
  • Wait until all containers are in ready state before starting the slave #93
  • Adding basic retention for idle slaves, using the idleTimeout setting properly #91
  • Improve the inheritFrom functionality to better cover containers and volumes. #84
  • Fix null pointer exceptions. #89
  • fix PvcVolume jelly templates path #90
  • Added tool installations to the pod template. #85
  • fix configmap volume name #87
  • set the serviceAccount when creating new pods #86
  • Read and connect timeout are now correctly used to configure the client. #82
  • Fix nodeSelector in podTemplate #83
  • Use the client’s namespace when deleting a pod (fixes a regression preventing pods to delete). #81