Serverless Jenkins Pipelines with Google Cloud Run

The Jenkinsfile-Runner-Google-Cloud-Run project is a Docker image for Google Cloud Run (a container-native, serverless platform) that runs Jenkins pipelines. It processes a GitHub webhook, clones the git repository, and executes the Jenkinsfile found in it. This allows high scalability and pay-per-use, with zero cost when idle.

This image allows Jenkinsfile execution without needing a persistent Jenkins master running, in the same way as Jenkins X Serverless, but using the Google Cloud Run platform instead of Kubernetes.

Google Cloud Run vs Project Fn vs AWS Lambda

I wrote three flavors of Jenkinsfile Runner:

  • Project Fn
  • AWS Lambda
  • Google Cloud Run

The image is similar to the other two. The main difference between Lambda and Google Cloud Run is the packaging: Lambda layers are limited in size and are expanded under /opt, while Google Cloud Run accepts any custom Dockerfile, where you can install whatever you want much more easily.

This image extends the Jenkinsfile Runner image instead of doing a Maven build with it as a dependency, as that simplifies classpath management.

Limitations

Max build duration is 15 minutes, but we can use a timeout value of up to 60 minutes by using gcloud beta.
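
For example, a minimal sketch of raising the timeout with gcloud beta, assuming the service is named jenkinsfile-runner as in the examples below:

# raise the request timeout to 60 minutes (only available through gcloud beta)
gcloud beta run services update jenkinsfile-runner \
  --platform managed \
  --region us-east1 \
  --timeout 3600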

Current implementation limitations:

  • checkout scm does not work; change it to sh 'git clone https://github.com/carlossg/jenkinsfile-runner-example.git'

Example

See the jenkinsfile-runner-example project for an example.

When PRs are built, Jenkins writes a comment back to the PR to show the status, as defined in the Jenkinsfile, and this is fully customizable.

Check the PRs at carlossg/jenkinsfile-runner-example.

Extending

You can add your own plugins to plugins.txt. You could also add the Configuration as Code plugin for configuration; see the example at jenkins.yaml.
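
For instance, a sketch of appending a couple of entries to plugins.txt, using the plugin-id:version format; the plugin ids here are illustrative:

# add plugins to be installed in the image (illustrative plugin ids)
cat >> plugins.txt << EOF
configuration-as-code:latest
github-branch-source:latest
EOF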

Other tools can be added to the Dockerfile.

Installation

GitHub webhook calls will time out if the request takes too long, so we also create a nodejs Google Cloud Function (index.js) that forwards the request to Google Cloud Run and returns the response to GitHub immediately while the build runs.
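
The function is deployed by make deploy (see Publishing below), but a standalone deployment would look roughly like this; the JENKINSFILE_RUNNER_URL variable name is hypothetical:

# sketch: deploy the forwarding function, passing the Cloud Run URL
# in an environment variable (the variable name is hypothetical)
gcloud functions deploy jenkinsfile-runner-function \
  --runtime nodejs10 \
  --trigger-http \
  --set-env-vars JENKINSFILE_RUNNER_URL=${URL}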

Building

Build the package

# build the runner package
mvn verify
# build the container image
docker build -t jenkinsfile-runner-google-cloud-run .

Publishing

Both the function and the Google Cloud Run service need to be deployed.

Set GITHUB_TOKEN_JENKINSFILE_RUNNER to a token that allows posting PR comments. A more secure way would be to use Google Cloud Secret Manager, as sketched below.

export GITHUB_TOKEN_JENKINSFILE_RUNNER=... # token used to post PR comments
PROJECT_ID=$(gcloud config get-value project 2> /dev/null) # your Google Cloud project id
make deploy # deploy both the function and the Cloud Run service
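
A sketch of the Secret Manager alternative, assuming a secret named github-token-jenkinsfile-runner:

# store the token once in Secret Manager
echo -n "${GITHUB_TOKEN_JENKINSFILE_RUNNER}" | \
  gcloud secrets create github-token-jenkinsfile-runner --data-file=-

# later, read it back instead of keeping it in the environment
export GITHUB_TOKEN_JENKINSFILE_RUNNER=$(gcloud secrets versions access latest \
  --secret github-token-jenkinsfile-runner)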

Note the function URL and use it to create a GitHub webhook of type json.

Execution

To test the Google Cloud Run execution

# get the deployed Cloud Run service URL
URL=$(gcloud run services describe jenkinsfile-runner \
  --platform managed \
  --region us-east1 \
  --format 'value(status.address.url)')

# send a sample GitHub webhook payload
curl -v -H "Content-Type: application/json" ${URL}/handle \
  -d @src/test/resources/github.json

Logging

gcloud logging read \
  "resource.type=cloud_run_revision AND resource.labels.service_name=jenkinsfile-runner" \
  --format "value(textPayload)" --limit 100

or

gcloud alpha logging tail \
  "resource.type=cloud_run_revision AND resource.labels.service_name=jenkinsfile-runner" \
  --format "value(textPayload)"

GitHub events

Add a GitHub json webhook to your git repo pointing to the Google Cloud Function URL that you can get with

gcloud functions describe jenkinsfile-runner-function \
  --format 'value(httpsTrigger.url)'
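
The webhook can also be created through the GitHub API; a sketch, where the events list is illustrative:

FUNCTION_URL=$(gcloud functions describe jenkinsfile-runner-function \
  --format 'value(httpsTrigger.url)')

# sketch: create the json webhook through the GitHub API
curl -H "Authorization: token ${GITHUB_TOKEN_JENKINSFILE_RUNNER}" \
  https://api.github.com/repos/carlossg/jenkinsfile-runner-example/hooks \
  -d '{"name": "web", "active": true, "events": ["push", "pull_request"],
    "config": {"url": "'"${FUNCTION_URL}"'", "content_type": "json"}}'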

Testing

The image can be run locally

# run the image locally, passing the GitHub token
docker run -ti --rm -p 8080:8080 \
  -e GITHUB_TOKEN=${GITHUB_TOKEN_JENKINSFILE_RUNNER} \
  jenkinsfile-runner-google-cloud-run

# in another terminal, send a sample webhook payload
curl -v -H "Content-Type: application/json" \
  -X POST http://localhost:8080/handle \
  -d @src/test/resources/github.json

More information can be found on the Jenkinsfile-Runner-Google-Cloud-Run GitHub page.

Building Docker Images with Kaniko: Pushing to Google Container Registry (GCR)

To push to Google Container Registry (GCR) we need to log in to Google Cloud and mount our local $HOME/.config/gcloud, which contains our credentials, into the kaniko container so it can push to GCR. kaniko caching works with GCR, so the intermediate layers are pushed to gcr.io/$PROJECT/kaniko-demo/cache:_some_large_uuid_ to be reused in subsequent builds.

git clone https://github.com/carlossg/kaniko-demo.git
cd kaniko-demo

gcloud auth application-default login # get the Google Cloud credentials
PROJECT=$(gcloud config get-value project 2> /dev/null) # Your Google Cloud project id
# mount the gcloud credentials read-only and the current dir as the build context
docker run \
    -v $HOME/.config/gcloud:/root/.config/gcloud:ro \
    -v `pwd`:/workspace \
    gcr.io/kaniko-project/executor:v1.0.0 \
    --destination gcr.io/$PROJECT/kaniko-demo:kaniko-docker \
    --cache

kaniko can cache layers created by RUN commands in a remote repository. Before executing a command, kaniko checks the cache for the layer. If it exists, kaniko will pull and extract the cached layer instead of executing the command. If not, kaniko will execute the command and then push the newly created layer to the cache.

We can see in the output how kaniko uploads the intermediate layers to the cache.

INFO[0001] Resolved base name golang to build-env
INFO[0001] Retrieving image manifest golang
INFO[0001] Retrieving image golang
INFO[0004] Retrieving image manifest golang
INFO[0004] Retrieving image golang
INFO[0006] No base image, nothing to extract
INFO[0006] Built cross stage deps: map[0:[/src/bin/kaniko-demo]]
INFO[0006] Retrieving image manifest golang
INFO[0006] Retrieving image golang
INFO[0008] Retrieving image manifest golang
INFO[0008] Retrieving image golang
INFO[0010] Executing 0 build triggers
INFO[0010] Using files from context: [/workspace]
INFO[0011] Checking for cached layer gcr.io/api-project-642841493686/kaniko-demo/cache:0ab16b2e8a90e3820282b9f1ef6faf5b9a083e1fbfe8a445c36abcca00236b4f...
INFO[0011] No cached layer found for cmd RUN cd /src && make
INFO[0011] Unpacking rootfs as cmd ADD . /src requires it.
INFO[0051] Using files from context: [/workspace]
INFO[0051] ADD . /src
INFO[0051] Taking snapshot of files...
INFO[0051] RUN cd /src && make
INFO[0051] Taking snapshot of full filesystem...
INFO[0061] cmd: /bin/sh
INFO[0061] args: [-c cd /src && make]
INFO[0061] Running: [/bin/sh -c cd /src && make]
CGO_ENABLED=0 go build -ldflags '' -o bin/kaniko-demo main.go
INFO[0065] Taking snapshot of full filesystem...
INFO[0070] Pushing layer gcr.io/api-project-642841493686/kaniko-demo/cache:0ab16b2e8a90e3820282b9f1ef6faf5b9a083e1fbfe8a445c36abcca00236b4f to cache now
INFO[0144] Saving file src/bin/kaniko-demo for later use
INFO[0144] Deleting filesystem...
INFO[0145] No base image, nothing to extract
INFO[0145] Executing 0 build triggers
INFO[0145] cmd: EXPOSE
INFO[0145] Adding exposed port: 8080/tcp
INFO[0145] Checking for cached layer gcr.io/api-project-642841493686/kaniko-demo/cache:6ec16d3475b976bd7cbd41b74000c5d2543bdc2a35a635907415a0995784676d...
INFO[0146] No cached layer found for cmd COPY --from=build-env /src/bin/kaniko-demo /
INFO[0146] Unpacking rootfs as cmd COPY --from=build-env /src/bin/kaniko-demo / requires it.
INFO[0146] EXPOSE 8080
INFO[0146] cmd: EXPOSE
INFO[0146] Adding exposed port: 8080/tcp
INFO[0146] No files changed in this command, skipping snapshotting.
INFO[0146] ENTRYPOINT ["/kaniko-demo"]
INFO[0146] No files changed in this command, skipping snapshotting.
INFO[0146] COPY --from=build-env /src/bin/kaniko-demo /
INFO[0146] Taking snapshot of files...
INFO[0146] Pushing layer gcr.io/api-project-642841493686/kaniko-demo/cache:6ec16d3475b976bd7cbd41b74000c5d2543bdc2a35a635907415a0995784676d to cache now

If we run kaniko twice we can see how the cached layers are pulled instead of rebuilt.

INFO[0001] Resolved base name golang to build-env
INFO[0001] Retrieving image manifest golang
INFO[0001] Retrieving image golang
INFO[0004] Retrieving image manifest golang
INFO[0004] Retrieving image golang
INFO[0006] No base image, nothing to extract
INFO[0006] Built cross stage deps: map[0:[/src/bin/kaniko-demo]]
INFO[0006] Retrieving image manifest golang
INFO[0006] Retrieving image golang
INFO[0008] Retrieving image manifest golang
INFO[0008] Retrieving image golang
INFO[0010] Executing 0 build triggers
INFO[0010] Using files from context: [/workspace]
INFO[0010] Checking for cached layer gcr.io/api-project-642841493686/kaniko-demo/cache:0ab16b2e8a90e3820282b9f1ef6faf5b9a083e1fbfe8a445c36abcca00236b4f...
INFO[0012] Using caching version of cmd: RUN cd /src && make
INFO[0012] Unpacking rootfs as cmd ADD . /src requires it.
INFO[0049] Using files from context: [/workspace]
INFO[0049] ADD . /src
INFO[0049] Taking snapshot of files...
INFO[0049] RUN cd /src && make
INFO[0049] Found cached layer, extracting to filesystem
INFO[0051] Saving file src/bin/kaniko-demo for later use
INFO[0051] Deleting filesystem...
INFO[0052] No base image, nothing to extract
INFO[0052] Executing 0 build triggers
INFO[0052] cmd: EXPOSE
INFO[0052] Adding exposed port: 8080/tcp
INFO[0052] Checking for cached layer gcr.io/api-project-642841493686/kaniko-demo/cache:6ec16d3475b976bd7cbd41b74000c5d2543bdc2a35a635907415a0995784676d...
INFO[0054] Using caching version of cmd: COPY --from=build-env /src/bin/kaniko-demo /
INFO[0054] Skipping unpacking as no commands require it.
INFO[0054] EXPOSE 8080
INFO[0054] cmd: EXPOSE
INFO[0054] Adding exposed port: 8080/tcp
INFO[0054] No files changed in this command, skipping snapshotting.
INFO[0054] ENTRYPOINT ["/kaniko-demo"]
INFO[0054] No files changed in this command, skipping snapshotting.
INFO[0054] COPY --from=build-env /src/bin/kaniko-demo /
INFO[0054] Found cached layer, extracting to filesystem

In Kubernetes

To push to GCR we can use a service account mounted as a Kubernetes secret, but when running on Google Kubernetes Engine (GKE) it is more convenient and secure to use the node pool service account.
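
A sketch of that service account approach, assuming a key file kaniko-secret.json exported for a service account with permissions to push to GCR:

# create a Kubernetes secret from the service account key; the pod would mount it
# and set GOOGLE_APPLICATION_CREDENTIALS to the mounted file path
kubectl create secret generic kaniko-secret --from-file=kaniko-secret.json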

When creating the GKE node pool, the default configuration only includes read-only access to the Storage API, and we need full access in order to push to GCR. This needs to be changed under Add a new node pool – Security – Access scopes – Set access for each API – Storage – Full. Note that the scopes cannot be changed once the node pool has been created.
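
The same can be done from the command line; a sketch, assuming a cluster named demo-cluster:

# create a node pool whose nodes have full access to Cloud Storage (and thus GCR)
gcloud container node-pools create kaniko-pool \
  --cluster demo-cluster \
  --scopes storage-full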

If the nodes have the correct service account with the full storage access scope, then we do not need to do anything extra in our kaniko pod, as it will be able to push to GCR just fine.

PROJECT=$(gcloud config get-value project 2> /dev/null)

cat << EOF | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: kaniko-gcr
spec:
  restartPolicy: Never
  containers:
  - name: kaniko
    image: gcr.io/kaniko-project/executor:v1.0.0
    imagePullPolicy: Always
    args: ["--dockerfile=Dockerfile",
            "--context=git://github.com/carlossg/kaniko-demo.git",
            "--destination=gcr.io/${PROJECT}/kaniko-demo:latest",
            "--cache=true"]
    resources:
      limits:
        cpu: 1
        memory: 1Gi
EOF
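
The build can then be followed with

kubectl logs -f kaniko-gcr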