Running Jenkins Pipelines in AWS Lambda

The Jenkinsfile-Runner-Lambda project is an AWS Lambda function to run Jenkins pipelines. It processes a GitHub webhook, git clones the repository and executes the Jenkinsfile in that git repository. It allows huge scalability, with 1000+ concurrent builds, and pay-per-use with zero cost when not used.

This function allows Jenkinsfile execution without needing a persistent Jenkins master running, in the same way as Jenkins X Serverless, but using AWS Lambda instead of Kubernetes. All the logs are stored in AWS CloudWatch and are easily accessible.

Why???

Why not?

I mean, it could make sense to run Jenkinsfiles in Lambda when you are building AWS related stuff, like creating an artifact and uploading it to S3.

Limitations

Lambda limitations:

  • 15 minutes execution time
  • 3008MB of memory
  • git clone and generated artifacts must fit in the 500MB provided

Current implementation limitations:

  • checkout scm does not work, change it to sh 'git clone https://github.com/carlossg/jenkinsfile-runner-lambda-example.git'
  • Jenkinsfile must add /usr/local/bin to PATH and use /tmp for any tool that needs to write files; see the example and the sketch below
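As an illustration, here is a minimal Jenkinsfile sketch honoring those constraints. The repository URL is the example project mentioned above; the Maven flags and /tmp paths are assumptions for illustration, not part of the project:

pipeline {
    agent any
    environment {
        // as noted above, /usr/local/bin must be on the PATH
        PATH = "/usr/local/bin:${env.PATH}"
        // anything that writes to disk must use /tmp
        MAVEN_OPTS = '-Duser.home=/tmp'
    }
    stages {
        stage('checkout') {
            steps {
                // checkout scm does not work, clone explicitly
                sh 'git clone https://github.com/carlossg/jenkinsfile-runner-lambda-example.git'
            }
        }
        stage('build') {
            steps {
                dir('jenkinsfile-runner-lambda-example') {
                    // keep the local Maven repository under /tmp
                    sh 'mvn -B -Dmaven.repo.local=/tmp/.m2 package'
                }
            }
        }
    }
}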

Extending

Three lambda layers are created:

  • jenkinsfile-runner: the main library
  • plugins: minimal set of plugins to build a Jenkinsfile
  • tools: git, openjdk, maven

You can add your plugins in a new layer as a zip file inside a plugins dir to be expanded in /opt/plugins. You could also add the Configuration as Code plugin and configure the Artifact Manager S3 to store all your artifacts in S3.
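For example, an extra plugins layer could be packaged and published roughly like this. This is a sketch; the plugin names, layer name and .hpi URLs are illustrative (they follow the Jenkins update center convention):

mkdir -p plugins
# download the extra plugins into the plugins dir
curl -L -o plugins/configuration-as-code.hpi \
    https://updates.jenkins.io/latest/configuration-as-code.hpi
curl -L -o plugins/artifact-manager-s3.hpi \
    https://updates.jenkins.io/latest/artifact-manager-s3.hpi
# the zip is expanded under /opt, so the plugins end up in /opt/plugins
zip -r extra-plugins.zip plugins
aws lambda publish-layer-version \
    --layer-name jenkinsfile-runner-extra-plugins \
    --zip-file fileb://extra-plugins.zip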

Other tools can be added as new layers, and they will be expanded in /opt. You can find a list of scripts for inspiration in the lambci project (gcc, go, java, php, python, ruby, rust) and bash, git and zip (git is already included in the tools layer here).

The layers are built with Docker, installing jenkinsfile-runner, tools and plugins under /opt which is where Lambda layers are expanded. These files are then zipped for upload to Lambda.

Installation

Create a Lambda function jenkinsfile-runner using the Java 8 runtime. Use the layers built in target/layer-* and target/jenkinsfile-runner-lambda-*.jar as the function code. You can use make publish to create them.

Set

  • handler: org.csanchez.jenkins.lambda.Handler::handleRequest
  • memory: 1024MB
  • timeout: 15 minutes

Or from the command line:
aws lambda create-function \
    --function-name jenkinsfile-runner \
    --handler org.csanchez.jenkins.lambda.Handler::handleRequest \
    --zip-file fileb://target/jenkinsfile-runner-lambda-1.0-SNAPSHOT.jar \
    --runtime java8 \
    --region us-east-1 \
    --timeout 900 \
    --memory-size 1024 \
    --layers output/layers.json

Exposing the Lambda Function

From the Lambda function configuration page add an API Gateway trigger. Select Create a new API and choose the security level. Save the function and you will get an HTTP API endpoint.

Note that to achieve asynchronous execution (GitHub webhook deliveries will time out if your function takes too long) you would need to configure API Gateway to send the payload to SNS and have Lambda listen to SNS events. See an example.

GitHub events

Add a GitHub JSON webhook to your git repo pointing to the Lambda API Gateway URL.

More information in the Jenkinsfile-Runner-Lambda GitHub page.

Progressive Delivery with Jenkins X

This is the second post in a Progressive Delivery series, see the first one, Progressive Delivery in Kubernetes: Blue-Green and Canary Deployments.

I have evaluated three Progressive Delivery options for Canary and Blue-Green deployments with Jenkins X, using my Croc Hunter example project.

  • Shipper enables blue-green and multi-cluster deployments for the Helm charts built by Jenkins X, but has limitations on the contents of the chart. You could do blue-green between staging and production environments.
  • Istio allows sending a percentage of the traffic to staging or preview environments by just creating a VirtualService.
  • Flagger builds on top of Istio and adds canary deployment, with automated roll out and roll back based on metrics. Jenkins X promotions to the production environment can automatically be canary-enabled for a graceful roll out by creating a Canary object.

Find the example code for Shipper, Istio and Flagger.

Shipper

Because Shipper places multiple restrictions on the Helm charts it accepts, I had to make some changes to the app. Also, Jenkins X only builds the Helm package from master, so we can’t do rollouts of PRs, only of the master branch.

The app label can’t include the release name, i.e. app: {{ template "fullname" . }} won’t work; you need something like app: {{ .Values.appLabel }}.
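A minimal sketch of that change in the chart, where appLabel is a value added to values.yaml (the names are illustrative):

# values.yaml
appLabel: croc-hunter-jenkinsx

# templates/deployment.yaml (excerpt)
spec:
  selector:
    matchLabels:
      app: {{ .Values.appLabel }}
  template:
    metadata:
      labels:
        app: {{ .Values.appLabel }}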

App rollout failed with the Jenkins X generated charts due to a generated templates/release.yaml, probably a conflict with the jenkins.io Release CRD:

Chart croc-hunter-jenkinsx-0.0.58 failed to render:
could not decode manifest: no kind "Release" is registered for version "jenkins.io/v1"

We just need to change jx step changelog to jx step changelog --generate-yaml=false so the file is not generated.
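In the Jenkinsfile generated by Jenkins X this is a one-line change in the release stage; a sketch, assuming the version argument the generated pipeline already uses:

sh "jx step changelog --generate-yaml=false --version v\$(cat ../../VERSION)"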

For multi-cluster deployments, the Shipper Application yaml needs to use public URLs for both ChartMuseum and the Docker registry, so the other clusters can reach the management cluster services and download the charts.
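An illustrative excerpt of such an Application object, with hypothetical public endpoints and region name standing in for the real ones:

apiVersion: shipper.booking.com/v1alpha1
kind: Application
metadata:
  name: croc-hunter-jenkinsx
  namespace: jx-production
spec:
  revisionHistoryLimit: 3
  template:
    chart:
      name: croc-hunter-jenkinsx
      version: 0.0.58
      # public ChartMuseum URL reachable from the application clusters
      repoUrl: https://chartmuseum.example.org
    values:
      image:
        # public Docker registry reachable from the application clusters
        repository: registry.example.org/croc-hunter-jenkinsx
    clusterRequirements:
      regions:
      - name: us-east1
    strategy:
      steps:
      - name: full on
        capacity:
          contender: 100
          incumbent: 0
        traffic:
          contender: 100
          incumbent: 0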

Istio

We can create this VirtualService to send 1% of the traffic to a Jenkins X preview environment (for PR number 35), for all requests coming to the Ingress Gateway for host croc-hunter.istio.example.com:

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
 name: croc-hunter-jenkinsx
 namespace: jx-production
spec:
 gateways:
 - public-gateway.istio-system.svc.cluster.local
 - mesh
 hosts:
 - croc-hunter.istio.example.com
 http:
 - route:
   - destination:
       host: croc-hunter-jenkinsx.jx-production.svc.cluster.local
       port:
         number: 80
     weight: 99
   - destination:
       host: croc-hunter-jenkinsx.jx-carlossg-croc-hunter-jenkinsx-serverless-pr-35.svc.cluster.local
       port:
         number: 80
     weight: 1

Flagger

We can create a Canary object for the chart deployed by Jenkins X in the jx-production namespace, and all new Jenkins X promotions to jx-production will automatically be rolled out 10% at a time and automatically rolled back if anything fails.

apiVersion: flagger.app/v1alpha2
kind: Canary
metadata:
  # canary name must match deployment name
  name: jx-production-croc-hunter-jenkinsx
  namespace: jx-production
spec:
  # deployment reference
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: jx-production-croc-hunter-jenkinsx
  # HPA reference (optional)
  # autoscalerRef:
  #   apiVersion: autoscaling/v2beta1
  #   kind: HorizontalPodAutoscaler
  #   name: jx-production-croc-hunter-jenkinsx
  # the maximum time in seconds for the canary deployment
  # to make progress before it is rolled back (default 600s)
  progressDeadlineSeconds: 60
  service:
    # container port
    port: 8080
    # Istio gateways (optional)
    gateways:
    - public-gateway.istio-system.svc.cluster.local
    # Istio virtual service host names (optional)
    hosts:
    - croc-hunter.istio.example.com
  canaryAnalysis:
    # schedule interval (default 60s)
    interval: 15s
    # max number of failed metric checks before rollback
    threshold: 5
    # max traffic percentage routed to canary
    # percentage (0-100)
    maxWeight: 50
    # canary increment step
    # percentage (0-100)
    stepWeight: 10
    metrics:
    - name: istio_requests_total
      # minimum req success rate (non 5xx responses)
      # percentage (0-100)
      threshold: 99
      interval: 1m
    - name: istio_request_duration_seconds_bucket
      # maximum req duration P99
      # milliseconds
      threshold: 500
      interval: 30s

Progressive Delivery in Kubernetes: Blue-Green and Canary Deployments

Progressive Delivery is the next step after Continuous Delivery: new versions are deployed to a subset of users and evaluated in terms of correctness and performance before being rolled out to all users, and rolled back if they don’t match some key metrics.

There are some interesting projects that make this easier in Kubernetes, and I’m going to talk about three of them that I took for a spin with a Jenkins X example project: Shipper, Istio and Flagger.

Shipper

Shipper is a project from booking.com extending Kubernetes to add sophisticated rollout strategies and multi-cluster orchestration (docs). It supports deployments from one to multiple clusters, and allows multi-region deployments.

Shipper is installed with a CLI, shipperctl, which pushes the configuration of the different clusters to manage. Note this issue with GKE contexts.

Shipper uses Helm packages for deployment, but they are not installed with Helm and won’t show up in helm list. Also, Deployments must use apiVersion: apps/v1 or Shipper will not edit the deployment to add the right labels and replica count.

Rollouts with Shipper are all about transitioning from an old Release, the incumbent, to a new Release, the contender. This is achieved by creating a new Application object that defines the n stages that the deployment goes through. For example, for a 3-step process:

  1. Staging: Deploy the new version to one pod, with no traffic
  2. 50/50: Deploy the new version to 50% of the pods and 50% of the traffic
  3. Full on: Deploy the new version to all the pods and all the traffic

   strategy:
     steps:
     - name: staging
       capacity:
         contender: 1
         incumbent: 100
       traffic:
         contender: 0
         incumbent: 100
     - name: 50/50
       capacity:
         contender: 50
         incumbent: 50
       traffic:
         contender: 50
         incumbent: 50
     - name: full on
       capacity:
         contender: 100
         incumbent: 0
       traffic:
         contender: 100
         incumbent: 0

If a step in the release does not send traffic to the pods, they can be accessed with kubectl port-forward, e.g. kubectl port-forward mypod 8080:8080, which is useful for testing before users can see the new version.

Shipper supports the concept of multiple clusters, but treats all clusters the same way, only using regions and filtering by capabilities (set in the cluster object), so there’s no option to have dev, staging and prod clusters with just one Application object. But we could have two Application objects:

  • myapp-staging deploys to region "staging"
  • myapp deploys to other regions

In GKE you can easily configure a multi cluster ingress that will expose the service running in multiple clusters and serve from the cluster closest to your location.

Limitations

The main limitations in Shipper:

  • Chart restrictions: The Chart must have exactly one Deployment object. The name of the Deployment should be templated with {{.Release.Name}}. The Deployment object should have apiVersion: apps/v1.
  • Pod-based traffic shifting: there is no way to have fine-grained traffic routing, e.g. sending 1% of the traffic to the new version; it is based on the number of pods running.
  • New Pods don’t get traffic if Shipper is not working

Istio

Istio is not a deployment tool but a service mesh. However, it is interesting because it has become very popular and allows traffic management, for example sending a percentage of the traffic to a different service, and other advanced networking.

In GKE it can be installed by just checking the box to enable Istio in the cluster configuration. In other clusters it can be installed manually or with Helm.
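For the manual route, a rough sketch with Helm 2 for the Istio releases current at the time (the paths come from the downloaded Istio release bundle and may differ between versions):

kubectl apply -f install/kubernetes/helm/istio/templates/crds.yaml
helm install install/kubernetes/helm/istio --name istio --namespace istio-system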

With Istio we can create a Gateway that processes all external traffic through the Ingress Gateway, and VirtualServices that manage the routing to our services. To do that, just find the ingress gateway IP address (see the kubectl sketch below) and configure a wildcard DNS entry for it, then create the Gateway that will route all external traffic through the Ingress Gateway.
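The external IP can usually be retrieved from the istio-ingressgateway service; a one-liner sketch, assuming the default installation in the istio-system namespace:

kubectl -n istio-system get service istio-ingressgateway \
    -o jsonpath='{.status.loadBalancer.ingress[0].ip}'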

apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
 name: public-gateway
 namespace: istio-system
spec:
 selector:
   istio: ingressgateway
 servers:
 - port:
     number: 80
     name: http
     protocol: HTTP
   hosts:
   - "*"

Istio does not manage the app lifecycle, just the networking. We can create a VirtualService to send 1% of the traffic to the service deployed from a pull request or from the master branch, for all requests coming to the Ingress Gateway:

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
 name: croc-hunter-jenkinsx
 namespace: jx-production
spec:
 gateways:
 - public-gateway.istio-system.svc.cluster.local
 - mesh
 hosts:
 - croc-hunter.istio.example.org
 http:
 - route:
   - destination:
       host: croc-hunter-jenkinsx.jx-production.svc.cluster.local
       port:
         number: 80
     weight: 99
   - destination:
       host: croc-hunter-jenkinsx.jx-staging.svc.cluster.local
       port:
         number: 80
     weight: 1

Flagger

Flagger is a project sponsored by Weaveworks that uses Istio to automate canary deployments and rollbacks driven by metrics from Prometheus. It goes beyond what Istio provides, automating progressive rollouts and rollbacks based on those metrics.

Flagger requires Istio to be installed with Prometheus and Servicegraph, plus some extra configuration and the installation of the Flagger controller itself. It also offers a Grafana dashboard to monitor the deployment progress.

The deployment rollout is defined by a Canary object that will generate primary and canary Deployment objects. When the target Deployment is edited, for instance to use a new image version, the Flagger controller will shift the traffic from 0% to 50% in 10% increments every minute, then promote the new version, or roll back if metrics such as response errors and request duration fail their checks.
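A rollout is triggered just by changing the pod spec of the target Deployment; a hedged sketch, where the container name, registry and tag are illustrative placeholders:

kubectl -n jx-production set image deployment/jx-production-croc-hunter-jenkinsx \
    croc-hunter-jenkinsx=registry.example.org/croc-hunter-jenkinsx:0.0.59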

Comparison

This comparison summarizes the strengths and weaknesses of Shipper and Flagger in terms of a few Progressive Delivery features:

  • Traffic routing: Shipper uses bare Kubernetes balancing as a % of pods; Flagger does advanced traffic routing with Istio (% of requests)
  • Deployment progress UI: Shipper has none; Flagger provides a Grafana dashboard
  • Deployments supported: Shipper only Helm charts, with strong limitations; Flagger any deployment
  • Multi-cluster deployment: Shipper yes; Flagger no
  • Canary or blue/green in a different namespace (e.g. jx-staging and jx-production): Shipper no; Flagger no, but the VirtualService could be manually edited to do it
  • Canary or blue/green in a different cluster: Shipper yes, but with a hack, using a new Application linked to a new "region"; Flagger maybe, with Istio multicluster
  • Automated rollout: Shipper no, the operator must manually go through the steps; Flagger yes, 10% traffic increase every minute, configurable
  • Automated rollback: Shipper no, the operator must detect the error and manually go through the steps; Flagger yes, based on Prometheus metrics
  • Requirements: Shipper none; Flagger needs Istio and Prometheus
  • Alerts: Shipper none; Flagger can send Slack alerts

To sum up, I see Shipper’s value in multi-cluster management and simplicity, not requiring anything other than Kubernetes, but it comes with some serious limitations.

Flagger really goes the extra mile, automating the rollout and rollback and providing fine-grained control over traffic, at a higher complexity cost due to the extra services needed (Istio, Prometheus).

Find the example code for Shipper, Istio and Flagger.

Jenkins Kubernetes Plugin: 2018 in Review

Last year was quite prolific for the Jenkins Kubernetes plugin, with 55 releases and lots of external contributions!

In 2019 there will be a push for Serverless Jenkins and, with that, a shift to make agents work better in a Kubernetes environment, with no persistent JNLP connections. You can watch my Jenkins X and Serverless Jenkins demo at Kubecon.

Main changes in the Kubernetes plugin in 2018:

  • Allow creating pod templates from YAML. This allows setting all possible fields of the Kubernetes API using YAML (see the sketch after this list)
  • Add a yamlFile option for the Declarative agent to read the YAML definition from a different file
  • Support multiple containers in Declarative Pipeline
  • Support passing kubeconfig file as credentials using secretFile credentials
  • Show pod logs and events in the Jenkins node page
  • Add optional usage restriction for a Kubernetes cloud using folder properties
  • Add Pod Retention policies to keep pods around on failure
  • Validate label and container names with regex
  • Add option to apply caps only on alive pods
  • Split credentials classes into new plugin kubernetes-credentials
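A short declarative sketch combining the yamlFile option and multiple containers (the pod YAML file name, container names and build commands are illustrative):

pipeline {
    agent {
        kubernetes {
            // pod spec read from a YAML file in the repository
            yamlFile 'KubernetesPod.yaml'
        }
    }
    stages {
        stage('build') {
            steps {
                container('maven') {
                    // runs in the maven container defined in KubernetesPod.yaml
                    sh 'mvn -B verify'
                }
                container('golang') {
                    // a second container in the same pod
                    sh 'go build ./...'
                }
            }
        }
    }
}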

Full Changelog

2018-12-31 kubernetes-1.14.2
2018-12-24 kubernetes-1.14.1
2018-12-19 kubernetes-1.14.0
2018-12-19 kubernetes-1.13.9
2018-12-13 kubernetes-1.13.8
2018-11-30 kubernetes-1.13.7
2018-11-23 kubernetes-1.13.6
2018-10-31 kubernetes-1.13.5
2018-10-30 kubernetes-1.13.4
2018-10-30 kubernetes-1.13.3
2018-10-24 kubernetes-1.13.2
2018-10-23 kubernetes-1.13.1
2018-10-19 kubernetes-1.13.0
2018-10-17 kubernetes-1.12.9
2018-10-17 kubernetes-1.12.8
2018-10-11 kubernetes-1.12.7
2018-09-07 kubernetes-1.12.6
2018-09-07 kubernetes-1.12.5
2018-08-28 kubernetes-1.12.4
2018-08-09 kubernetes-1.12.3
2018-08-07 kubernetes-1.12.2
2018-08-06 kubernetes-1.12.1
2018-07-31 kubernetes-1.12.0
2018-07-31 kubernetes-1.11.0
2018-07-23 kubernetes-1.10.2
2018-07-16 kubernetes-1.10.1
2018-07-11 kubernetes-1.10.0
2018-07-11 kubernetes-1.9.3
2018-06-26 kubernetes-1.9.2
2018-06-26 kubernetes-1.9.1
2018-06-26 kubernetes-1.9.0
2018-06-22 kubernetes-1.8.4
2018-06-22 kubernetes-1.8.3
2018-06-19 kubernetes-1.8.2
2018-06-13 kubernetes-1.8.1
2018-06-13 kubernetes-1.8.0
2018-05-30 kubernetes-1.7.1
2018-05-30 kubernetes-1.7.0
2018-05-29 kubernetes-1.6.4
2018-05-25 kubernetes-1.6.3
2018-05-23 kubernetes-1.6.2
2018-05-22 kubernetes-1.6.1
2018-04-25 kubernetes-1.6.0
2018-04-16 kubernetes-1.5.2
2018-04-09 kubernetes-1.5.1
2018-04-01 kubernetes-1.5
2018-03-28 kubernetes-1.4.1
2018-03-21 kubernetes-1.4
2018-03-16 kubernetes-1.3.3
2018-03-07 kubernetes-1.3.2
2018-02-21 kubernetes-1.3.1
2018-02-21 kubernetes-1.3
2018-02-16 kubernetes-1.2.1
2018-02-02 kubernetes-1.2
2018-01-29 kubernetes-1.1.4
2018-01-10 kubernetes-1.1.3