Anatomy of a DevOps Orchestration Engine: (III) Agents

Previously: (II) Architecture

In Maestro we typically use a Maestro master server and multiple Maestro agents. Each agent is a small service where the actual work happens: it processes the work sent by the master over ActiveMQ and executes the plugins with the data received.

Architecture

The two main goals of the agent are load distribution and support for heterogeneous compositions. The more agents running, the more compositions can be executed in parallel, and compositions can target specific agents based on their features, such as architecture, operating system,… which is a must for development environments. For simplicity each agent can only run one composition at a time, but you can have multiple agent processes running on a single server.

The agent uses Puppet Facter to gather the machine facts (operating system, memory size, cloud provider data,…) and sends all that information to the master, which can use it to filter which compositions run on the agent. For instance I may want to run a composition on a Windows agent, or on an agent that has a specific piece of software installed. Facter supports external facts, so it is really easy to add new filtering capabilities and not be limited to what Facter provides out of the box: a small text file added to /etc/facter/facts.d/ will be reported by Facter to the master server.
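
For example, a file such as /etc/facter/facts.d/maestro.txt with one key=value pair per line (the fact names and values here are just illustrative) is enough for the agent to advertise new facts:

    selenium_grid=true
    build_tools=maven,ant

Running facter build_tools on the agent would then print the value, and compositions can require that fact to be routed to this agent.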

Agents are installed alongside all the tools that may be needed: Git to clone repos, Jenkins Swarm to reuse the agents as Jenkins slaves, or MCollective agents to allow updating the agent itself automatically with Puppet when new manifests are deployed to the Puppet master. In our internal environment any commit to Puppet manifests or modules automatically triggers our rspec-puppet tests, the deployment of those manifests to the Puppet master, and a cascading Puppet update of all the machines in our staging environment using MCollective. All our Puppet modules are likewise built and tested on each commit, and a new version is published to the Puppet Forge automatically using rspec-puppet and Puppet Blacksmith.
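
The per-module setup for that is small. A sketch of a module Rakefile, assuming the puppetlabs_spec_helper and puppet-blacksmith gems (exact task names may vary between versions):

    # Rakefile sketch for a Puppet module; assumes the puppetlabs_spec_helper
    # and puppet-blacksmith gems, and task names may vary by version.
    require 'puppetlabs_spec_helper/rake_tasks' # rake spec -> run the rspec-puppet tests
    require 'puppet_blacksmith/rake_tasks'      # rake module:bump, module:tag, module:push

A CI composition can then run the spec task on every commit and the push task to publish the new version to the Puppet Forge.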

Maestro also supports manually assigning agents to pools, and matching compositions with agent pools, so compositions can be limited to run on a predefined set of agents.

The agent process is written in Ruby and runs under JRuby on the JVM, thus supporting multiple operating systems and architectures and making it easy to write extensions in Java or Ruby. It connects to the master’s Composition Execution Engine through ActiveMQ, using STOMP for messaging.

Plugins

Plugins are small pieces of code written in Java or Ruby that run on the agent to execute the actual work. We have made all our plugins available on GitHub so they can be used as examples to create new plugins for custom tasks.

Plugins can be added to Maestro at runtime and automatically show up in the composition editor. The plugin manifest defines the plugin images, the tasks it provides, and the fields of each task. Based on the workload received, the agent downloads and executes the plugin, which just reads the fields in the workload and does the actual work, whatever that might be, sending output back to LuCEE and populating the composition context.

For instance the Fog plugin can manage multiple clouds, such as EC2, where it can start and stop instances. The plugin receives the fields defined in the composition (credentials, image id,…), calls the EC2 API, streams the status to the Maestro output (successfully created, instance id,…) and puts some data (ids of the instances created, public IPs,…) in the composition context for other tasks to use. All of that in less than 100 lines of code.
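
That pattern is easy to see in a stripped-down sketch. The worker base class and the get_field / write_output / save_output_value calls below are my approximation of the Maestro plugin API, so treat the exact names as assumptions; the Fog calls are standard fog usage:

    # Sketch of a provision task; the maestro_plugin require, base class and
    # field/output/context methods are approximations of the plugin API.
    require 'maestro_plugin'
    require 'fog'

    class ProvisionWorker < Maestro::MaestroWorker
      def provision
        compute = Fog::Compute.new(
          provider:              'AWS',
          aws_access_key_id:     get_field('access_key_id'),
          aws_secret_access_key: get_field('secret_access_key'))

        server = compute.servers.create(image_id: get_field('image_id'))
        server.wait_for { ready? }

        # Stream status back to the Maestro output...
        write_output("Started #{server.id} at #{server.public_ip_address}\n")
        # ...and leave the instance id in the composition context for later tasks
        save_output_value('instance_ids', [server.id])
      end
    end

A matching deprovision task would read the same context value back and terminate the instances.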

The context is important to avoid redefining field values and to provide meaningful defaults, so if you have a provision task and a deprovision task, the values in the latter are inherited from the former.

Agent cloud manager

The agent cloud manager is a service that runs on Google Compute Engine and watches a number of Maestro installations to provide automatic agent scaling. Based on preconfigured parameters, such as the min/max number of agents for each agent pool, the max waiting time,… and the current status of each agent pool queue, the service can start new machines from specific images, suspend them (destroy the instance but keep the disk), or destroy them completely.
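
The decision logic itself is essentially a per-pool check. A hypothetical sketch of that check follows; the parameters and thresholds are made up for illustration, not the actual service logic:

    # Hypothetical sketch of the scaling decision for one agent pool; the
    # parameters and thresholds are illustrative, not the actual service.
    def scaling_action(queued:, running:, oldest_wait:, min:, max:, max_wait:)
      if queued > 0 && running < max && oldest_wait > max_wait
        :start_agent        # boot a new instance from the pool image
      elsif queued.zero? && running > min
        :suspend_agent      # destroy the instance but keep the disk
      else
        :no_op
      end
    end

    # e.g. a busy pool where compositions have been waiting too long
    scaling_action(queued: 4, running: 2, oldest_wait: 600,
                   min: 1, max: 10, max_wait: 300)   # => :start_agent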

We are also trying out Docker instead of full VMs, and have created a couple of interesting CentOS-based Docker images for developers: a Jenkins swarm slave image and a build agent image that includes everything we use for development (Java, Ant, Maven, RVM with 1.9, 2.0, 2.1 and JRuby, Git, Svn), all configurable with credentials at runtime.

Anatomy of a DevOps Orchestration Engine: (II) Architecture


Previously: (I) Workflow

The Maestro architecture is basically defined by a master server and multiple agents, written in Java and Ruby (JRuby) for the backend and in JavaScript with AngularJS for the frontend, integrating several open source services. It is quite a heterogeneous stack, with multiple languages, build tools, packages,… using the best tool for the job in each part.

Architecture

Master

The master services include:

  • Maestro REST API
  • End user web interface
  • Composition Execution Engine (LuCEE)
  • ActiveMQ for STOMP messaging
  • PostgreSQL (or MySQL)
  • MongoDB

Maestro REST API

The REST API is a webapp written in Java using Spring and packaged with a Jetty server. It is documented with Swagger annotations, which automatically generate a really nice web interface that allows trying all the operations from the browser.

It handles caching and security, based on LDAP or database records, and delegates to the Composition Execution Engine (LuCEE), typically through the LuCEE REST API but also via STOMP messaging to avoid continuous polling.

It also implements handlers to execute compositions on commit callbacks from GitHub, Git, SVN,…

End user web interface

The end user UI is written in AngularJS, using the AngularJS Bootstrap components and Less stylesheets. It connects to the REST API, so everything that can be done through the webapp can also be automated using the REST API (automation, automation, automation!). I have found Angular really nice to work with, aside from its complicated service, factory, provider,… abstractions, with good modularity and the ability to reuse third party plugins.

The UI is built with Maven and Grunt (better suited for the JavaScript parts), using Bower to manage all the JavaScript dependencies (Angular core, Bootstrap, Ladda button spinner,…), and Karma + PhantomJS for headless UI tests without needing a real browser.

Composition Execution Engine (LuCEE)

LuCEE is a webapp that manages the execution of compositions, sending work to and receiving results from the agents through ActiveMQ STOMP queues, and storing state in the PostgreSQL database. LuCEE uses the Ruote workflow engine for work scheduling, and manages the composition queue and agent routing: it basically checks which compositions need to be executed and decides on which agent to execute them, based on composition requirements, free agents, and other factors, e.g. prioritizing previously used agents that are likely to have a cached copy of sources and dependencies to speed things up.

It is written in Ruby, which made it quick to implement a first version, with a simple REST API using Sinatra and a STOMP connector to send messages to the Maestro REST webapp through ActiveMQ.
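
That combination is small enough to sketch. The following is not LuCEE’s actual code, just the Sinatra + STOMP pattern described above, with a made-up endpoint, queue name and payload:

    # Minimal Sinatra + STOMP sketch; endpoint, queue name, payload and
    # credentials are illustrative, not LuCEE's actual API.
    require 'sinatra'
    require 'stomp'
    require 'json'

    stomp = Stomp::Client.new('maestro', 'secret', 'localhost', 61613)

    post '/compositions/:id/execute' do
      # Hand the composition off to a queue instead of executing it inline
      stomp.publish('/queue/compositions', { composition_id: params[:id] }.to_json)
      status 202
    end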

It is packaged as a JRuby war with Warbler, and both the LuCEE and REST API wars run in the same Jetty server, all packaged as an RPM for easier deployment.

ActiveMQ

ActiveMQ handles all the communication between LuCEE, the REST API webapp, and the agents using multiple STOMP queues. All the communication between LuCEE and the agents, such as workloads, agent output, agent status,… is sent over a queue so it can be easily scaled across a high number of agents.

LuCEE also pushes changes in the database to the REST API webapp so it can update the caches without needing continuous polling.

PostgreSQL

LuCEE uses PostgreSQL (or MySQL, or any other SQL database supported by Ruby DataMapper) as its main storage to save compositions, projects, tasks,… The SQL database is also used by the REST API webapp to store permissions and user data when not using LDAP.
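
Because the models are plain Ruby classes, switching from PostgreSQL to MySQL is just a different connection URI. A minimal DataMapper sketch, with an illustrative model rather than the real schema:

    # DataMapper sketch; the connection string and model are illustrative.
    require 'data_mapper'

    DataMapper.setup(:default, 'postgres://maestro:secret@localhost/maestro')
    # or 'mysql://maestro:secret@localhost/maestro' to switch databases

    class Composition
      include DataMapper::Resource

      property :id,         Serial
      property :name,       String,   required: true
      property :created_at, DateTime
    end

    DataMapper.finalize
    DataMapper.auto_upgrade!  # create or alter tables to match the models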

MongoDB

We found that in order to build more complex dashboards and reports we needed to store all sorts of unstructured data from the plugins, from run time or status to anything a plugin developer may want to keep, such as the GitHub payload received or a test stacktrace. That data is sent by the agents to LuCEE and then stored in MongoDB, where it can be queried directly (all your data belong to you) or through a reporting pane in the webapp.
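
Since the documents are schemaless, a plugin can attach whatever data it finds useful and it remains queryable later. A small sketch with the Ruby mongo driver, using made-up collection and field names:

    # Sketch with the Ruby mongo driver; collection and field names are illustrative.
    require 'mongo'

    client = Mongo::Client.new(['127.0.0.1:27017'], database: 'maestro')
    runs   = client[:composition_runs]

    # A plugin can store arbitrary data alongside the run
    runs.insert_one(composition: 'acceptance-tests',
                    status:      'failed',
                    duration:    342,
                    github:      { ref: 'refs/heads/master', pusher: 'csanchez' })

    # ...and dashboards or ad-hoc queries can read it back directly
    runs.find(status: 'failed').each do |run|
      puts "#{run['composition']}: #{run['duration']}s (#{run['github']['ref']})"
    end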

Next: (III) Agents

Anatomy of a DevOps Orchestration Engine: (I) Workflow

At MaestroDev we have been building what may be called, for lack of a better name, a DevOps Orchestration Engine, and it is long overdue to talk about what we have been doing there and, most importantly, how.

The basic purpose of the application is to tie together the different systems involved in a Continuous Delivery cycle: continuous integration server, SCM, build tools, packaging tools, cloud resources, notification systems,… and streamline the process across these different tools. So it hooks into a bunch of popular tools to orchestrate interactions between them. An example:

[Screenshot: an example composition in the Maestro editor]

This workflow, or composition as we call it, will:

  1. download a war file from a Maven repository (previously built by Jenkins)
  2. start an Amazon EC2 instance with Tomcat preinstalled
  3. deploy the war
  4. checkout the acceptance tests from Git
  5. run some tests with Maven (Selenium tests using SauceLabs) against that instance
  6. wait for a user to confirm before moving to the next step (to record the human approval or to do some extra manual tests if needed)
  7. destroy the Amazon EC2 instance

Maestro provides a nice web UI that gives visibility into the composition execution and aggregates the logs from all the tools that run during the composition in a single place.

[Screenshot: composition execution view with the aggregated log]

But the power comes from combining compositions together, as there are tasks for typical flows, such as forking and joining compositions, calling another composition in case of failure, or waiting for a composition to finish.

[Screenshot: five compositions tied together in the composition editor]

Here we have a more complex setup with five compositions tied together.

  • * – A composition that calls compositions 1 and 2.
  • 1 – A Jenkins build
  • 2 – The acceptance tests composition mentioned before
  • 2a – Notification composition in case the acceptance tests fail
  • 3 – Deployment to production

So you can see that compositions are not just limited to build, test, deploy. The tasks can be combined as needed to build your specific process.

Tasks are contributed by plugins, easily written in Ruby or Java, which define what fields are needed in the UI and what to do with those fields and the composition context. Maestro includes a lot of prebuilt tasks, publicly available on GitHub, from executing shell scripts to Jenkins job creation or Amazon Route 53 record management, and anything else can be added as a new plugin.

All the tasks share a common context and use sensible defaults, so if the SCM checkout path is not defined it creates a specific working directory for the composition, and that is reused by the Maven, Ant,… plugins to avoid copying and pasting the fields. That’s also how an EC2 deprovision task doesn’t need any configuration if there was a provision task earlier in the composition: by default it will just deprovision the instances started previously in the composition.

You can take a look at our public Maestro instance, showing some examples and builds of public projects, mostly Puppet modules that are automatically built and deployed to the Puppet Forge, as well as the build and release compositions for the Maestro plugins themselves. In the next posts I’ll be talking about the technologies used and the distributed architecture of Maestro.

Next: (II) Architecture

InterOp New York and ApacheCON Atlanta

One week ago I was at InterOp New York, where we announced the release of Maestro 3 and talked to conference attendees and analysts about the product and the ideas behind it. We got some coverage on Maestro 3, including a video interview with InformationWeek, which covers some of the ideas behind the product, not all of them given the format and duration of the interview, but I thought it was worth posting anyway.

We are doing webinars this Wednesday the 3rd and on the 10th, showcasing how it may help organizations improve their build-test-release-deploy process, so if you are interested you can check the times and sign up on our Build Through Deploy in a Single Interface – Introduction to Maestro 3 page.

Tomorrow I’ll be in Atlanta for ApacheCON, along with Brett Porter, who is doing a Maven training today. I’ll be at the BarCamp on Tuesday and the Maven Meetup on Wednesday for sure. BTW the BarCamp and Meetups are free for everybody, so if you are in the Atlanta area you can come by even if you don’t attend the show.

At Interop NYC, Maestro Dev showcased its latest Maestro 3 app dev choreography environment — a cloud-based solution that stitches together the discrete steps (build, test, deploy, etc.) of application development, using existing tools for each step.


Cloud Computing opportunities in the development (build-test-deploy) space

You hear the word “cloud” everywhere: running applications on the cloud, scaling with the cloud,… but not so often from the development lifecycle perspective (code, commit, test, deploy to QA, release, etc.), even though it brings fundamental changes to that side too.

The scenario

If you belong to, or manage, a group of developers, you are doing at least some sort of automated builds with continuous integration. You have continuous integration servers building your code, on specific schedules or when developers commit changes, though not as often as you would like. The number of projects grows and you add more servers for the new projects, mixing and matching environments for different needs (Linux, Windows, OS X,…).

The problem and the solution

The architecture we use for our Maestro 3 product is composed of one server that handles all the development lifecycle assets. Behind the scenes we use proven open source projects: Apache Continuum for distributed builds, Apache Archiva for repository management, Sonar for reporting, and Selenium for multi-environment integration and load testing. And we add the Morph mCloud private cloud solution, which is also based on open source projects such as Eucalyptus or Puppet.

We have multiple Continuum build agents doing builds, and multiple Selenium agents for webapp integration testing, as well as several application servers for continuous deployment and testing.

  • limited capacity
    • problem: your hardware is limited, and provisioning and setting up new servers requires a considerable amount of time
    • solution: assets are dynamic; you can spin up new virtual machines when you need them, and shuffle infrastructure around in a matter of minutes with just a few clicks from the web interface. The hybrid cloud approach means you can start new servers in a public cloud, such as Amazon EC2, if you really need to
  • capacity utilization
    • problem: you need to set up less active projects on the same servers as more active ones to make sure servers are not under- or over-utilized
    • solution: infrastructure is shared across all projects. If a project needs it more often than another, it is there to be used
  • scheduling conflicts
    • problem: at specific times, e.g. releases, you need to stop automatic jobs to ensure resources are available for those builds
    • solution: smart queue management can differentiate between build types (e.g. continuous builds, release builds) and prioritize accordingly
  • location dependence
    • problem: you need to manage the infrastructure, knowing where each server is and what it is building
    • solution: a central view of all the development assets makes management easier: build agents, test agents or application servers
  • continuous growth
    • problem: new projects are being added while you are trying to manage and optimize your current setup
    • solution: because infrastructure is shared, adding new projects is just a matter of scaling the cloud wider, without assigning infrastructure to specific projects
  • complexity in the process
    • problem: multiply all that by the number of different stages in your promotion process: development environment, QA, staging, production
    • solution: you can keep independent networks in the cloud while sharing resources, such as virtual machine templates for your stack
  • long time to market
    • problem: the transition from development to QA to production is a pain point because releases and promotion are not automated
    • solution: compositions (workflows) allow you to design and automate the steps from development to production, including manual approvals
  • complexity in the organization
    • problem: in large organizations, multiply all that by the number of separate entities, departments or groups that have their own separate structure
    • solution: by enabling self-provisioning you can assign quotas so developers or groups can start servers as they need them, in a matter of minutes, from prebuilt templates

Why a private cloud?

  • cost effectiveness: development infrastructure is running continuously. Global development teams make use of it 24×7
  • bandwidth usage: the traffic between your source control system and the cloud can be expensive, because it’s continuously getting the code for building
  • security restrictions: most companies don’t like their code being exported anywhere outside their firewall, and companies that need to comply with regulations (e.g. PCI) have strong requirements on external networks
  • performance: in a private cloud you can optimize the hardware for your specific scenario, reducing the number of VMs needed for the same job compared to public cloud providers
  • heterogeneous environments: if you need to develop in different environments then there are chances that the public cloud service won’t be able to provide them

The new challenges

  • parallelism, you need to know the dependencies between components to know what needs to be built before what
  • stickiness, or how to take advantage of the state of the agents to start builds on the same ones if possible, e.g. agents that built a project before can do a source control update instead of a full checkout, or may already have the dependencies in the filesystem
  • asset management, when you have an increasing number of services running, stopping and starting as needed, you need to know what is running and where, not only at the hardware level but at the service level: build agents, test agents and deployment servers

The new vision

  • you can improve continuous integration as developers check in code, because the barrier to adding new infrastructure is minimal (given you have enough hardware in your cloud, or if you use external cloud services), which means reduced time to find problems
  • developers have access to the infrastructure they need to do their jobs, for instance starting an exact copy of the production environment to fix an issue from a cloud template that they can get up and running in minutes and tear down at the end, without incurring high hardware costs
  • less friction and easier interaction with IT people, as developers can self-provision infrastructure, if necessary trading virtual machines they no longer need for the ones they do need

By leveraging the cloud you can solve existing problems in your development lifecycle, and at the same time you will be able to do things you would not even have considered before because the technology made them complicated or impossible. Definitely something worth checking out for large development teams.

Maestro 3 is going to be released this week at InterOp New York (come over and say hi if you are around) but we are already demoing the development snapshots to clients and at conferences like JavaOne.