Installing Puppet 3 on a BeagleBone or Raspberry Pi

I am a proud owner of a NinjaBlocks device that I use to control my home (blinds, hot water, heater, presence detection,…), welcome to the Internet of Things! But that’s a story for future posts; the important thing here is that the device is actually a BeagleBone board running Ubuntu, connected to an Arduino board.

I thought, what would be better than managing the NinjaBlock Ubuntu with Puppet? There are a number of files and services I added there, and it would be nice to have Puppet installed. But the official Ubuntu packages only offer Puppet 2.7.11, so I installed the Puppet Labs repository package and tried to install Puppet 3, failing miserably because there is no Facter build for the armhf platform, the same one used in Raspbian.


wget https://apt.puppetlabs.com/puppetlabs-release-precise.deb
sudo dpkg -i puppetlabs-release-precise.deb
sudo apt-get update
sudo apt-get install puppet

and got this

Some packages could not be installed. This may mean that you have
requested an impossible situation or if you are using the unstable
distribution that some required packages have not yet been created
or been moved out of Incoming.
The following information may help to resolve the situation:

The following packages have unmet dependencies:
puppet : Depends: puppet-common (= 3.4.2-1puppetlabs1) but it is not going to be installed
E: Unable to correct problems, you have held broken packages.

When I tried to install all the required packages


sudo apt-get install puppet puppet-common facter

then I found the actual error

The following packages have unmet dependencies:
facter : Depends: dmidecode but it is not installable
E: Unable to correct problems, you have held broken packages.

Investigating a bit, I found that Facter has a dependency on dmidecode that should be optional, as dmidecode is not available for the ARM platform and is not really needed by Facter.

The solution? Rebuild the Facter package removing that dependency, easily done with this script. When vi opens, just delete the dmidecode dependency and you will get the fixed facter_1.6.18-1puppetlabs1_all.modified.deb.

curl -O http://apt.puppetlabs.com/pool/precise/main/f/facter/facter_1.6.18-1puppetlabs1_all.deb
curl -O https://gist.github.com/carlossg/8578202/raw/70689b1b74517cc4b0743e54d84dae8375503159/videbcontrol.sh
bash videbcontrol.sh facter_1.6.18-1puppetlabs1_all.deb
# remove dependency to dmidecode in the editor that opens
sudo dpkg -i facter_1.6.18-1puppetlabs1_all.modified.deb
# If you want to use ruby 1.9 instead of 1.8
sudo apt-get install libaugeas-ruby1.9.1 ruby1.9.1

# puppet may not install if all the dependencies are not listed
sudo apt-get install puppet puppet-common
# mark dependencies as automatically installed so they are removed when removing puppet
sudo apt-mark auto facter libaugeas-ruby1.9.1 puppet-common
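
If you prefer to do it by hand, the same result can be achieved with dpkg-deb directly; a minimal sketch of the repack process (the facter-tmp directory name is just an example):

# extract the package contents and its control information
dpkg-deb -x facter_1.6.18-1puppetlabs1_all.deb facter-tmp
dpkg-deb -e facter_1.6.18-1puppetlabs1_all.deb facter-tmp/DEBIAN

# edit facter-tmp/DEBIAN/control and remove dmidecode from the Depends: line

# rebuild the package with the modified control file
dpkg-deb -b facter-tmp facter_1.6.18-1puppetlabs1_all.modified.deb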

Installing RVM and multiple Ruby versions with Puppet

With the latest version of the Puppet RVM module it is even easier to install multiple versions of Ruby.


# install rubies from binaries
Rvm_system_ruby {
  ensure     => present,
  build_opts => ['--binary'],
}

# ensure rvm doesn't timeout finding binary rubies
# the umask line is the default content when installing rvm if file does not exist
file { '/etc/rvmrc':
  content => 'umask u=rwx,g=rwx,o=rx
export rvm_max_time_flag=20',
  mode    => '0664',
  before  => Class['rvm'],
}

class { 'rvm': }
rvm::system_user { 'vagrant': }
rvm_system_ruby {
  'ruby-1.9.3':
    default_use => true;
  'ruby-2.0.0':;
  'jruby':;
}

Hiera can also be used to define what rubies to install, making the Puppet code even less verbose

...
class { 'rvm': }
# rvm::system_user no longer needed
# rvm_system_ruby no longer needed

The equivalent Hiera YAML configuration to the previous example:

rvm::system_rubies:
  '1.9':
    default_use: true
  '2.0': {}
  'jruby-1.7': {}

rvm::system_users:
  - vagrant
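
To try this out locally you just need to point Puppet at a Hiera configuration and data directory; a minimal sketch using puppet apply, where hiera.yaml, hieradata/common.yaml (holding the YAML above) and rvm.pp are assumed names:

# minimal Hiera 1.x configuration pointing at a local data directory
cat > hiera.yaml <<EOF
---
:backends:
  - yaml
:yaml:
  :datadir: $PWD/hieradata
:hierarchy:
  - common
EOF

# hieradata/common.yaml holds the rvm::system_rubies / rvm::system_users keys above,
# and rvm.pp simply declares class { 'rvm': }
puppet apply --hiera_config $PWD/hiera.yaml rvm.pp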

Continuous Delivery with Maven, Puppet and Tomcat – Video from ApacheCon NA 2013

A little bit late, but the video from my session at ApacheCon Portland is finally available. That was the first version of the talk that I just gave at Agile Testing Days, which unfortunately was not recorded.

Description
Continuous Integration, with Apache Continuum or Jenkins, can be extended to fully manage deployments and production environments, running in Tomcat for instance, in a full Continuous Delivery cycle using infrastructure-as-code tools like Puppet, which make it possible to manage multiple servers and their configurations.

Abstract
Puppet is an infrastructure-as-code tool that allows easy and automated provisioning of servers, defining the packages, configuration, services,… in code. Enabling DevOps culture, tools like Puppet help drive Agile development all the way to operations and systems administration, and along with continuous integration tools like Apache Continuum or Jenkins, it is a key piece to accomplish repeatability and continuous delivery, automating the operations side during development, QA or production, and enabling testing of systems configuration.

Traditionally a field for system administrators, Puppet can empower developers, allowing both to collaborate on coding the infrastructure needed for their developments, whether it runs on hardware, virtual machines or the cloud. Developers and sysadmins can define what JDK version must be installed, the application server and its version, configuration files, war and jar files,… and easily make changes that propagate across all nodes.

Using Vagrant, a command line automation layer for VirtualBox, they can also spin up virtual machines in their local box, built from scratch with the same configuration as production servers, do development or testing, and tear them down afterwards.

We will show how to install and manage Puppet nodes with a JDK, multiple Tomcat instances with installed web applications, a database, configuration files and all the supporting services, including getting up and running with Vagrant and VirtualBox for quickstarts and Puppet experiments, as well as setting up automated testing of the Puppet code.

Infrastructure testing with Jenkins, Puppet and Vagrant at Agile Testing Days

This week I’m in Potsdam/Berlin giving the talk Infrastructure Testing with Jenkins, Puppet and Vagrant at Agile Testing Days, showing examples of using Puppet, Vagrant and other tools to implement a source-code-to-production continuous delivery cycle.

Slides are up on SlideShare, and the source code is available on GitHub.

Extend Continuous Integration to automatically test your infrastructure.

Continuous Integration can be extended to test deployments and production environments, in a Continuous Delivery cycle, using infrastructure-as-code tools like Puppet, which make it possible to manage multiple servers and their configurations, and to test the infrastructure the same way continuous integration tools do with developers’ code.

Puppet is an infrastructure-as-code tool that allows easy and automated provisioning of servers, defining the packages, configuration, services, … in code. Enabling DevOps culture, tools like Puppet help drive Agile development all the way to operations and systems administration, and along with continuous integration tools like Jenkins, it is a key piece to accomplish repeatability and continuous delivery, automating the operations side during development, QA or production, and enabling testing of systems configuration.

Using Vagrant, a command line automation layer for VirtualBox, we can easily spin up virtual machines with the same configuration as production servers, run our test suite, and tear them down afterwards.
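
That Vagrant cycle boils down to a handful of commands; a minimal sketch, assuming a Vagrantfile already configured with the Puppet provisioner:

vagrant up          # create the VM and run the Puppet provisioner
vagrant provision   # re-apply the Puppet manifests after a change
vagrant destroy -f  # throw the VM away once the tests have run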

We will show how to set up automated testing of an application and associated infrastructure and configurations, creating on demand virtual machines for testing, as part of your continuous integration process.

Testing Puppet and Hiera

At MaestroDev we have been using Puppet 3 for quite some time now, and one of the main reasons to upgrade from Puppet 2.x was the ability to use Hiera as a data backend for all the variables that customize the different VMs. We don’t have a lot of machines, but pretty much all of them have some differences, so Hiera allows us to apply the same manifests and modules to all the machines by just using different parameters on each server.

But testing Hiera is not that simple. With rspec-puppet you can test each class by passing parameters, but how can you test a class that calls another class and so on, where at some point you need to inject a parameter?

Well, this is possible: with the hiera-puppet-helper gem you can stub the Hiera data backend. First, add the needed gems to the Gemfile:

source 'https://rubygems.org'

group :rake do
  gem 'puppet'
  gem 'rspec-puppet'
  gem 'hiera-puppet-helper'
  gem 'rake'
  gem 'puppetlabs_spec_helper'
end

And in spec/spec_helper.rb:

require 'puppetlabs_spec_helper/module_spec_helper'
require 'hiera-puppet-helper/rspec'
require 'hiera'
require 'puppet/indirector/hiera'

# config hiera to work with let(:hiera_data)
def hiera_stub
  config = Hiera::Config.load(hiera_config)
  config[:logger] = 'puppet'
  Hiera.new(:config => config)
end

RSpec.configure do |c|
  c.mock_framework = :rspec

  c.before(:each) do
    Puppet::Indirector::Hiera.stub(:hiera => hiera_stub)
  end

end

Then you can use let(:hiera_data) to inject any parameters automatically into the Puppet classes from your RSpec tests.

require 'spec_helper'

describe 'mymodule::myclass' do

  let(:hiera_data) {{
    'mymodule::myclass::myparam' => 'myvalue'
  }}

  it { should contain_class('mymodule::myclass').with_myparam('myvalue') }
end

Check out a full module using hiera-puppet-helper at maestro_nodes.
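
To actually run the specs, the usual route is the rake task provided by puppetlabs_spec_helper; a minimal sketch, assuming the module has a Rakefile with require 'puppetlabs_spec_helper/rake_tasks':

bundle install
bundle exec rake spec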

PuppetConf video: How to Develop Puppet Modules

How to Develop Puppet Modules. From Source to the Forge With Zero Clicks (slides)

Puppet modules are a great way to reuse code, share your development with other people and take advantage of the hundreds of modules already available in the community. But how can we create, test and publish them as easily as possible? Now that infrastructure is defined as code, we need to use development best practices to build, test, deploy and use Puppet modules themselves.

Three steps for a fully automated process:

  • Continuous Integration of Puppet Modules
  • Automatic release and upload to the Puppet Forge
  • Deploy to Puppet master
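
As a taste of the release step, building the tarball that gets uploaded to the Forge is a single command; a minimal sketch run from the module directory (the resulting file name depends on the module’s Modulefile/metadata):

# builds pkg/<author>-<module>-<version>.tar.gz, ready to be published to the Puppet Forge
puppet module build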

More about PuppetConf in my previous entry.

PuppetConf recap: How to Develop Puppet Modules. From Source to the Forge With Zero Clicks

Last week I was at PuppetConf in San Francisco, speaking about automating the build-test-release-deploy lifecycle for Puppet modules and how we use them. Check out the Puppet Labs blog post The Puppet Forge: Modules From Design to Deployment, and see below for the original post from MaestroDev’s blog and some (quite unrelated) pictures I took. More conference pictures are in the official Flickr account.

Last week PuppetConf took place at the Fairmont in San Francisco, gathering Puppet users and enthusiasts from all over the world for five days of training, development and sessions.

MaestroDev was present at the event as we are heavy Puppet users, and contributors! We are currently the 3rd most frequent contributor of Puppet modules to the Puppet Forge, publishing 30 of the 50 modules that we use on a day-to-day basis.

Our architect Carlos Sanchez gave a presentation on How to Develop Puppet Modules: From Source to the Forge With Zero Clicks, showing how to apply an automatic build-test-release-deploy cycle to Puppet modules, because when you use infrastructure-as-code you have to apply development best practices to your infrastructure. In a demo, he showed how we use Maestro to automatically build, test, release and deploy our Puppet modules to the Puppet Forge, triggered on each commit for a truly Continuous Delivery experience.


And not only do we use Puppet to provide module deployment to the Forge, we also integrate Maestro with Puppet and Puppet Enterprise to automate Puppet updates across your Puppet agents. When you put these together with other development automation capabilities, you achieve Continuous Delivery for both your applications and infrastructure.

Imagine you are building or releasing the latest version of your software and need to propagate an update through all the running Puppet agents, as well as any updates to config files or packages managed by Puppet. Instead of waiting for the next Puppet run to happen, Maestro can automatically deploy the Puppet manifests and modules to the Puppet master after they have been tested, and propagate those changes, as well as the latest application build, to all or a subset of the servers running Puppet agents.

Thanks to all the Puppet Labs guys for a great event, and especially to Ryan Coleman for the work he is doing on the Forge and his help getting these Maestro integrations working.  We look forward to seeing everyone at the next event!