Apache Barcamp Spain 2011: a summary

Last Saturday the Apache Barcamp Spain took place in Seville. It was the first Apache Software Foundation event in Spain, ever.

The idea started almost a year ago, chatting at ApacheCON with the ASF folks who were at the time organizing the Apache Barcamps in Sydney and Oxford. I liked the idea, and with some hand-waving and crazy ideas I was able to convince the other organizers, whom you all should know: the local Klicap guys Manuel Recena & Antonio Muñiz, who handled all the local organization; David Bonilla, who had to jump from plane to plane to get from SF to Seville in two days; and Abel Muiño, who couldn’t attend because he’s having a baby, congrats! Without them, this event could never, ever have happened!

The pre-event

A bit stressful, not what I had in mind for a self-organizing barcamp. Thanks to our sponsors we had the budget to offer breakfast, coffee & drinks after lunch, t-shirts, streaming, and a party afterwards, so we had to organize all of that plus the usual bits and pieces: venue, wi-fi, getting the word out,…

The event

A full day with 3 tracks and 18 sessions, Barcamp style. 130 people signed up, with tickets gone in less than 5 hours. We asked everybody to confirm attendance one week before the event so we could free up some room for people on the waiting list, but still some people didn’t show up 😦 (the problem with free events), which was compensated by other people showing up without registration, whom we gladly accepted.

It was great to see so many people coming from places all over Spain, considering they didn’t know what the sessions would be about, flying in from the Canary Islands, Barcelona, Galicia,… That definitely sets the bar high for the content of the event.

Celebrity t-shirt

The attendees received a free CELEBRITY t-shirt (no kidding) with room to write your name on it, instead of the usual boring stickers, to encourage people to wear it, which most of them did, plus a Pokemon card (more about that later).

After the initial event introduction, all the attendees that wanted to give a talk came on stage, and we had volunteers not only to fill the 18 sessions, but 27 session proposals in total! A great speakers/attendees ratio.

Votación democrática (democratic voting), by Ana Buigues

So everybody had to vote, and we got down to the final 18 sessions. I’d like to thank everybody that suggested a talk, even if it wasn’t voted in: don’t let it bring you down, and try again at other conferences.

Note that if you buy what looks like Post-Its, make sure they actually stick and are not just colored paper! We had to work around the issue by voting on a table instead of on the whiteboards.

There were sessions about CSS, Apache Droids, Apache Maven, Apache Hadoop, Apache James, Apache ServiceMix, Play framework, Python, cloud, GIS, DevOps,…

We had recording/streaming (not without its issues) working in two of the tracks too; the videos will be uploaded to the website soon.

Marea Azul (blue tide), by Aroshni

Unfortunately, organizing and speaking (I gave a talk about DevOps which I’ll post about soon) didn’t leave much time for networking, just a bit during the coffee breaks and lunch. I’d like to have caught up with many of the people that were around; sorry I was running around most of the time, I’ll see you at the next event with more time!

After the sessions we had four lightning talks, a format I believe was new to most people, but it was entertaining. I particularly liked the always funny (if you can get his German-southern Spanish accent) ASF member Thorsten Scherler, who just got on Twitter this week!

The final act was the Pokemon ceremony, which we just made up during the day 🙂 All the speakers went up on stage, and everybody had to give their Pokemon card to the speaker of the talk they liked most, which was quite a fun time. We had an Android tablet ready for the winner, Nacho Coloma, and Amazon gift cards for the 2nd and 3rd, our little way to encourage people to speak and spread their knowledge. We also gave away a gift card to a random tweet that used the #barcampes hashtag.

The Party

Flamenco at La Carboneria

From there on, it was beer time. We went to a typical Flamenco bar in the old part of Seville, La Carboneria. On our way there it was fun to spot other attendees wearing the blue t-shirts, which made locals refer to us as the smurf tide.

At the bar we had free beers waiting for us, a private area with tons of food, serrano ham, cheese, tortilla,… very typical Spanish, and three Flamenco shows during the night, plus a patio where most people gathered to talk.

After we closed the place at 3am we moved on to other places in the city, but that’s a different story…

The numbers

  • 130 attendees
  • 27 suggested talks
  • 18 sessions
  • 3 Flamenco shows
  • 37 tortillas
  • 37 serrano ham & cheese plates
  • 44 olive plates
  • 945 beers (310 liters, almost 2.5 liters per person!)

The sponsors

Thanks to all the sponsors (Klicap, Extrema Sistemas, Atlassian, Deiser, Escuela de Groovy, Tropo, Autentia) for their collaboration and for helping make this event so successful!

Photos

Photo captions: Starting · La Carboneria · La Carboneria se esta calentando (La Carboneria is heating up) · Barcamp party · Flamenco at the Apache Barcamp party · Flamenco at the Apache Barcamp Spain

More photos on Flickr

Tagged photos on Flickr

Migrating from VMWare Fusion to VirtualBox: networking

In my previous post on migrating Windows 7 to VirtualBox there were a couple of things missing in the networking configuration to make VirtualBox work the same way VMWare does by default.

In VMWare you can just connect to the IP of the guest; the host is by default configured with new virtual network adapters that route the packets to the guest.

In VirtualBox, however, guests are by default configured with NAT to allow internet access from the guest through the host, but NAT doesn’t allow connecting from the host to the guest directly; you have to forward each port on the host that you want redirected to the guest (which is a PITA).
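If you do stay with NAT, individual ports can be forwarded from the command line with VBoxManage; a quick sketch (the VM name "Windows 7" and the ports are just examples):

# Forward host port 5589 to the guest's RDP port on the first NAT adapter
VBoxManage modifyvm "Windows 7" --natpf1 "rdp,tcp,,5589,,3389"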

There are two options to achieve the same result that you had in VMWare:

  1. Change the Network Adapter from NAT to Bridged Adapter. The drawback is that it needs to bridge to a specific hardware adapter (ethernet or airport), so it won’t work if you switch connections.
  2. Add a new Host-Only Network Adapter. First, under VirtualBox global preferences – Network, add a Host-Only network. That will create a virtual interface vboxnet0 on your host machine, where you can customize the IP ranges and DHCP server. Then in the VM settings, under Network, enable Adapter 2 attached to a Host-Only adapter and choose the virtual adapter, vboxnet0 (the equivalent VBoxManage commands are sketched below).
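If you prefer the command line, the same host-only setup can be scripted with VBoxManage; a sketch, assuming the VM is called "Windows 7" (adjust the name and IP ranges to your case):

# Create the host-only interface (vboxnet0) and assign it an IP on the host
VBoxManage hostonlyif create
VBoxManage hostonlyif ipconfig vboxnet0 --ip 192.168.56.1

# Optionally run a DHCP server on the host-only network
VBoxManage dhcpserver add --ifname vboxnet0 --ip 192.168.56.100 --netmask 255.255.255.0 --lowerip 192.168.56.101 --upperip 192.168.56.254 --enable

# Attach the VM's second network adapter to the host-only network
VBoxManage modifyvm "Windows 7" --nic2 hostonly --hostonlyadapter2 vboxnet0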

I chose option 2 and it works as expected; I just need to connect to 192.168.56.101 (the default IP assigned to the first VM) to get to the guest.

Issues

The Windows 7 guest gets its IP from DHCP but for some reason it does not get the default gateway. Windows won’t let you assign the network to a Home or Work group without a default gateway, and therefore Windows Firewall will block incoming connections. The default gateway needs to be added by hand to the TCP/IP network configuration (e.g. 192.168.56.100), and then you can assign the network to Home/Work and the firewall will allow traffic.
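If you’d rather script it than click through the network dialogs, something like this from an elevated command prompt in the guest should do it (the adapter name and addresses are assumptions, match them to your setup):

rem Static IP, netmask and default gateway on the guest's host-only adapter
netsh interface ip set address "Local Area Connection 2" static 192.168.56.101 255.255.255.0 192.168.56.100 1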

Migrating Windows 7 from VMWare Fusion to VirtualBox

I recently had to start using Windows to connect to a client’s VPN network. Their VPN solution only works (or is only supported) on Windows, plus I needed to test a few things on Internet Explorer 😦

Previously I had used Amazon EC2 whenever I needed to use Windows, to avoid license costs, 20GB of my drive wasted and the CPU/RAM overhead, but this time there was no way out: it didn’t seem cool to store the VPN credentials on a public cloud instance, although it’s probably just as safe. So I bought Windows 7 online, downloaded the 3GB ISO image and VMWare Fusion for OS X, which happens to have a 30-day trial. I got Win 7 working there with no problems, which is more than I can say about the VPN setup.

Then I got good recommendations to try VirtualBox, which is both open source and free as in beer, and I was glad to see there are ways to easily move your VMWare images to VirtualBox.

Step by step instructions

To move the Windows 7 image to VirtualBox 4.1 I just needed to follow the steps below (a rough VBoxManage equivalent is sketched after the list):

  1. Uninstall the VMWare tools and shutdown windows
  2. Copy the disk files from VMWare image to a new folder
    1. In Documents/Virtual Machines right click on the image, Show package contents
    2. Copy all *.vmdk files to a new folder
  3. Create a new VirtualBox machine with the same characteristics
    1. Make sure you choose Windows 7 64 bits if that’s what you used
    2. On Virtual Hard Disk, choose the main vmdk file you copied in the previous step (although you’ll need to change the default storage config later)
    3. Customize created VM settings:
      1. General: Windows 7 64 bits
      2. System/Motherboard: set the same amount of base memory as the VMWare one
      3. System/Motherboard: Enable IO APIC
      4. Storage: by default VirtualBox attaches the disk to a SATA controller, but you have to remove the SATA controller and use an IDE PIIX4 controller without host I/O cache, attaching the vmdk as primary master IDE; leave the CD/DVD drive as is
  4. Boot the VirtualBox VM and follow any prompt to restart Windows to install new devices
  5. On the VirtualBox Devices menu, click on install guest additions
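For reference, a rough VBoxManage equivalent of the VM creation and storage steps above (the VM name, memory size and vmdk path are made up, match them to your copied image):

# Create and register a 64-bit Windows 7 VM
VBoxManage createvm --name "Windows 7" --ostype Windows7_64 --register

# Same base memory as the VMWare VM, with IO APIC enabled
VBoxManage modifyvm "Windows 7" --memory 2048 --ioapic on

# IDE PIIX4 controller without host I/O cache, vmdk attached as primary master
VBoxManage storagectl "Windows 7" --name "IDE" --add ide --controller PIIX4 --hostiocache off
VBoxManage storageattach "Windows 7" --storagectl "IDE" --port 0 --device 0 --type hdd --medium Windows7.vmdk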

Troubleshooting

Several things went wrong before I got it working, so here they are just in case you have the same problems:

  • Stuck at the “Windows is loading files” black screen, rebooting continuously: make sure you have selected 64-bit Windows and enabled IO APIC
  • Blue screen of death: remove SATA, SCSI and any other non-IDE controllers from Storage, attach the vmdk to IDE and make sure PIIX4 is selected
  • Windows needs to be repaired / cannot repair Windows: same as the previous one, make sure the disk is attached to IDE

Apache Barcamp Spain

October 8th: that’s the date for the first Apache Barcamp Spain. And the place? After the Oxford, Sydney and ApacheCON barcamps, the next stop is Seville!

Friday Evening

For those arriving on Friday there will surely be time for tapas & drinks. We’ll be updating the website as the date gets closer.

Saturday

On Saturday we’ll get together and plan the sessions for the day, pure Barcamp style, and FREE as in free beer.

  • YOU decide what talks are given, in an open format
  • Meet developer stars that have already confirmed their attendance
  • No commercial pitches
  • Networking, networking, networking
  • If you want to present/share something this is your chance, just be convincing and get enough votes to get a time slot in one of the tracks

Do you need more excuses to spend a weekend in southern Spain? Keep reading then…

Saturday Evening

You’ll get introduced to the Spanish fiesta by locals, you don’t wanna miss this one, as what happens in Spain… well you won’t remember when you wake up on Sunday anyway 🙂

More info

Working in a distributed team

Reading this post from James Governor, No Need to Commute, Ever, which he started from a Genuitec job offer on Twitter, I felt like writing a bit about my experience working on distributed teams.

@genuitec No need to commute, ever. See more of your family, work with talented people: Genuitec is growing, developers apply today

That is exactly the Open Source community model. Back 7 years ago or so, it worked pretty well for me on the OSS projects I participated in. When I later joined Mergere, where we provided services on top of Apache Maven, it was pretty clear that if we wanted the best people we’d need to hire them wherever they were, and the advantage with OSS contributors is that there’s no need for a resume: you can see exactly what their work is like. So there were people in the team working in Los Angeles, Sydney, Paris, Florida, the Philippines,… a bit painful when we wanted to get everybody together at the same time, as we were spread all across the world, but that also ensured that the number of meetings and their length were reduced, as everybody made an effort to have offline communication in their own best interest.

There’s a big difference between working remotely and a distributed team though. When some of the team members work remotely but a number of them don’t, you have an issue: there are gonna be interactions that happen in person that are not gonna make it to the mailing lists, issue tracker, irc,… But when the whole team is distributed (or most of it anyway) all the communication flows through the same channels, with the added plus that everything is documented and you can go back to previous conversations.

Working at the GooglePlex is nice, sure, but imagine what you could do with all those hours spent commuting to work: the ability to work from anywhere, eat at home, see your family more often,… does free food make up for that?

And what’s the advantage for the companies? You can reduce expenses, but more importantly you can offer employees something they won’t get in most other companies: who wouldn’t prefer working remotely to having to go to an office? And you get access to great people all over the world instead of in a specific area.

Just make sure you take some of the cost savings and use them to get the whole team together as soon as possible, and for a few days from time to time. Getting beers together does help human interactions 🙂

Introduction to Amazon Web Services Identity and Access Management

Using AWS Identity and Access Management (IAM) you can create separate users and permissions to use any AWS service, for instance EC2, and avoid giving other people your Amazon username, password or private key.

You can set very granular permissions on users, groups, specific resources, and combinations of them. It can become really complex fast! But there are several very common use cases that IAM is useful for, for instance an AWS account shared by a team of developers.

Getting started

You can go through the Getting Started Guide, but I’ll save you some time:

Download the IAM command line tools

Store your AWS credentials in a file, e.g. ~/account-key

AWSAccessKeyId=AKIAIOSFODNN7EXAMPLE
AWSSecretKey=wJalrXUtnFEMI/K7MDENG/bPxRfiCYzEXAMPLEKEY

Configure environment variables

export AWS_IAM_HOME=<path_to_cli>
export PATH=$AWS_IAM_HOME/bin:$PATH
export AWS_CREDENTIAL_FILE=~/account-key

Creating an admin group

Once you have the IAM tools set up, the next step is to create an Admins group where you can add yourself:

iam-groupcreate -g Admins

Create a policy in a file, e.g. MyPolicy.txt

{
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "*",
      "Resource": "*"
    }
  ]
}

Upload the policy

iam-groupuploadpolicy -g Admins -p AdminsGroupPolicy -f MyPolicy.txt

Creating an admin user

Create an admin user with

iam-usercreate -u YOUR_NAME -g Admins -k -v

The response looks similar to this:

AKIAIOSFODNN7EXAMPLE
wJalrXUtnFEMI/K7MDENG/bPxRfiCYzEXAMPLEKEY
arn:aws:iam::123456789012:user/YOUR_NAME
AIDACKCEVSQ6C2EXAMPLE

The first line is your Access Key ID; the second line is your Secret Access Key. You need to save both.

Save your Access Key ID and your Secret Access Key to a file called for instance ~/YOUR_NAME_cred.txt. You can use those credentials from now on instead of the global AWS credentials for the whole account.
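The credentials file follows the same format as the account-wide one, with the values returned by iam-usercreate, for instance:

AWSAccessKeyId=AKIAIOSFODNN7EXAMPLE
AWSSecretKey=wJalrXUtnFEMI/K7MDENG/bPxRfiCYzEXAMPLEKEY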

export AWS_CREDENTIAL_FILE=~/YOUR_NAME_cred.txt

Creating a dev group

Let’s create an example dev group where the users will have only read access to EC2 operations.

iam-groupcreate -g dev

Now we need to set the group policy to allow all EC2 Describe* actions, which are the ones that allow users to see data but not change it. Create a file MyPolicy.txt with these contents:

{
  "Statement": [
    {
      "Sid": "EC2AllowDescribe",
      "Action": [
        "ec2:Describe*"
      ],
      "Effect": "Allow",
      "Resource": "*"
    }
  ]
}

Now upload the policy

iam-groupuploadpolicy -g dev -p devGroupPolicy -f MyPolicy.txt

Creating dev users

To create a new AWS user under the dev group

iam-usercreate -u username -g dev -k -v

Create a login profile for the user to log into the web console

iam-useraddloginprofile -u username -p password

The user can now access the AWS console at

https://your_AWS_Account_ID.signin.aws.amazon.com/console/ec2

Or you can make life easier by creating an account alias

iam-accountaliascreate -a maestrodev

and now the console is available at

https://maestrodev.signin.aws.amazon.com/console/ec2

About Policies

AWS policy files can get really complex. The AWS Policy Generator will help as a starting point and to see what actions can be used, but it won’t help you make policies easier to read (using wildcards) or apply them to specific resources. Amazon could have provided a better generator tool allowing you to choose your own resources (users, groups, S3 buckets,…) from an easy-to-use interface, instead of having to look up all sorts of crazy AWS identifiers. Hopefully they will provide a comprehensive tool as part of the AWS Console.
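As an illustration of scoping a policy to a specific resource, something like this (the bucket name is made up) would only allow listing the contents of one S3 bucket:

{
  "Statement": [
    {
      "Sid": "AllowListMyBucket",
      "Action": [
        "s3:ListBucket"
      ],
      "Effect": "Allow",
      "Resource": "arn:aws:s3:::my-bucket"
    }
  ]
}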

There is more information available at the IAM User Guide.

Update

Just after I wrote this post, Amazon made IAM available in the AWS management console, which makes using IAM way easier.

Finding duplicate classes in your WAR files with Tattletale

Have you ever found all sorts of weird errors when running your webapp because several of the included jar files have the same classes in different versions and the wrong one is picked up by the application server?

Using the JBoss Tattletale tool and its Tattletale Maven plugin you can easily find out if you have duplicate classes in your WAR’s WEB-INF/lib folder and, most importantly, fail the build automatically if that’s the case, before it’s too late and you get bitten in production.

Just add the following plugin configuration to the build/plugins section of your WAR pom. It can also be used for EAR, assembly and other types of projects.

<plugin>
  <groupId>org.jboss.tattletale</groupId>
  <artifactId>tattletale-maven</artifactId>
  <version>1.1.0.Final</version>
  <executions>
    <execution>
      <phase>verify</phase> <!-- needs to run after WAR package has been built -->
      <goals>
        <goal>report</goal>
      </goals>
    </execution>
  </executions>
  <configuration>
    <source>${project.build.directory}/${project.build.finalName}/WEB-INF/lib</source>
    <destination>${project.reporting.outputDirectory}/tattletale</destination>
    <reports>
      <report>jar</report>
      <report>multiplejars</report>
    </reports>
    <profiles>
      <profile>java6</profile>
    </profiles>
    <failOnWarn>true</failOnWarn>
    <!-- excluding some jars, if jar name contains any of these strings it won't be analyzed -->
    <excludes>
      <exclude>persistence-api-</exclude>
      <exclude>xmldsig-</exclude>
    </excludes>
  </configuration>
</plugin>
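With that in place, a plain mvn verify should generate the jar and multiplejars reports and, thanks to failOnWarn, fail the build whenever the same class shows up in more than one jar under WEB-INF/lib.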

You will need to add the JBoss Maven repository to your POM repositories section, or to your repository manager. Make sure you use the repository that only contains JBoss artifacts or you may experience conflicts between artifacts in that repo and the Maven Central repo.

Adding extra repositories is a common source of problems and makes builds longer (all repos are queried for artifacts). What I do is add an Apache Archiva proxy connector with a whitelist entry for org/jboss/** so the repo is only queried for org.jboss.* groupIds.

<repository>
  <id>jboss</id>
  <url>https://repository.jboss.org/nexus/content/repositories/releases</url>
  <releases>
    <enabled>true</enabled>
  </releases>
  <snapshots>
    <enabled>false</enabled>
  </snapshots>
</repository>

New challenges from DevOps: development cycle for your infrastructure

One of the main ideas behind DevOps adoption is the concept of “infrastructure as code”. Tools like Puppet or Chef allow you to programmatically define your infrastructure, the provisioning of your servers: what packages are installed, what the content of each file is,…
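As a tiny illustration (package, version and file are made up), a Puppet manifest describing part of a server can be as simple as:

# Pin a package to a known version and manage a file's content
package { 'openssl':
  ensure => '1.0.0-10',
}

file { '/etc/motd':
  ensure  => file,
  content => "Managed by Puppet, do not edit by hand\n",
}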

If server provisioning is a key point in operations, then code management becomes key too once you start coding your servers. You need source control for your infrastructure, you need tags, versioning, dependencies between components,… You need development, testing, QA, release,… for your infrastructure!

Imagine an environment where you have some server stack running in production using Puppet, with a manifest that defines the packages and files on that server, and many servers running the same configuration. That Puppet definition must be in source control.

Now a security fix or a new version of a package must be installed on all the servers. Do you just want to change the manifest and push it out to all the running servers? Doesn’t sound like a great idea, does it? Hey, we have been tuning development best practices over the years for use cases just like this one.

What you want to do is create a new branch where you can make that change and test it on some server that is not in production; let’s call it the development environment, original isn’t it?
The change works as expected and your app still works, great! Now you can merge that branch of the Puppet manifests into trunk, along with changes possibly made by other people, which at some point you will want to test together, in a production-like environment, maybe with several servers in a cluster, load balancing, etc., and, very importantly, with the next version of the application that is going to be deployed. You create a new tag and version to be able to identify it later, and deploy to that environment; let’s call it QA or staging.
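Nothing exotic is needed to support this cycle; a sketch, assuming the manifests live in git (branch, tag and file names are made up):

# Branch, change, and dry-run the change against a development box
git checkout -b openssl-security-fix
# edit manifests/site.pp to bump the package version, then
git commit -am "openssl security fix"
puppet apply --noop manifests/site.pp   # shows what would change without applying it

# Once it works, merge back and tag so you know exactly what runs where
git checkout master
git merge openssl-security-fix
git tag infra-1.4.2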

What all this cycle allows you to do is clearly define what is running in each environment, using versions; easily find issues between deployments, using source control; and roll back to a known working configuration if needed.

After all, if you deal with infrastructure as code you should use code development best practices, and you’ll get the same benefits.

Javagruppen 2011: Build and test in the cloud slides

Last week I spent some good days in Denmark at the Javagruppen annual conference, as I mentioned in a previous post. It’s a small conference, which lets you cover any questions the attendees have and select what you talk about based on their specific interests.

I talked about creating an Apache Continuum + Selenium grid on EC2 for massively multi-environment and parallel build and test. You can find the slides below, although it’s mostly a talk/visual presentation.

The location was great, in a hotel with a spa in Jutland, and the people, the other speakers included, were very nice too. My advice: go to Denmark, but try to do it in summer 🙂 I’m sure it makes a difference, although it’s pretty cool to be in a hot tub outside at 0C (32F).

And you can find some trip pictures on Flickr.

Nyhavn panorama

GPG, Maven and OS X

GPG on the Mac has always been quite an issue: several choices to install and hard to configure. Now it seems that the native GPG tools for OS X are back to life at the GPGTools project, providing a single easy-to-use installer.

GPGTools is an open source initiative to bring OpenPGP to Apple OS X in the form of a single installer package

So I installed the package, logged out and in again for the PATH changes to take effect, and got the agent up and running by executing

gpg-agent --daemon

(It will be automatically started when you restart)

Now, to configure Maven to use this GPG2 version and the GPG agent, I added a profile to my ~/.m2/settings.xml:

    <profile>
      <id>gpg</id>
      <activation>
        <activeByDefault>true</activeByDefault>
      </activation>
      <properties>
        <gpg.useagent>true</gpg.useagent>
        <gpg.executable>gpg2</gpg.executable>
      </properties>
    </profile>

This way the agent only prompts for the GPG key password once for each session, and Maven uses the right gpg executable.
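With this profile active, any build that signs artifacts (for instance mvn verify on a project that binds the maven-gpg-plugin to the verify phase) should trigger the agent’s passphrase dialog just once and reuse the cached passphrase from then on.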