Forking on GitHub after cloning

When you want to change or contribute to somebody else’s project that you don’t have commit access to, you usually fork it on GitHub and then work against your own read+write fork.

What if you first cloned their repo and made local commits that you now want to contribute? You don’t want to mess with patches, so here’s what I did to contribute a small fix to the example project from Apache Maven 2 Effective Implementation, a great book by Brett Porter and Maria Odea Ching.

  1. Fork the project repo at GitHub (at https://github.com/brettporter/centrepoint/)
  2. In your local clone, rename the remote origin to upstream
  3. Add a new remote called origin pointing to your read+write fork
  4. Change the master branch remote to origin instead of upstream
  5. Fetch the new origin remote and push your changes
git remote rename origin upstream                # step 2: keep the original repo as upstream
git remote add origin git@github.com:carlossg/centrepoint.git   # step 3: your fork
git fetch origin
git branch --set-upstream master origin/master   # step 4: track your fork (newer git: git branch --set-upstream-to=origin/master master)
git push origin                                  # step 5
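
To double-check the setup, git remote -v should now list origin pointing at your fork and upstream pointing at the original repository. From then on git push sends your commits to the fork, while git fetch upstream keeps you up to date with the original project.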

Japan

Spending a few days in Japan: Kyoto, Tokyo and surroundings. So far really impressive, especially the autumn colors of the trees.
[Photos: Kyoto. More photos in the gallery.]

InterOp New York and ApacheCON Atlanta

One week ago I was at InterOp New York, where we announced the release of Maestro 3. We talked with conference attendees and analysts about the product and our ideas behind it, and we got some coverage of Maestro 3, including a video interview with InformationWeek. The interview covers some of the ideas behind the product (not all of them, given the format and duration), but I thought it was worth posting anyway.

We are running webinars this Wednesday the 3rd and the 10th, showcasing how Maestro 3 may help organizations improve their build-test-release-deploy process. If you are interested, just check the times and sign up on our Build Through Deploy in a Single Interface – Introduction to Maestro 3 page.

Tomorrow I’ll be in Atlanta for ApacheCON, along with Brett Porter, who is running a Maven training today. I’ll be at the BarCamp on Tuesday and the Maven Meetup on Wednesday for sure. By the way, the BarCamp and meetups are free for everybody, so if you are in the Atlanta area you can just come by even if you don’t attend the show.

At Interop NYC, Maestro Dev showcased its latest Maestro 3 app dev choreography environment, a cloud-based solution that stitches together the discrete steps (build, test, deploy, etc.) of application development, using existing tools for each step

Cloud Computing opportunities in the development (build-test-deploy) space

You hear the word “cloud” everywhere: running applications in the cloud, scaling with the cloud, and so on. You hear it less often from the development lifecycle perspective (code, commit, test, deploy to QA, release, etc.), yet the cloud brings fundamental changes to that side too.

The scenario

If you belong to, or manage, a group of developers, you are doing at least some sort of automated builds with continuous integration: you have continuous integration servers building your code on fixed schedules or, not as often as you would like, when developers commit changes. As the number of projects grows you add more servers for the new projects, mixing and matching environments for different needs (Linux, Windows, OS X,…)

The problem and the solution

The architecture we use for our Maestro 3 product is composed of one server that handles all the development lifecycle assets. Behind the scenes we use proven open source projects: Apache Continuum for distributed builds, Apache Archiva for repository management, Sonar for reporting, and Selenium for multi-environment integration and load testing. On top of that we add the Morph mCloud private cloud solution, which is also based on open source projects such as Eucalyptus and Puppet.

We have multiple Continuum build agents doing builds, and multiple Selenium agents for webapp integration testing, as well as several application servers for continuous deployment and testing.

  • limited capacity
    • problem: your hardware is limited, and provisioning and setting up new servers takes a considerable amount of time
    • solution: assets are dynamic; you can spin up new virtual machines when you need them and shuffle infrastructure around in a matter of minutes, with just a few clicks in the web interface. The hybrid cloud approach means you can start new servers in a public cloud, such as Amazon EC2, if you really need to
  • capacity utilization
    • problem: you need to set up less active projects on the same servers as more active ones to make sure servers are not under- or over-utilized
    • solution: infrastructure is shared across all projects. If one project needs it more often than another, it’s there to be used
  • scheduling conflicts
    • problem: at specific times, such as releases, you need to stop automatic jobs to ensure resources are available for those builds
    • solution: smart queue management can differentiate between build types (e.g. continuous builds vs. release builds) and prioritize accordingly; see the sketch after this list
  • location dependence
    • problem: you need to manage the infrastructure, knowing where each server is and what it is building
    • solution: a central view of all the development assets for easier management: build agents, test agents and application servers
  • continuous growth
    • problem: new projects are being added while you are trying to manage and optimize your current setup
    • solution: because infrastructure is shared, adding new projects is just a matter of scaling the cloud out, without assigning infrastructure to specific projects
  • complexity in process
    • problem: multiply all of the above by the number of stages in your promotion process: development, QA, staging, production
    • solution: you can keep independent networks in the cloud while sharing resources, such as the virtual machine templates for your stack
  • long time-to-market
    • problem: the transition from development to QA to production is a pain point because releases and promotion are not automated
    • solution: compositions (workflows) let you design and automate the steps from development to production, including manual approvals
  • complexity in organization
    • problem: in large organizations, multiply all of that again by the number of separate entities, departments or groups that have their own structure
    • solution: with self-provisioning you can assign quotas so developers or groups can start servers from prebuilt templates as they need them, in a matter of minutes
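
To make the smart queue idea concrete, here is a minimal Ruby sketch of priority-aware build scheduling. It is illustrative only, not Maestro’s actual implementation: the BuildQueue class and the build type symbols are hypothetical, and the point is simply that release builds can jump ahead of continuous builds while order is preserved within each type.

class BuildQueue
  PRIORITY = { :release => 0, :continuous => 1 }  # lower value runs first

  def initialize
    @jobs = []
    @counter = 0  # arrival order, to keep FIFO within the same build type
  end

  def enqueue(project, type)
    @jobs << [PRIORITY.fetch(type), @counter += 1, project]
  end

  def next_job
    job = @jobs.min  # arrays compare element by element: priority, then arrival
    @jobs.delete(job)
    job && job.last
  end
end

queue = BuildQueue.new
queue.enqueue("webapp", :continuous)
queue.enqueue("core", :release)
puts queue.next_job  # => core, the release build runs first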

Why a private cloud?

  • cost effectiveness: development infrastructure runs continuously, and global development teams make use of it 24×7
  • bandwidth usage: the traffic between your source control system and the cloud can be expensive, because the cloud is continuously fetching code to build
  • security restrictions: most companies don’t like their code leaving their firewall, and companies that need to comply with regulations (e.g. PCI) have strong requirements on external networks
  • performance: in a private cloud you can optimize the hardware for your specific scenario, reducing the number of VMs needed for the same job compared to public cloud providers
  • heterogeneous environments: if you need to develop for different environments, chances are that a public cloud service won’t be able to provide all of them

The new challenges

  • parallelism: you need to know the dependencies between components to know what needs to be built before what (see the sketch after this list)
  • stickiness: how to take advantage of the state of the agents to run builds on the same ones when possible; for instance, agents that built a project before can do a source control update instead of a full checkout, or may already have the dependencies on the filesystem
  • asset management: with an increasing number of services running, stopping and starting as needed, you need to know what’s running and where, not only at the hardware level but at the service level: build agents, test agents and deployment servers
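
To illustrate the parallelism point, here is a small Ruby sketch, again hypothetical rather than Continuum’s actual scheduler, that derives build rounds from a module dependency graph; every module in the same round could be built in parallel on idle agents.

# Hypothetical module graph: each entry lists the modules it depends on.
DEPS = {
  "core"   => [],
  "api"    => ["core"],
  "webapp" => ["api", "core"],
  "docs"   => []
}

def build_rounds(deps)
  built = []
  rounds = []
  until built.size == deps.size
    # a module is ready when all of its dependencies have been built
    ready = deps.keys.select { |m| !built.include?(m) && (deps[m] - built).empty? }
    raise "dependency cycle detected" if ready.empty?
    rounds << ready
    built.concat(ready)
  end
  rounds
end

build_rounds(DEPS).each_with_index do |round, i|
  puts "round #{i + 1}: #{round.join(', ')}"
end
# round 1: core, docs
# round 2: api
# round 3: webapp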

The new vision

  • you can improve continuous integration as developers check in code, because the barrier to adding new infrastructure is minimal (given enough hardware in your cloud, or if you use external cloud services), which means problems are found sooner
  • developers have access to the infrastructure they need to do their jobs; for instance, they can start an exact copy of the production environment from a cloud template to fix an issue, have it up and running in minutes and tear it down at the end, without incurring high hardware costs
  • less friction and easier interaction with IT, as developers can self-provision infrastructure, swapping the virtual machines they no longer need for the ones they do

By leveraging the cloud you can solve existing problems in your development lifecycle, and at the same time do things you would not even have considered before because the technology made them complicated or impossible. Definitely something worth checking out for large development teams.

Maestro 3 is going to be released this week at InterOp New York (come over and say hi if you are around), but we are already demoing the development snapshots to clients and at conferences like JavaOne.

Maven 3.0 released!

Maven 3.0 is finally out, after a long, long time in the making!

What’s new?

Behind the scenes a LOT has changed, but as a Maven user or plugin developer you shouldn’t see many differences. In particular, backwards compatibility was a must for this release.

New features include:

  • Better POM validation and warning/error messages. Pay attention to the beginning of the build, where you can see notices about your POM configuration.
  • Parallel builds. Use several threads to build multi-module projects; Maven analyzes the dependencies between modules to determine the ordering (see the note after this list).
  • Stability and predictability. Changes in classloading, dependency ordering and multi-module building make builds behave more consistently.
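
Parallel builds are driven from the command line; for example, mvn -T 4 clean install should build with four threads, and mvn -T 1C with one thread per CPU core. The feature is marked experimental in 3.0, so check that your plugins are thread-safe before relying on it.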

Changes include:

  • No more site plugin as you know it. Configure the new Maven Site Plugin or, better, install Sonar (highly recommended).
  • profiles.xml is no longer used
  • Maven 1 repository layouts are no longer supported

Read all the release notes and compatibility notes.

Upgrade!

  1. Download Maven 3.
  2. Check compatibility notes.
  3. Upgrade the plugins to compatible versions if needed.
  4. Configure the new Maven Site plugin, or move to Sonar.

See other notes on Maven 3 from Brett Porter, and if you are going to ApacheCON, he will be giving a training session covering Maven 3.0 too.

Moved the blog to http://blog.carlossanchez.eu

If you are reading this, you DON’T need to change anything: you are already using my FeedBurner RSS feed, which has been automatically updated to the new location. Just update any web links to point to the new location at http://blog.carlossanchez.eu.

This is my new home. I have moved from my JRoller weblog to a WordPress.com hosted blog at http://blog.carlossanchez.eu, and the RSS feed is still at FeedBurner: http://feeds.feedburner.com/carlossanchez.

I decided to move because JRoller doesn’t seem to be maintained anymore and WordPress has a lot more features. For reference, I just followed the migration instructions by theholyjava. I had to fix some code/pre tags and change the SlideShare embedded presentations. Then I bought the WordPress.com subdomain mapping for $12/year to have the blog under my personal carlossanchez.eu domain.

Happy reading!

Eclipse IAM WTP support, now EARs too

I recently had some time to spend in Eclipse IAM, working on improving the WTP support.

Version 0.11.0 already had good support for WAR projects, including WAR overlays (which were a bit tricky to implement in Eclipse). Now the latest builds of the upcoming 0.12.0 version add EAR support.

You can import your Maven EAR projects and Eclipse will recognize the Maven-generated application.xml and automatically configure the dependencies on the other WAR projects open in the workspace, with no extra configuration on your part. And from the usual WTP "Run on Server" wizard you can run the EAR project, and all its associated WARs, in your favorite application server.

You can install the development builds of 0.12.0 from http://q4e.googlecode.com/svn/trunk/updatesite-dev/ until it is released; check the installation instructions for requirements or if you run into issues. For help and feedback, we have a newsgroup at Eclipse.

Eclipse IAM 0.11.0, Archiva 1.3, Continuum 1.3.5

This is definitely release week! After Archiva 1.3 and Continuum 1.3.5 beta, I’ve just pushed the new release of Eclipse IAM, 0.11.0.

This new version includes several notable improvements.

The P2 update site is published at http://q4e.googlecode.com/svn/trunk/updatesite-iam/

Ganymede users (Eclipse 3.4) should make sure they have added all the update sites listed in the installation instructions. If P2 complains about missing dependencies, check the update sites again.

Adopters of the latest and greatest Eclipse Galileo can install from the update site as usual.

If you are upgrading from Q4E 0.8.1 or earlier, some extra steps must be followed.

The list of changes is available on the Eclipse wiki.

Note that this is not an official Eclipse release; we are publishing it so our users can enjoy the progress made while we complete the move to the foundation and clear all the IP issues involving the Maven embedder.

Continuum-ruby

continuum-ruby is a Ruby library to interact with Apache Continuum, using the XML-RPC interface and providing access to the working copy directories. continuum-ruby is now available in the Continuum sandbox.

More info can be found in the Continuum XML-RPC interface documentation.

Example

continuum = Continuum::Continuum.new("my.continuum.host", 8080, "admin", "password", "/continuum")

# XML-RPC interface: trigger a build and check the result
xml_rpc = Continuum::XmlRpc.new(continuum)
ok, result = xml_rpc.build_project(1)
error = Continuum.parse_error(result) if !ok

# getting working copy files for project 1
working_copy = Continuum::WorkingCopy.new(continuum)
test_results = working_copy.get(1, "target/surefire-reports", "emailable-report.html")
files = working_copy.dir(1, "target")
files.each do |file|
  file_content = working_copy.get(1, "target", file)
end
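
As the example shows, each XML-RPC call is assumed to return a status flag plus the response payload, so callers can test ok and translate failures with Continuum.parse_error; the working copy accessors take the project id, a path relative to the working copy root and, optionally, a file name.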

ApacheCON US 2009 Oakland

I’ll be at ApacheCON in Oakland next week, celebrating the 10th anniversary of the ASF. Unfortunately I’m not speaking this time, but I will be hanging around at the BarCamp Apache (Monday and Tuesday), the Hackathon (Monday and Tuesday too) and the Maven Meetup (Tuesday night), doing the usual socializing and face-to-face meetings. These are all free events that you can attend.

Brett Porter is running a Maven training course on Monday, November 2. You still have time to sign up, and plenty of other Apache folks will be around. Leave a comment or ping me if you want to meet at some point.

BTW there’s an interesting new project proposal in the Incubator, the Libcloud project, a client library for interacting with many of the popular cloud server providers. I’ll try to get more details next week too, but it sounds promising.