From the developer's point of view, several kinds of tools are involved in the source-to-deploy process:
- Source control management tools: Subversion, Git, Mercurial, Perforce,…
- Build tools: Maven, Ant, Ivy, Buildr, Gradle, Rake,…
- Continuous Integration tools: Continuum, Jenkins, Hudson, Bamboo,…
- Repository (Artifact) management tools: Archiva, Nexus, Artifactory,…
When everything is wired together, we can have a CI setup that automatically builds changes from the SCM as they are committed, deploys the result of the build to an artifact repository, or sends a notification if there is any error. Everything fully automated: a change is pushed to SCM, the CI server kicks in, builds, and runs all sorts of tests (unit, functional, integration,…) while you go off for a sword fight with your coworkers.
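The control flow of that CI job can be sketched in a few lines of shell. Everything here is hypothetical: the real commands (git, mvn, mail,…) are stubbed out with echo so only the commit-build-publish-or-notify logic is shown.

```shell
#!/bin/sh
# Sketch of what the CI server does on every commit (all names hypothetical).
# Real steps (git clone, mvn verify, mail, mvn deploy) are replaced by echo
# stubs so the control flow is visible and runnable anywhere.

checkout()       { echo "checkout: fetching latest revision from SCM"; }
build_and_test() { echo "build: compiling and running unit/integration tests"; }
publish()        { echo "publish: deploying artifact to repository manager"; }
notify()         { echo "notify: mailing the team about the broken build"; }

run_pipeline() {
    checkout
    if build_and_test; then
        publish          # success: the artifact lands in the repository
    else
        notify           # failure: humans get told, nothing is deployed
        return 1
    fi
}

run_pipeline
```

The point is that the decision of what happens after a commit lives in a script, not in someone's head.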
Now what? Does somebody email the tarball, zipfile,… to the operations team? Oh no, that would be too crude. Just send them the URL to download it… And better yet, send some instructions, a changelog, an upgrade task list,…
What developers do today to specify deployments and target environments is not enough.
Using tools like Maven in the Java world or Bundler in Rubyland you can explicitly list all the dependencies and versions you need. But there are some critical dependencies that are never declared. That alone is just too simplistic.
Installed packages, C libraries, databases, all sorts of OS- and service-level configuration,… That's the next level of dependencies that should be explicitly listed and automated.
For example, think about versions of libc, PostgreSQL, the number of connections allowed, open ports, the open file descriptor limit,…
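Those OS-level dependencies can be made just as explicit and machine-checkable as a Gemfile or a pom.xml. A minimal sketch, with hypothetical thresholds and a placeholder command list (a real one would include e.g. psql for the PostgreSQL client):

```shell
#!/bin/sh
# Sketch: OS-level prerequisites expressed as checks a machine can run
# on every target host, instead of prose in a deploy document.
# Thresholds and command list are hypothetical examples.

MIN_FDS=256            # required open file descriptor limit (example value)
REQUIRED_CMDS="sh ls"  # binaries the app expects on the host (e.g. psql)

fail=0

# 1. Open file descriptor limit
fds=$(ulimit -n)
case "$fds" in
    unlimited) : ;;  # no limit at all satisfies any minimum
    *) [ "$fds" -ge "$MIN_FDS" ] || { echo "FAIL: fd limit $fds < $MIN_FDS"; fail=1; } ;;
esac

# 2. Required binaries present on the PATH
for cmd in $REQUIRED_CMDS; do
    command -v "$cmd" >/dev/null 2>&1 || { echo "FAIL: missing command $cmd"; fail=1; }
done

if [ "$fail" -eq 0 ]; then echo "host OK"; else echo "host NOT ready"; fi
```

Once the requirements are a script instead of an email, they can run in CI against every stage, not just be read once before production.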
From the point of view of the operations team, the list of requirements is long and complex: operating system, kernel version, config files, installed packages,…
And then multiply that by several stages (development, QA, production,…) that most likely won't share the exact same configuration.
Deploying the artifacts produced by the development team is always a challenge:
- How do I deploy this?
- Reading the documentation provided by the development team?
- Executing some manual steps?
That is obviously error-prone.
It's nothing new, but it has grown with the proliferation of cloud-based environments, which make it easier and easier to run dozens or hundreds of servers at any point in time. Even if you know how to deploy to one server, how do you deploy to all those servers? What connections need to be established between servers? How is it going to affect the network?
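Even the naive answer, repeating one well-understood deployment across a fleet, only works if that deployment is a script. A minimal sketch with hypothetical hostnames, paths, and artifact name; the DRY_RUN flag prints the commands instead of executing them, which is all this illustration does:

```shell
#!/bin/sh
# Sketch: the same single-server deployment repeated over a list of hosts.
# Hostnames, user, paths, and artifact are hypothetical. DRY_RUN=1 makes
# the script print each command rather than run scp/ssh for real.

HOSTS="web01.example.com web02.example.com web03.example.com"
ARTIFACT="app-1.2.3.tar.gz"
DRY_RUN=1

run() {
    if [ "$DRY_RUN" = "1" ]; then
        echo "would run: $*"      # dry run: show the command only
    else
        "$@"                      # real run: execute it
    fi
}

for host in $HOSTS; do
    run scp "$ARTIFACT" "deploy@$host:/opt/app/releases/"
    run ssh "deploy@$host" "/opt/app/bin/restart.sh"
done
```

A loop like this answers "how do I deploy to all those servers?" for a handful of hosts, but it is exactly the pattern that configuration management and orchestration tools exist to replace at scale.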