Drama-Free Version Numbers

Categories: Build, Philosophy

The Wikipedia entry for software versioning is an amusing read, as it is littered with [clarification needed] tags and even a [dubious] tag. The editors who left those tags behind neatly capture my feelings about traditional version numbering schemes – they tend to be unclear, arbitrary, and dubious in their usefulness.


In traditional schemes, version numbers are used to indicate the “importance” of a release. So, increasing the version number from 1.0 to 1.1 indicates a “smaller” release than increasing it to 2.0. The problem with this approach is that the relative “size” of a change or “importance” of a release is a totally subjective measure, and it is hard to keep the scale consistent over time. For example, consider the version numbers that have been used for releases of the .NET framework:

  • .NET 1.0
  • .NET 1.1
  • .NET 2.0
  • .NET 3.0
  • .NET 3.5
  • .NET 4.0
  • .NET 4.5
  • .NET 4.5.1

What is the rationale for this sequence of version numbers? New side-by-side deployments of the framework were introduced with 1.0, 1.1, 2.0, and 4.0. Versions 3.0 and 3.5 were actually just packs of add-on libraries that didn’t change the 2.0 framework. The most recent versions, 4.5 and 4.5.1, are complete in-place replacements of the 4.0 framework. Given that, I would expect 1.0, 1.1, 2.0, and 4.0 to be “major” versions, and the intermediate releases to be “minor” versions, but that isn’t how Microsoft lined things up. In the end, the .NET version numbers convey no useful information beyond “here are some different versions of the framework.”

Version numbers also have a tendency to attract marketing people – as we’ve seen over the unfortunate history of Windows version “numbers” (I don’t think “XP” and “Vista” qualify as numbers). Luckily, our focus is on proprietary enterprise software development, so shrink-wrap marketing concerns should not apply to us.

So, left to our own devices, what do we want from our version numbers?

Version numbers should follow an objective convention that is not subject to judgment calls.

Time spent deciding whether a release warrants an increment of the major version number or just an increment of the minor version number is completely wasted. Instead, adopt a rote algorithm for assigning version numbers to releases. For example: always increment the minor version number when you cut a new release branch, and increment the major number with every 10th branch. Because you are releasing your software frequently, the concept of a “major” release should be anathema to you anyway – the software changes slowly and incrementally with every release.
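
As a minimal sketch of that rote scheme (the function name is my own, for illustration):

```python
def branch_version(branch_number: int) -> str:
    """Rote version scheme: the minor number bumps with every release
    branch, and the major number bumps with every 10th branch."""
    return f"{branch_number // 10}.{branch_number % 10}"

# The 1st branch is version 0.1, the 10th is 1.0, the 23rd is 2.3.
```

No judgment calls required: the branch count alone determines the version.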

Version numbers should monotonically increase as the code line evolves.

If there are two versions of your software in production, 1.0 and 2.0, it should be safe to assume that 2.0 was created from newer code than 1.0. Likewise, version 1.1 must have been built on code newer than 1.0, but older than 2.0 (though 1.1 may have been built after 2.0 if it is based on a patch to 1.0’s release branch). To help ensure that your version numbers always increase with each build, it is helpful to utilize your build system to automatically integrate build counters and/or source control revision numbers into your numbering convention.

Version numbers should convey useful and relevant information.

Ideally, your version numbers should contain enough information to identify the code used to create the binary and/or the build record in your continuous integration system. This allows anyone on the team to easily identify where a particular binary came from. This ability can come in handy when the need arises to validate deployments or determine the progeny of a “lost” application that you forgot was deployed to production.

My favorite version numbering convention.

I will share with you the convention I like to use – I think it does a good job of hitting the requirements outlined above. The convention utilizes all four version numbers we have at our disposal.

  • The first two numbers are used to denote the date on which the release branch was cut from the main line, in YY.MMDD format. Main line builds have 0.0 as the first two numbers, so that main line builds stick out like a sore thumb if they are deployed to production (which they should not be).
  • The third number increments with each build created on the branch. Most continuous integration systems expose this number as a “build id” or “build counter” of some sort. You can think of this as the “patch” number, as bug fixes patched to the release branch will trigger a new build, and will hence trigger this number to increment.
  • The fourth number is the source control revision number of the code used to generate the build. Using this number is only possible in centralized version control systems where there is a sequentially incrementing revision number (sorry, git users). If you can embed this, it provides yet another hook that identifies the code that went into the build.
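
To make the convention concrete, here is a minimal sketch (the function name and signature are my own illustration) that composes the four numbers described above:

```python
from datetime import date
from typing import Optional

def build_version(branch_date: Optional[date], build_counter: int,
                  scm_revision: int) -> str:
    """Compose a YY.MMDD.counter.revision version string.

    branch_date is None for main line builds, which get 0.0 up front
    so they stick out like a sore thumb if deployed to production.
    """
    if branch_date is None:
        return f"0.0.{build_counter}.{scm_revision}"
    return (f"{branch_date.year % 100}.{branch_date:%m%d}"
            f".{build_counter}.{scm_revision}")
```

A branch cut on May 17, 2013, on its third build at source control revision 1234, would be versioned 13.0517.3.1234.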

This convention is objective and simple to implement, provides a monotonically increasing sequence of numbers, and conveys useful information. To automate it, consider building version “patching” functionality into your build system (it is a simple matter of determining the version number and then overwriting all the AssemblyVersion and AssemblyFileVersion attributes you can find). Alternatively, you can cheat and use TeamCity’s AssemblyInfo Patcher. By automating the version numbering convention via your build system, your developers will no longer have to spend time manually incrementing version numbers in code.
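
As a minimal sketch of that “patching” step (the attribute names are the standard .NET AssemblyInfo ones; the function itself is my own illustration):

```python
import re

# Matches the version string argument of AssemblyVersion("…") and
# AssemblyFileVersion("…") attributes in an AssemblyInfo.cs file.
ASSEMBLY_VERSION_RE = re.compile(r'(Assembly(?:File)?Version\s*\(\s*")[^"]*(")')

def patch_assembly_info(source: str, version: str) -> str:
    """Overwrite every AssemblyVersion/AssemblyFileVersion attribute
    found in the file contents with the computed build version."""
    return ASSEMBLY_VERSION_RE.sub(
        lambda m: m.group(1) + version + m.group(2), source)
```

Run this over every AssemblyInfo.cs in the checkout just before compiling, and no developer ever edits a version number by hand again.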

Don’t let your version numbers get overly dramatic. They’re just not worth the trouble.

Minimize Environmental Dependencies

Hopefully we’re not beating a dead horse here, but as we’ve stated in several other articles, one of your goals with software development should be to minimize environmental dependencies. This goes for everything from the environment required to build your software from source code into deployable binaries, to the requirements of the computers that will run your application.

Many people think that this is just some pie-in-the-sky idea that sounds good on paper but isn’t worth pursuing in practice. “Just install the Microsoft ReportViewer Redistributable on every machine that needs to run my app; it’s just one little installation prerequisite, what’s the problem?” That attitude is a slippery slope that will grant you a free one-way ticket to build & deployment hell if you let it take hold.

To help further drive the point home about why you should always strive to minimize environmental dependencies for your applications, here’s a little horror story about some real world pain and suffering caused by not subscribing to this philosophy.

At one point in my career I worked for a firm that had to maintain several legacy versions of their software due to contractual obligations. Developers would occasionally have to make bug fixes to these legacy versions of the code base, and being on the “build team”, I had to produce the deployable artifacts of this legacy software. At the time, there was only one computer left in the whole company that was capable of compiling this legacy software… and it was literally under someone’s desk in the middle of the office. The fate of the legacy versions of this software relied on a little old PC on the floor.

Naturally, being the good build team that we were, my team and I decided to minimize this single-system risk by cataloging all of the dependencies we could find on this computer that were necessary to build the software and trying to reproduce this “build server” on a virtual machine. Try as we might, we could not reproduce “the little workstation that could”. Several of us tried for weeks, pawing through the installed programs, shoveling through mountains of registry keys… we just couldn’t figure out what was so special about this “build server”. We even reached out to one of the (then remotely working) employees who helped create the original build server, but he couldn’t remember what made it special either.

Then, as fate would have it, the build server started having hardware issues. It was reporting hardware errors in the Event Log left and right until one day it just refused to power on. Our sysadmins said it was a motherboard failure. Normally a motherboard failure on a workstation is just a minor inconvenience – throw it out and swap in a new one. But given that this one magical workstation was critical to creating the release versions of this legacy application, this motherboard failure became a Priority 1 Big Problem™.

So to be able to keep releasing software, this is essentially what we did:

[Image: “This could be your build server”]

I literally had to plug that “build server’s” hard drive into a spare workstation via an external hard-drive adapter and boot off of it just so I could compile the release version of this legacy application.

Obviously this is an extreme case, but hopefully it helps illustrate our point about why you should minimize environmental dependencies for building and running your software. By doing so, you ensure that your application – a.k.a. your organization’s intellectual property – can be recreated from its source code almost indefinitely.

Build Server Best Practices

Categories: Build, Philosophy, Sysadmin

Just about every organization that does in-house software development will inevitably reach the point where it needs a build server. Hopefully, prior to that point, you’ve already managed to either create a well-thought-out build system or at least lay its foundation. Regardless, at some point the development team will need a continuous integration (CI) system to automate builds, especially when multiple developers are touching the same code and components. Just remember: you don’t need a build machine or CI server to have a quality build system.

While the particular CI system you choose is important, that is not the focus for today’s article. Instead, we’re going to lay out some best practices you should be following for the build server you use for hosting your CI system.

Minimize Environmental Dependencies

This is really a concern for your build system, but it’s worth reiterating here. Ideally, your build system should not require anything special to be installed on the machine where it runs, be it your build server or any developer workstation. By minimizing environmental dependencies you can greatly simplify the build-out process for your build servers and even developer workstations. This makes operations’ lives easier when setting up new or additional build servers.

One key technique towards achieving this separation is making sure that all dependencies required to build your applications from source code to deployable binary artifacts live in your source control system and are versioned right alongside the source code itself. This includes your build system as well. Then at build time your CI system simply pulls down the source code from source control along with all required dependencies and fires off your build system. No muss, no fuss. This also makes it easier to ensure that the process the CI system uses to build your software is the same process any developer would use.
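
As an illustration (the paths and tool names here are hypothetical), a repo-local build entry point can derive everything it needs from the checkout location alone, so CI and developers run the identical process:

```python
from pathlib import Path

def build_command(repo_root: Path) -> list:
    """Every tool needed for the build lives in the repository itself,
    versioned alongside the source, so the build command can be derived
    purely from the checkout location -- no machine-specific installs."""
    compiler = repo_root / "tools" / "compiler" / "compile.exe"  # hypothetical
    solution = repo_root / "src" / "MyApp.sln"                   # hypothetical
    return [str(compiler), str(solution)]
```

The CI system and every developer invoke the same entry point against their own checkout; nothing on the host matters beyond having an interpreter for the build script.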

Restrict Access

This can be a touchy subject, but I’ll come right out and say it: developers should not have access to build servers beyond read-only access for logs, artifacts, etc. Your operations staff and/or DevOps engineers should be the only ones allowed to log in interactively to any build server or to change anything on it. This implies that operations staff should have a fairly deep understanding of build systems, CI systems, and the like as well.

This may be too broad a generalization, but most developers are good at getting things to work. That’s what makes them good developers. Unfortunately, when it comes to system administration tasks, they tend to be really bad at making documentation a priority. And as we’ll see in a bit, documentation around build servers is critical.

By restricting access to build servers, you can guarantee you have an independent, theoretically clean environment for building your applications from source code. This helps ensure that builds are repeatable and have not been tampered with. You can even take your process a step further and only release binaries to production that were produced by your CI system on your build server, which should be one of your ultimate goals. By doing so, you can guarantee all production source code was checked in to your source control system.

It’s worth noting that when we say “restrict access” we’re talking about access to the actual host that is running your CI system. Access to the CI system’s own configuration should be handled separately and open to developers in a self service fashion as much as possible.

Document Everything!

From the moment you begin setting up your CI server, you must begin religiously documenting it. Record every detail: what you installed, what version you used, how you configured it, any tweaks you made, any directories you created, etc. Anything you change beyond the base OS installation must be recorded in a well-known document and disseminated amongst your development and operations teams.

Ideally, your build system has already minimized environmental dependencies, so this document should be fairly lightweight. The goal of this documentation is to ensure that your organization can reliably reproduce any of its applications from source code to deployable binary artifacts until the end of time. Be sure to validate the documentation from time to time as well by spinning up new or replacement build servers and making sure they still build your source code as expected.

If you can automate your build server creation process, even better. Just make sure it’s a well defined process first!

Plan For Growth

As you define your process and get more and more projects/applications into your CI system, you’re going to begin to push the limits of your CI system’s underlying host. Treat it like a “tier 2” production service: monitor and track its CPU usage, memory usage, and especially its disk usage. Disk usage doesn’t just mean disk space; it also includes things like disk queue lengths.

A busy CI system can easily chew through disk spindles on your build server. It’s doing a lot of repetitive random access work: pull down lots of little source files from source control to the local disk, compile those files, execute tests, then eventually wipe out that build’s working directory. A typical CI system in an organization with multiple developers committing source code changes multiple times a day will trigger dozens if not hundreds of CI builds. Work with your sysadmins to come up with an optimal disk solution, which will typically involve multiple disk spindles set up in some sort of RAID configuration. Or shoot for SSDs if you can!

Considering how I/O intensive CI systems can be, we’ve had less than desirable results using virtual machines as build servers, but your mileage may vary. Either way, as your build server’s usage grows, think about expanding to using multiple build servers as part of a single build farm. Many CI systems support this concept of build farms with multiple “build agents”.

So in conclusion, some key build server best practices are:

  • Minimize Environmental Dependencies
  • Restrict Access
  • Document Everything!
  • Plan For Growth

With a solid build server running your CI system, you’ll have happy developers and fast, smooth release builds.

Branch for Release (and nothing else)

Branching in source control can be a rather controversial topic. We are unlikely to settle the matter today, but we want to share our philosophy, which is based on some basic principles and some hard-learned lessons.

Why branch?

The motivation behind creating a branch is the desire to isolate a copy of the code from changes being made by other developers. In that sense, the most basic branch is an individual developer’s working copy (we certainly don’t want to work on the same set of shared files). In fact, in distributed version control systems, you must create a “proper” branch (by cloning the repository) before you can create a working copy. From this point, we will exclude “working copy” branches from our discussion, as I think everyone agrees that they are a Good Thing.

Developers create branches for two basic reasons:

  1. Isolating code that is/will be released to production (a release branch)
  2. Isolating code for the purposes of long-term feature development (a feature branch)

In the context of a small to mid-size enterprise, we believe that development teams should regularly make release branches, and never (or perhaps very rarely) create feature branches. We believe in trunk-based development and the branch-for-release pattern.

Why release branches are good

Release branches ensure that you have easy access to the code that you are currently using in production. They allow you to make isolated bug fixes and rapidly roll out “patches” to production without the risk of a fresh rollout from trunk. They are your most important bulwark against “losing” code that your business depends on. As we have discussed before, we strongly encourage following a regular schedule for deployments to ensure that you are getting your newest code in front of your users, gathering feedback, and keeping the agile cycle turning. “Cutting” a fresh release branch from trunk should be a simple procedure (not necessarily an automated one, though — if you find that the manual process is too cumbersome, consider simplifying the process before automating), and you should make it part of your weekly (or bi-weekly, or daily) release routine.

To make a regular release procedure work, consider adopting the following practices:

  • Make sure the entire team knows when the branch will be cut. This is all the more reason to cut on a regular schedule, so that the team internalizes the “rhythm” of the release process.
  • Avoid re-using the same release branch and “merging” changes up to it. This is a recipe for disaster if the merge gets messed up. It is much less risky to “cut” a fresh branch every time, because you are guaranteed a branch that looks just like trunk did at the time of the cut.
  • Adopt a standard naming convention for release branches. Our preference is to use branches that include the application name and the date in YYYY-MM-DD format. In subversion, you would create a directory like MyApp/releases/2013-05-17. A standard convention will help your team know where to go when they need to find a particular release branch.
  • Create a CI build for each new release branch. In the same way that we prefer a fresh branch for each release, a fresh build is a good idea as well. You should keep your old release builds around until software built from them is no longer in production, then retire them.
  • Only release software to production if it was built off a release branch. This may seem obvious, but is the linchpin to a successful branch-for-release workflow. Resist the temptation to throw trunk software out there “just this once”, because doing so will make life harder on everyone who needs to maintain that release.
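
As a tiny sketch of the naming convention above (the function name is my own):

```python
from datetime import date

def release_branch_path(app_name: str, cut_date: date) -> str:
    """Build the release branch location for an application,
    e.g. MyApp/releases/2013-05-17."""
    return f"{app_name}/releases/{cut_date:%Y-%m-%d}"
```

With the path derived mechanically from the application name and cut date, anyone on the team can locate the branch for any release without asking around.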

Why feature branches are bad

Feature branches differ from release branches in one very important way — a feature branch is intended to be merged back into trunk at some point, while a release branch will never be merged (it will diverge from trunk until it is no longer needed, at which point it can simply be abandoned or deleted). This means that the basic purpose of a feature branch is to delay integration of the work being done there. This is the fundamental problem with feature branches. They represent a development pattern that is the complete opposite of what we are trying to achieve with continuous integration. Remember, CI is more than just having an automated build, it is the principle that all work should be integrated into a shared mainline as soon as possible.

Let’s review some of the benefits of following CI:

  • The smaller each change to the mainline is, the easier it is to merge changes. Ideally, merging is rarely required, and is trivial when it is.
  • There is always a single “latest” version of the software that all developers are working on. This version should always work and ideally be releasable to production.
  • It encourages incremental change and progress. To continuously integrate, developers must find ways to split large features into small chunks that can be committed to the trunk. This is a good forcing mechanism for the creation of a well-factored code base.
  • Code checked in to trunk will trigger a continuous build and a run of the automated test suite, ensuring that the trunk is always compiling and unit-tested.
  • If you follow branch-for-release, it ensures that work gets deployed to production quickly and the feedback loop from your users is delayed as little as possible.

Now, consider how feature branching subverts those benefits:

  • When the feature branch is complete, there will be a “big merge” at the end, which could lead to a painful merge process and the risk that the merge is not done correctly.
  • There are now multiple “latest” versions of the software. Your team is no longer sharing a single code base.
  • It discourages incremental change. In fact, it encourages broad, sweeping changes across the whole code base to get a feature done, as opposed to the incremental approach which would require the creation of well-factored interfaces and abstractions.
  • You do not get the benefit of the continuous build unless you create a build for every feature branch.
  • Since it delays integration and release, it delays your ability to get feedback from your users.

A common objection to the anti-feature-branch stance is that “you just feel that way because branching is hard in subversion — come over here and see the light of git!” I grant that branching and merging are much cheaper and easier under git than they are under subversion. However, just because something is cheap and easy doesn’t make it right. Feature branching fundamentally breaks the continuous integration workflow, no matter how easy the “big merge” is at the end.

We hope you’ll consider our point-of-view on this and think about religiously branching for release and avoiding feature branches like the plague. Care to disagree? Let us know in the comments.