All posts in Deployment

Deployment Consistency

We’re not sure where it came from, but there seems to be this general attitude among developers and operations staff – it’s acceptable for developers to pay minimal attention to how their applications are deployed and it’s acceptable for operations staff to dance around the issue by cooking up all manner of automation and configuration solutions. I like to think that many people on the ops side of the fence are just as clever and creative (if not more so) than their dev counterparts – many of them are just developers interested in solving different classes of problems. However, when developers embrace application deployment as a feature of their software, a lot of other best practices “just happen”.

This may sound obvious, but when developers take ownership of their application’s deployment process, they understand their application’s deployment process. This completely closes the knowledge gap between how developers deploy their application in a test environment and how operations deploy that same application in the production environment. This creates a sort of “horizontal consistency” in the deployment process between the development team and the operations team. Remember, consistency is a key component of a high quality deployment process.
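To make that “horizontal consistency” concrete, here’s a minimal sketch (in Python, with hypothetical paths, environment names, and config file names) of the kind of single, environment-parameterized deployment entry point both teams could share – the developer’s test deployment and the operations team’s production deployment run the exact same steps and differ only in configuration.

```python
#!/usr/bin/env python3
"""Hypothetical shared deployment entry point for dev and ops.

Environment names, paths, and config file names below are illustrative only;
the point is that every environment runs the exact same steps.
"""
import argparse
import shutil
from pathlib import Path

# Per-environment settings are data; the deployment procedure itself never changes.
ENVIRONMENTS = {
    "dev":  {"target": Path(r"C:\apps\myapp-dev"),       "config": "app.dev.config"},
    "prod": {"target": Path(r"\\prodserver\apps\myapp"), "config": "app.prod.config"},
}

def deploy(artifact_dir: Path, environment: str) -> None:
    env = ENVIRONMENTS[environment]
    target = env["target"]
    target.mkdir(parents=True, exist_ok=True)

    # Copy every build artifact, then lay the environment-specific config on top.
    for item in artifact_dir.iterdir():
        if item.is_file():
            shutil.copy2(item, target / item.name)
    shutil.copy2(artifact_dir / env["config"], target / "app.config")
    print(f"Deployed {artifact_dir} to {target} as '{environment}'")

if __name__ == "__main__":
    parser = argparse.ArgumentParser(description="Deploy build artifacts to an environment")
    parser.add_argument("artifact_dir", type=Path)
    parser.add_argument("environment", choices=sorted(ENVIRONMENTS))
    args = parser.parse_args()
    deploy(args.artifact_dir, args.environment)
```

Whether it’s a script like this, an installer, or a full release-management tool, the important part is that developers and operations run the same thing.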

When developers consider application deployment at every stage of development – instead of just at the end – they’re able to shake out many common issues ahead of time in their own local testing/development environment instead of in production. Issues like forgetting to mark a Visual Studio project’s assembly reference as “Copy Local” – so a necessary DLL never makes it into the application’s deployable artifacts – become a thing of the past. Imagine if that same problem reached production: the operations team deploys the application, but it crashes at start-up with an error like “The referenced assembly ‘Foo’ could not be resolved”. That issue would be reported as an application bug and the release would have to be rolled back. Had the developer thought about the deployment process and been able to perform the deployment exactly how the operations team performs it, that bug would’ve been caught immediately and likely fixed just as quickly.
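One cheap way to catch that class of problem before it ever leaves a developer’s machine is a simple artifact check in the build itself. The Python sketch below is purely illustrative – the expected file list and output folder are hypothetical – but it shows the idea: fail the build if anything that should ship is missing from the deployable output.

```python
#!/usr/bin/env python3
"""Hypothetical pre-deployment artifact check.

Fails the build if an expected file (for example, a referenced DLL that was
never marked "Copy Local") is missing from the build output folder.
"""
import sys
from pathlib import Path

# Illustrative only; in practice this list would be generated from your project.
EXPECTED_FILES = ["MyApp.exe", "MyApp.exe.config", "Foo.dll"]

def verify_artifacts(output_dir: Path) -> int:
    missing = [name for name in EXPECTED_FILES if not (output_dir / name).exists()]
    if missing:
        print(f"Build output {output_dir} is missing: {', '.join(missing)}")
        return 1  # a non-zero exit code breaks the build, which is exactly what we want
    print("All expected deployment artifacts are present.")
    return 0

if __name__ == "__main__":
    sys.exit(verify_artifacts(Path(sys.argv[1])))
```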

Also, in many organizations, as developers begin to share components for common application subsystems like data access, messaging, and UIs, they tend to form shared libraries for those subsystems. These shared libraries can then be referenced by other applications, letting teams build specific applications more quickly. When developers treat application deployment as a feature of the application as well, the same effect begins to occur: common deployment logic and design coalesces into shared components. It then becomes even easier to re-use that deployment logic in other applications of a similar “class” – Windows Services, for example. This creates a sort of “vertical consistency” between applications in your portfolio.
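As a rough sketch of what such a shared component might look like for the “Windows Service” class of applications, the Python below wraps the standard sc.exe commands behind one reusable install function. The service name and path are hypothetical, and a real shared library would add error handling, credentials, logging, and so on – this is only meant to illustrate deployment logic that every service in the portfolio can reuse.

```python
"""Hypothetical shared deployment helper for the "Windows Service" class of apps.

Wraps the built-in sc.exe tool so every service in the portfolio is installed
and started the same way. Service names and paths are illustrative only.
"""
import subprocess

def install_service(name: str, binary_path: str) -> None:
    """(Re)install and start a Windows Service using sc.exe."""
    # Remove any previous installation; failures are fine if it doesn't exist yet.
    subprocess.run(["sc", "stop", name], check=False)
    subprocess.run(["sc", "delete", name], check=False)

    # sc.exe expects the option name (with its trailing '=') and the value
    # as separate arguments, e.g.:  sc create MyService binPath= <path>
    subprocess.run(["sc", "create", name, "binPath=", binary_path, "start=", "auto"],
                   check=True)
    subprocess.run(["sc", "start", name], check=True)

if __name__ == "__main__":
    # Hypothetical service; any application of the same "class" reuses this logic.
    install_service("MyCompany.OrderProcessor",
                    r"C:\apps\orderprocessor\OrderProcessor.exe")
```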

So if we combine this “vertical consistency” (consistent deployment processes across applications) with the “horizontal consistency” we mentioned earlier (consistent deployment processes between the development team and the operations team), we get the plot shown in the chart at the beginning of the article. As you can see, the sweet spot you want to aim for is as close to the intersection of those two “consistency axes” as possible. Of course, in reality, reaching the absolute center of that chart may be impossible for certain classes of applications, but it’s still a helpful guideline to keep in mind when designing and improving your application deployment procedures.

As developers begin to embrace application deployment as a feature of their software, instead of thinking of it as a problem for the operations team to automate away, you’ll see other best practices – like the ones we’ve listed here – begin to fall into place without anyone even actively trying.

Do Not Fear Build Breaks

Pretty much any organization that does internal software development these days does so with a team of multiple developers. Your development team breaks down a project into tasks that can be done in parallel, much like how those multi-core CPUs in your computers allow for processes and threads to work in parallel. The goal is to get more work done in less time (theoretically).

And just like how a multi-threaded application needs some mechanism to synchronize its actions, developers synchronize their work using a version control system. Then they validate that work using a continuous integration system to compile, analyze, and test their changes to the code base.
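At its core, a continuous integration job is just a script that runs each validation step and reports failure by exiting non-zero – that exit code is what turns the status board red. Here’s a minimal sketch in Python; the build, analysis, and test commands are hypothetical placeholders, so substitute whatever your toolchain actually uses.

```python
#!/usr/bin/env python3
"""Minimal sketch of a CI job: run each step in order, fail fast, exit non-zero.

The commands below are placeholders; a real pipeline would invoke your actual
compiler, analyzers, and test runners.
"""
import subprocess
import sys

STEPS = [
    ("compile", ["dotnet", "build", "MySolution.sln"]),        # placeholder
    ("analyze", ["dotnet", "format", "--verify-no-changes"]),  # placeholder
    ("test",    ["dotnet", "test", "MySolution.sln"]),         # placeholder
]

def main() -> int:
    for name, command in STEPS:
        print(f"--- {name}: {' '.join(command)}")
        result = subprocess.run(command)
        if result.returncode != 0:
            # A non-zero exit is a "broken build" - the alert everyone then sees.
            print(f"Step '{name}' failed with exit code {result.returncode}")
            return result.returncode
    print("All steps passed.")
    return 0

if __name__ == "__main__":
    sys.exit(main())
```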

Of course all of your developers follow best practices, like making sure their code changes compile locally, all tests pass, etc. This is easy for them to do because of your well thought out build system (right?!). But every now and then a change makes it in that… BREAKS THE BUILD. Someone checked in a change that caused the compilation to fail or a test to fail, causing your continuous integration system’s status board to go red, error mails to fly out, alarms to start going off, sirens to blare… before you know it, dogs and cats are living together and there is mass hysteria.

But is it really that big of a deal?

The whole point of a continuous integration system is in its name: to continuously integrate your application’s code base. It is there to catch integration problems as early as possible. Of course there are exceptions, but most of the time a build break should be seen as nothing more than a minor inconvenience. You and your team should want builds to break. A build break is great news: it means a problem was caught right away.

And even if you didn’t configure your continuous integration system to alert other developers as loudly as possible, with proper communication within the team, you shouldn’t be afraid to let everyone else know what happened. Raise your hand, admit the build break may have been due to your change, start investigating the issue, and start working on a resolution if necessary. Take ownership of the build break and try to fix it. Relax and take a deep breath. It’s not the end of the world, it’s just your continuous integration system fulfilling its purpose.

Of course if you’re breaking the build nearly every time you commit your code changes, it may be time to review your own local development practices. But don’t let build breaks scare you or your team into implementing some draconian, complicated branching strategy to isolate changes. Remember, every time you branch you have to merge. Every time you create a branch, you make a conscious decision to delay integration. And delaying integration undermines your continuous integration system’s entire purpose, which, again, is to identify problems with the code base – compilation errors, testing errors, etc. – as quickly as possible.

Keep your branch model simple. By following an SCM pattern like the Mainline pattern and relying on your continuous integration system to validate and integrate your changes, you reduce your team’s merging and synchronization efforts.

You want to experience build breaks; they are a normal part of a healthy application life-cycle. Do not fear build breaks!

A DevOps Checklist

This will sound hypocritical once you read this article, but if you couldn’t tell by now, we’re not fans of checklists here at DevOps on Windows. Checklists can easily mislead people into a false sense of security. “We’re following the checklist – we must be DevOps now! But why do we still have so many production issues?!”

As we’ve said before, to us DevOps isn’t about following some rote methodology, but about understanding the principles behind operations-friendly software and following best practices to move your processes forward.

But we also understand that sometimes a “checklist-ish-type-of-list” can be a helpful guide. With that in mind, here’s our “DevOps Checklist”!

This entire checklist can be boiled down to our first principle of DevOps: is your software simple to operate and easy to change?

Minimize Environmental Dependencies

Hopefully we’re not beating a dead horse here, but as we’ve stated in several other articles, one of your goals with software development should be to minimize environmental dependencies. This goes for everything from the environment required to build your software from source code into deployable binaries, to the requirements of the computers that will run your application.

Many people think that this is just some pie in the sky idea that sounds good on paper, but isn’t worth pursuing in practice. “Just install the Microsoft ReportViewer Redistributable on every machine that needs to run my app – it’s just one little installation prerequisite, what’s the problem?” That attitude is an incredibly slippery slope that will grant you a free one-way ticket to build & deployment hell if you let it happen.
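One way to keep yourself honest here is to make every build and runtime requirement explicit and checkable, instead of letting them accumulate as undocumented installs on a handful of machines. The Python sketch below is only an illustration – the prerequisite names and checks are hypothetical – but the idea is a manifest that lives in source control and a script anyone can run to verify that a machine can actually build the software.

```python
#!/usr/bin/env python3
"""Hypothetical prerequisite check, kept in source control alongside the code.

Instead of undocumented installs living on one magic machine, every required
tool or SDK is listed and verified here. The entries below are examples only.
"""
import shutil
import sys
from pathlib import Path

# Illustrative prerequisites: (description, check that returns True when satisfied)
PREREQUISITES = [
    ("MSBuild on PATH",   lambda: shutil.which("msbuild") is not None),
    ("NuGet on PATH",     lambda: shutil.which("nuget") is not None),
    ("Legacy SDK folder", lambda: Path(r"C:\tools\legacy-sdk").exists()),
]

def main() -> int:
    missing = [name for name, check in PREREQUISITES if not check()]
    for name in missing:
        print(f"MISSING: {name}")
    if missing:
        print("This machine cannot build the software; see the list above.")
        return 1
    print("All documented prerequisites are present.")
    return 0

if __name__ == "__main__":
    sys.exit(main())
```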

To help further drive the point home about why you should always strive to minimize environmental dependencies for your applications, here’s a little horror story about some real world pain and suffering caused by not subscribing to this philosophy.

At one point in my career I worked for a firm that had to maintain several legacy versions of their software due to contractual obligations. Developers would occasionally have to make bug fixes to these legacy versions of the code base, and being on the “build team”, I had to produce the deployable artifacts of this legacy software. At the time, there was only one computer left in the whole company that was capable of compiling this legacy software… and it was literally under someone’s desk in the middle of the office. The fate of the legacy versions of this software relied on a little old PC on the floor.

Naturally, being the good build team that we were, my team and I decided to minimize this single-system risk by cataloging all of the dependencies we could find on this computer that were necessary to build the software, and trying to reproduce this “build server” on a virtual machine. Try as we might, we could not reproduce “the little workstation that could”. Several of us tried for weeks, pawing through the installed programs, shoveling through mountains of registry keys… we just couldn’t figure out what was so special about this “build server”. We even reached out to one of the (then working remotely) employees who helped create the original build server, but he couldn’t remember what made it special either.

Then, as fate would have it, the build server started having hardware issues. It was reporting hardware errors in the Event Log left and right until one day it just refused to power on. Our sysadmins said it was a motherboard failure. Normally a motherboard failure on a workstation is just a minor inconvenience – just throw it out and swap in a new one. But given that this one magical workstation was critical to creating the release versions of this legacy application, this motherboard failure became a Priority 1 Big Problem™.

So to be able to keep releasing software, this is essentially what we did:

[Image: “This could be your build server”]

I literally had to plug that “build server’s” hard drive into a spare workstation via an external hard drive adapter and boot off of it, just so I could compile the release version of this legacy application.

Obviously this is an extreme case, but hopefully it helps illustrate our point about why you should minimize environmental dependencies for building and running your software. By doing so, you ensure that your application – a.k.a. your organization’s intellectual property – can be recreated from its source code almost indefinitely.