In this post I’d like to highlight a few features in Bamboo related to… um… “re-running”. So, what is re-running, and why do I find it interesting enough to write a blog post about it?
Let’s consider a hypothetical situation where you – The Developer – are supposed to implement yet another small and shiny feature in the application you’re working on with your colleagues. At some point you create a feature branch for your code, you start working on the feature, and some time later you push your code to the central repository. The only thing you don’t do is run the regression test suite – because running tests is what computers are meant to do, and your Bamboo CI server, powered by Amazon’s elastic instances, can run the full test suite faster than you could locally. For example, while developing Bamboo it takes ~30 minutes for our team’s Bamboo to check out, test and notify me about my commit breakages. Instead of doing that manually, I can spend the time saved doing some code review, or having a chocolate. And if everything goes OK, the changes pass the CI tests and one can safely merge to the master branch. But that’s not an interesting case.
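For completeness, here is the branch-and-push workflow described above as plain git commands. This is just a sketch – the throwaway local “remote” only makes the example self-contained, and the branch and file names are illustrative:

```shell
# Hypothetical branch-and-push workflow; the bare repo stands in for the
# central repository that Bamboo watches.
set -e
remote=$(mktemp -d)
repo=$(mktemp -d)
git init -q --bare "$remote"
cd "$repo"
git init -q -b master                 # -b requires git >= 2.28
git config user.email "dev@example.com"
git config user.name "Dev"
git remote add origin "$remote"
echo "app" > app.txt
git add app.txt
git commit -qm "base"
git push -qu origin master

git checkout -qb feature/small-shiny  # create the feature branch
echo "feature" > feature.txt
git add feature.txt
git commit -qm "implement the small and shiny feature"
git push -qu origin feature/small-shiny   # Bamboo's branch detection can pick this up
```

Once the branch appears in the central repository, Bamboo can build it automatically while you move on to other work.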
A failed build
Let’s consider a different outcome: your branch build in Bamboo fails, you get a notification that something went wrong, and you feel obliged to fix the issue. You might suspect that your innocent changes have broken some test in a not-so-related code area. But after examining the build logs from Bamboo and the test stacktraces, you’re no longer sure whether there was indeed a breakage, or it was some infrastructure problem. Or maybe the test is flaky and fails intermittently? Of course, in an ideal development environment there is no place for flaky tests or infrastructure problems, but… things happen, and it’s worth checking that possibility before committing another hour to fixing problems that do not exist. So you go to your Bamboo and look for a way to re-run a particular build – specifically, whatever has failed. Fortunately, such a function exists:
After clicking “Rerun failed/incomplete Jobs only”, Bamboo will re-run my failing tests. Depending on how many jobs it has to re-run, I might be able to have another chocolate. I might not, but at least I don’t have to run the failed tests manually and I can focus on something else. After another ~10 minutes I get my feedback: either the failures were intermittent (hurray!) and are now green, or – if the failures are exactly the same – I can be fairly sure I have broken something, and I should start a deeper investigation of what I did wrong.
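If you prefer scripting to clicking, Bamboo also exposes a REST queue resource for triggering plan builds. A hedged sketch, with placeholder host, plan key and credentials – and note the caveat that POSTing to the queue triggers a fresh build of the plan rather than the UI’s “rerun failed jobs” action:

```shell
# Hypothetical: queueing a Bamboo plan build from a script.
# BAMBOO_URL, PLAN_KEY and the credentials are placeholders.
set -e
BAMBOO_URL="https://bamboo.example.com"
PLAN_KEY="PROJ-PLAN"          # {projectKey}-{planKey} of the branch plan
url="$BAMBOO_URL/rest/api/latest/queue/$PLAN_KEY"
# Print the command instead of running it, since there is no server here:
echo curl -X POST -u "user:password" "$url"
```

Whether this is more convenient than the UI button depends on how often you find yourself re-triggering builds.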
Let’s consider another scenario. You created a new branch for your work, started implementing that shiny new feature, even wrote some tests, and are ready to integrate your work with your colleagues’. However, you would like to perform a dry run of your changes before merging them to master. Fortunately, Bamboo can help: it can execute your full CI suite on a ‘virtual’ merge between your branch and master (it’s called the ‘Gatekeeper’ option in Bamboo). Depending on the build outcome, you can then decide whether to merge your code upstream or continue working on the branch.
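What Bamboo does server-side here can be sketched locally with git alone: merge master into the feature branch without committing, run the test suite against the merged tree, then abort so the branch is left exactly as it was. The throwaway repository below just makes the sketch self-contained:

```shell
# A local approximation of the 'virtual' merge: nothing gets committed,
# and the branch history is untouched afterwards.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q -b master            # -b requires git >= 2.28
git config user.email "dev@example.com"
git config user.name "Dev"
echo "base" > app.txt
git add app.txt
git commit -qm "base"

git checkout -qb feature
echo "feature change" > feature.txt
git add feature.txt
git commit -qm "feature work"

git checkout -q master
echo "upstream change" > upstream.txt
git add upstream.txt
git commit -qm "master moved on"

git checkout -q feature
git merge --no-commit --no-ff master   # the 'virtual' merge: index holds the merged tree
# ...run your test suite against the merged tree here, e.g. ./run_tests.sh ...
git merge --abort                      # throw the merge away; the branch is untouched
```

The advantage of letting Bamboo do this instead is that the merged tree gets the full CI treatment on the server, not just whatever you remember to run locally.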
So let’s assume that your changes got tested by Bamboo and it says “all tests pass, you can merge”. But, as you don’t want to upset your colleagues with low-quality code on the master branch, you decide to raise a code review first. A little later (and it can be a few days later if your team is distributed around the world) the code review finishes and you receive some improvement suggestions. As they seem innocent, you decide to implement them. You make the commit (still on your branch), push it and await Bamboo’s approval. And just when you’re ready for the final merge, Bamboo goes red – a failure has been detected.
And indeed, after some investigation you find that the ‘improvement’ suggestions can’t work at all. It’s a good thing your team wrote those pesky unit tests in the past; otherwise you would have accidentally introduced a bug into your application. So after a little thinking you decide to merge your feature without those ‘improvements’ at all. But during the code review cycle other developers made commits to the master branch, so your branch may have gone stale. It would be nice if you could re-run the tests on a merge between the already-tested revision from your branch and the current head of master:
Of course, you could make a revert commit on your branch to roll back your changes (or even branch off your branch), but alternatively you can instruct Bamboo to run your branch with a custom revision and keep your repository one step cleaner:
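The two git-side alternatives mentioned above look roughly like this. The sketch builds a throwaway repository so it is self-contained; the sha handling is what matters:

```shell
# Two ways to get back to the already-tested revision without Bamboo's
# custom-revision feature. Branch and commit names are illustrative.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q -b master            # -b requires git >= 2.28
git config user.email "dev@example.com"
git config user.name "Dev"
echo "base" > app.txt
git add app.txt
git commit -qm "base"

git checkout -qb feature
echo "tested" > feature.txt
git add feature.txt
git commit -qm "tested work"
tested_sha=$(git rev-parse HEAD)      # the revision Bamboo already built green

echo "review tweak" >> feature.txt
git commit -qam "post-review tweak"   # the commit that turned out to be broken

# Option A: roll the tweak back with a revert commit (adds history):
# git revert --no-edit HEAD

# Option B: branch off the green revision and leave 'feature' alone:
git branch feature-pre-review "$tested_sha"
```

Both options add noise to the repository – extra commits or extra branches – which is exactly why pointing Bamboo at a custom revision can be the tidier choice.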
That way you can order Bamboo to run the tests against the merge between master and the pre-code-review version of your feature. If it goes green, you can safely merge to master (or configure Bamboo to do that for you).
Rerun whole build
However, let’s consider a more interesting scenario where the above build turns red – some failure occurred on the merged code. So you investigate the build logs to see how the changes on your branch could interfere with the newest changes from master. And, to your puzzlement, you notice a few completely unrelated tests failing. The stacktraces in the build log suggest an infrastructure problem (let’s imagine the failed tests rely on some external service – for example, they test interoperability with Google Talk apps). If only there were a way to re-run your previous build, the one that finished green (successful) – maybe it would fail now too?
Fortunately, since Bamboo 4.3, you can rerun existing builds easily:
The build will be run using exactly the same revisions as the ‘original’ one. If it finishes successfully, we can be nearly sure that it is indeed our change, combined with the newest master, that broke the tests. But if the re-run build finishes red, with the same symptoms as the one combined with the newest changes from master, then we can be nearly sure that our change is ‘clean’ and did not introduce any regression. So we can safely merge and push our changes to master, instead of wasting time investigating errors we didn’t cause and have no clue how to fix. And have another chocolate in the meantime.
The ‘re-run’ facility, while seemingly trivial, can come in very handy in day-to-day development. As an example, I’ve tried to highlight its usage while investigating the roots of failing tests (especially the dreadful, flaky ones). However, re-running builds can be very useful in other activities as well – deployment comes to mind first.
How about you? Does this feature fit somewhere in your project pipelines and processes? Where? Or is it a completely superfluous eye-candy feature that one can live without?