
If you have ever been in an Agile project, or something that looks like one, you have probably heard of the concept of spikes.

For those who haven’t, spikes are the agile response to reducing technical risk in a project. When you are not sure how to build something, or fear that a piece of functionality might take much longer than expected, and you need more confidence before committing to it, it’s time to run a spike. Usually timeboxed, spikes are technical investigations with the objective of clarifying a certain aspect of the project.

In my current team, we are developing a few tools that are quite specific to our context, and we were not sure how to solve a few issues, so we have been playing spike cards quite frequently.

None of this is new, and I’m sure most of you have done it before. If you did it the way I used to, you would write a spike card, something like “investigate technology X for problem Y“, spend two days on it, and have an answer in your head once you were finished.

In our current context, team members were rotating quite quickly, so we were worried that any knowledge we gained from spikes would be lost if it lived only, let’s say, in our heads.

Not wanting just to write the findings up, as we first thought about doing, we decided to tackle the problem with the golden hammer of agile development: tests!

So, instead of writing tests to decide how we should write our code, we started writing tests that stated the assumptions we had about the things we were investigating, so we could verify them (or not) and have executable documentation to show other people.

For example, here is some code we wrote to investigate how ActiveRecord would work in different situations:

it 'should execute specific migration' do
  # (the migration call was elided in the original snippet; something like
  # this presumably ran here or in a before block)
  ActiveRecord::Migrator.migrate(ActiveRecord::Migrator.migrations_paths, 1) { |migration| true }
  table_exists?("products", @db_name).should be_true
  table_exists?("items", @db_name).should be_false
end

it 'should execute migrations to a specific version' do
  ActiveRecord::Migrator.migrate(ActiveRecord::Migrator.migrations_paths, 2) { |migration| true }
  table_exists?("products", @db_name).should be_true
  table_exists?("items", @db_name).should be_true
  table_exists?("customers", @db_name).should be_false
end

it 'should not execute following migrations if one of them fails' do
  begin
    ActiveRecord::Migrator.migrate(ActiveRecord::Migrator.migrations_paths, nil) { |migration| true }
  rescue StandardError => e
    puts "Error: #{e.inspect}"
  end
  table_exists?("invalid", @db_name).should be_true
  m =, ActiveRecord::Migrator.migrations_paths, nil)
  m.current_version.should == 3
  table_exists?("products", @db_name).should be_true
  table_exists?("items", @db_name).should be_true
  table_exists?("customers", @db_name).should be_true
  table_exists?("another_table", @db_name).should be_false
end
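The same approach works for any library whose behaviour you are unsure about. As a self-contained illustration (my own example, not from the spike above), here is the pattern using nothing but Ruby’s bundled Minitest and JSON:

```ruby
require 'minitest/autorun'
require 'json'

# Assumption tests: each test states something we *believe* about the
# library we are investigating (here Ruby's bundled JSON parser), so the
# suite doubles as executable documentation of what the spike taught us.
class JsonAssumptionsTest < Minitest::Test
  def test_keys_are_parsed_as_strings_by_default
    assert_equal({ "a" => 1 }, JSON.parse('{"a": 1}'))
  end

  def test_symbolize_names_turns_keys_into_symbols
    assert_equal({ a: 1 }, JSON.parse('{"a": 1}', symbolize_names: true))
  end
end
```

If an assumption turns out to be wrong, the failing test tells the next person exactly which belief did not survive contact with reality.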

We have used this technique only a few times, and I won’t guarantee it will always be the best option, but so far the result for us has been code that can be executed, demonstrated and easily extended by others, making it easier to transfer knowledge within our team.


Recently I’ve started working often with Puppet, using it to provision environments for the project I’m working on. One of the things I quickly realised when using it was how long the feedback loop was between committing code and actually verifying that the manifest worked as intended. In my situation, it looked something like this:

  1. Work on puppet manifests, making a few changes
  2. Commit code to repository
  3. Wait for the build to finish, which only verified the syntax
  4. Wait for latest version to be published on the puppet master
  5. Wait for next sync between master and client
  6. Check that the configuration was applied correctly on the client

As you can see, not very simple. If you also consider that I am not very experienced with Puppet, you can imagine how often I ended up retrying things through this very long loop, which would wear out anyone’s patience.

Testing Infrastructure Code

Coming from a development background, and being used to very fast feedback on the code I write, I went searching for testing tools that could ease my pain.

Unfortunately, most of the tools I found, such as rspec-puppet, were not ideal, since they focus on unit testing. I’m not sure what others think of it, but in the case of Puppet manifests and Chef recipes, unit testing doesn’t make much sense to me: there is no real code being executed, and the tests end up looking like a kind of reverse programming, where you simply assert what you just wrote, which doesn’t guarantee that the code actually works.
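To illustrate what I mean, here is a sketch in rspec-puppet’s matcher style (the module name is made up) for a manifest declaring `package { 'zip': ensure => installed }` — notice how the test is little more than the manifest written backwards:

```ruby
# Hypothetical rspec-puppet style spec for a manifest containing:
#   package { 'zip': ensure => installed }
# The test merely mirrors the declaration in the manifest; nothing is
# ever actually installed or verified on a real machine.
require 'spec_helper'

describe 'ziptools' do
  it { should contain_package('zip').with_ensure('installed') }
end
```

The spec passes as long as the catalog contains the resource you declared, regardless of whether applying it would actually work.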

Introducing Toft

Luckily, one of the options I found was Toft, a library aimed at writing integration tests for infrastructure code using Linux containers. The main idea is that you write Cucumber features verifying what you expect the target machine to have (packages, files, folders, etc.), and Toft starts a Linux container, applies your configuration, and runs your tests against it.

It can also be run from a Vagrant box, so you can have your tests running on your Mac, which is quite handy.

Features are written using normal Cucumber steps, which mostly rely on ssh’ing into the target machine and verifying what is going on in it, so they are quite easy to extend and adapt to your needs. Here is an example of a feature verifying that a specific package has been installed:

Scenario: Check that package was installed on centos box
  Given I have a clean running node n1
  When I run puppet manifest "manifests/test_install.pp" on node "n1"
  Then Node "n1" should have package "zip" installed in the centos box
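Under the hood, a step like the last one boils down to running a command over ssh and checking its result. Here is a rough sketch of such a step definition — the `run_on` helper is an assumption of mine, not Toft’s actual API:

```ruby
# Hypothetical Cucumber step definition in the spirit of Toft's steps.
# run_on is an assumed helper that ssh's into the named container, runs
# the command, and returns its exit status; Toft's real API may differ.
Then /^Node "([^"]*)" should have package "([^"]*)" installed in the centos box$/ do |node, package|
  # On a CentOS box, `rpm -q <package>` exits 0 only if the package is installed.
  status = run_on(node, "rpm -q #{package}")
  status.should == 0
end
```

Because the assertions run against a real (containerised) machine, a green build actually means the manifest did what you expected.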

We’ve started using it in our team when writing new manifests and have also set up a CI build with it, which is quite useful for guaranteeing that our manifests keep working over time.

Toft is still in its early days, but I believe it has a lot of potential. If you are using Chef or Puppet, you should definitely check it out at: