
Pairing in History

I was reminded of a post I made after visiting Bletchley Park and the National Museum of Computing some years ago and realised it needed moving to my own blog. So here it is:

While visiting Bletchley Park and the National Museum of Computing I discovered some examples from history where pairing really worked.

The first example was in the “tunny room”. Messages from the German high command, intercepted at Knockholt, were transcribed here during World War II for processing in the decryption machines. When a transmission was received on the incoming telegraph, it was transcribed onto tape by two operatives so that it could be fed into the machine. As the tour guide pointed out:

“two girls were used for this, with the assumption being no two girls will make the same mistake”.

Reducing mistakes is one of the key benefits of pairing in software development teams, and it prevents defects appearing further downstream. In the case of the Bletchley workers, the letters they were typing would have been Lorenz code, a raw ciphertext received via headphones, so a misplaced letter could have thrown the decryption out completely, causing delays and misinformation.

The second pairing example was in the post-war section of the museum, home to a restoration project for a 1950s punch card archiving system. The room, which does the work that a database might do today, is full of oily, complicated-looking machinery. The first machine was the puncher, used to mark the cards with the data that needed storing – in the example at the museum, these were orders for a sales team. The pairing occurred here, at the point of data entry. Two operatives would have slightly different machines: one would punch a hole slightly above the number mark on the card, and one slightly below. A valid mark was therefore represented by an oval shape where the two punches intersected. A second machine, the verifier, was then used to ensure there were no errors on the cards. It detected single holes in the punch cards and flagged these as errors by outputting the values on a pink card. Cards that only had oval holes went on to the next stage. We still use pink cards on our Kanban wall to represent defects.

In this example the pairing was inherently built into the system through the mechanics of the machines involved and must have prevented a great deal of erroneous data getting through.

So, if pairing was so useful during the war, and built into machines in the 50s, why isn’t it the de facto standard for software development today? Why hasn’t it simply grown in popularity since the 50s? I’d not heard of pair programming until I’d been working in the industry for two years, even having spent three at university. There are many benefits to pairing in addition to the reduced defect rate, yet the practice doesn’t seem to have stuck around.

There does seem to be a resurgence of interest in pairing as agile software development gathers pace, and I’m sure most of the luminaries of the agile world have been doing it for years. But why hasn’t it always been widespread?

Tests Aren’t Just For Testing: Stop Using The Debugger

We had a successful final hour of the day today. Our task for the past few weeks has been to read extra data from the XML metadata files we receive from the major music labels. One of the major labels provides their data with slightly askew logic that differs from all of the others. They prefer to omit an element to show that a given release no longer has the value previously held in that element – this requires us to know the previous state of a release so that we can compare it with the changes. This is quite a common computing problem but, when working with the type of legacy code project that we do, it can become confusing.

We’d been trying to solve this new problem for most of the day and almost had it complete, except two tests still needed to pass. We couldn’t get the logic right so that both would pass; it was either one or the other – and as these were acceptance tests, the amount of code they covered was quite large. Add to this that we’re working on a VB.NET project that hasn’t been updated for around four years and you’ll see why frustration started to build.

Debugging

My teammates Tony and Will had been trying to find out what was going on using the debugger, and hadn’t managed to sort it out when they asked for an additional pair of eyes. I heard about their efforts and briefly looked at the code, but immediately thought back to my days spent with agile coach Kevin Rutherford. Kevin used to tut disapprovingly every time we started the debugger, suggesting that we were just missing a test instead. I remember thinking that the fact we were missing a test was quite obvious, but that the steps to take to find that missing test were the hard part.

The Saff Squeeze

Today we used techniques from Kent Beck’s writings on the “Saff Squeeze”. Tony posted a quick blog about the resources we found after we’d solved the problem – it turns out we’d followed the process reasonably closely. We identified two loops in the controller method that could be extracted into pure functions (J.B. Rainsberger has helped us a lot in our legacy code adventure). Once extracted, we made these public and static (so they used no internal state) and could then write tests directly against this functional part of the code. Passing in a couple of simple int arrays proved the code worked as expected, so we moved on to the next loop. We followed the same process again, and a couple of simple tests asserted that these functions were doing what we wanted. This meant the problem must be with the caller – more specifically, with the values of the parameters we were passing in.
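
A minimal sketch of the shape of that extraction – in C# rather than the VB.NET of the real project, and with invented names. The point is only that the loop becomes a public static function with no internal state, which can then be exercised with plain int arrays:

    using System;
    using NUnit.Framework;

    public static class ReleaseCounting
    {
        // The extracted loop: public, static and free of instance state,
        // so it can be tested directly with simple inputs.
        public static int CountMatches(int[] previous, int[] current)
        {
            var matches = 0;
            foreach (var value in current)
            {
                if (Array.IndexOf(previous, value) >= 0)
                {
                    matches++;
                }
            }
            return matches;
        }
    }

    [TestFixture]
    public class ReleaseCountingTests
    {
        [Test]
        public void Counts_values_present_in_both_arrays()
        {
            // A couple of simple int arrays is enough to prove the loop itself.
            Assert.AreEqual(2, ReleaseCounting.CountMatches(new[] { 1, 2, 3 }, new[] { 2, 3, 4 }));
        }
    }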

A quick look at how the variables used as parameters were built up, and another quick change to the tests’ inputs, proved that the problem lay with those parameters. We’d filtered our input list incorrectly, forgetting to coalesce nulls to false – an edge case we’d missed. We had tests for this class, but not for that edge case.

So, with that in mind, we wrote a new test to ensure the nulls were replaced with a value of false; the parameters to the extracted functions were now correct and our two acceptance tests passed. After a bit of refactoring and exploratory testing tomorrow, this MMF will be done.
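
The fix looked something like the sketch below – again in C# with invented names, so treat it as the shape rather than the actual code. Nullable flags are coalesced to false before the list is filtered, and a small test pins the edge case down:

    using System.Collections.Generic;
    using System.Linq;
    using NUnit.Framework;

    public class Release
    {
        public bool? IsDeleted { get; set; }
    }

    public static class ReleaseFiltering
    {
        // Nulls are coalesced to false, so a release with a missing flag
        // is treated the same as one explicitly marked not deleted.
        public static IEnumerable<Release> ActiveOnly(IEnumerable<Release> releases)
        {
            return releases.Where(r => !(r.IsDeleted ?? false));
        }
    }

    [TestFixture]
    public class ReleaseFilteringTests
    {
        [Test]
        public void A_null_flag_is_treated_as_false_and_the_release_is_kept()
        {
            var releases = new[] { new Release { IsDeleted = null } };
            Assert.AreEqual(1, ReleaseFiltering.ActiveOnly(releases).Count());
        }
    }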

Test Rather Than Debug

If I’d just strolled over to my teammates and said “stop using the debugger and write more tests” they’d have ignored me. By breaking down the code and isolating the logic into pure functions, which we could then test, we’ve seen directly the value we got from adding these tests. The best part of all is that these tests will now last for the lifetime of this code – the debugging session, once over, can add no further value.

Kevin also used to say “if you think a private method should be public, you’re probably missing a class”, and he’d be right about the two methods we extracted here. The methods we made public so that we could test them were already static, so following Fowler’s “Move Method” refactoring we could safely move them to their own respective classes. These two classes now have their own behaviour, helping us meet the Single Responsibility Principle and use dependency injection.
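
Again a sketch with invented names, just to show the direction of travel: the pure function gets a class of its own behind an interface, and the controller takes it as a constructor dependency instead of owning the loop itself:

    public interface IReleaseCounter
    {
        int CountMatches(int[] previous, int[] current);
    }

    public class ReleaseCounter : IReleaseCounter
    {
        public int CountMatches(int[] previous, int[] current)
        {
            // Same pure logic as the extracted static function shown earlier.
            var matches = 0;
            foreach (var value in current)
            {
                if (System.Array.IndexOf(previous, value) >= 0)
                {
                    matches++;
                }
            }
            return matches;
        }
    }

    public class ReleaseController
    {
        private readonly IReleaseCounter counter;

        public ReleaseController(IReleaseCounter counter)
        {
            this.counter = counter; // injected, so the controller no longer owns the loop
        }
    }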

Hopefully we’ll see this quick episode today as a good reason to use this technique again, and I thought I’d blog about it as evidence for others too.

A Sea of Blue: Spotting Abstraction with a Theme

The other week I inadvertently coined a new phrase. It may actually be two phrases, and it comes from our recent concentration on abstraction in our controllers and main calculation engines. I just thought they were among the silly things I say to help turn programming concepts into easier-to-use language – much like we use design patterns to communicate intent when pairing or discussing architecture. I was encouraged to blog about it by my colleague Shaun, who thinks this might go global.

Phrase 1: “A Sea Of Blue”

This is what we see when we open up a controller or engine that has a number of service objects injected in the constructor. The name comes from the fact that we work in C# in Visual Studio, and our theme colours interfaces in light blue. It is generally regarded as a good thing, because the class in question will be more testable: the services you don’t need can be easily mocked. We can return specific messages from some services, verify that others are called in a specific way, and provide a mock with no setup or verification to skip over unwanted behaviour.
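
A sketch of what that looks like – the interfaces and names are invented, but the shape is a constructor full of light-blue interfaces and a test that only sets up the service it actually cares about, handing Moq defaults to the rest:

    using Moq;
    using NUnit.Framework;

    public interface IQuoteService { decimal QuoteFor(string reference); }
    public interface IAuditService { void Record(string message); }
    public interface INotificationService { void Notify(string reference); }

    public class PricingController
    {
        private readonly IQuoteService quotes;
        private readonly IAuditService audit;
        private readonly INotificationService notifications;

        // The "sea of blue": every dependency arrives as an interface.
        public PricingController(IQuoteService quotes, IAuditService audit, INotificationService notifications)
        {
            this.quotes = quotes;
            this.audit = audit;
            this.notifications = notifications;
        }

        public decimal PriceFor(string reference)
        {
            var price = quotes.QuoteFor(reference);
            audit.Record("Priced " + reference);
            return price;
        }
    }

    [TestFixture]
    public class PricingControllerTests
    {
        [Test]
        public void Only_the_services_a_test_cares_about_need_setting_up()
        {
            var quotes = new Mock<IQuoteService>();
            quotes.Setup(q => q.QuoteFor("ABC123")).Returns(42m);

            // The other mocks need no setup or verification; unwanted behaviour is skipped.
            var controller = new PricingController(
                quotes.Object,
                new Mock<IAuditService>().Object,
                new Mock<INotificationService>().Object);

            Assert.AreEqual(42m, controller.PriceFor("ABC123"));
        }
    }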

We’ve found that sometimes the sea of blue means you have a long parameter list. We usually combat this by defining DDD-style aggregates for service objects with related behaviour and injecting an aggregate service instead – maybe these are more of a lake of blue, or maybe I’m getting carried away.
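
Continuing the invented example above, an aggregate of that sort might group the two related services behind one interface, so the controller’s constructor takes a single dependency rather than several:

    // Sketch only: two related services wrapped in one DDD-style aggregate.
    public interface IReleaseAuditing
    {
        void RecordChange(string releaseId);
        void NotifyLabel(string releaseId);
    }

    public class ReleaseAuditing : IReleaseAuditing
    {
        private readonly IAuditService audit;
        private readonly INotificationService notifications;

        public ReleaseAuditing(IAuditService audit, INotificationService notifications)
        {
            this.audit = audit;
            this.notifications = notifications;
        }

        public void RecordChange(string releaseId) { audit.Record("Changed " + releaseId); }
        public void NotifyLabel(string releaseId) { notifications.Notify(releaseId); }
    }

    // Callers now inject IReleaseAuditing instead of the two services it wraps.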

Phrase 2: “Making a Bluey”

This one is probably less likely to catch on, especially as most people think of something completely different when you say “bluey”. It means extracting an interface – or Ctrl+Shift+R and “Extract Interface” if you want to be fancy. We’ve had discussions recently about whether we sometimes do this too soon: do you need an abstraction before you have two concrete implementations? I often argue that the second implementation is the mock implementation you use in tests. My colleague (Shaun again) blogged about the way we should mock roles rather than types, and one of the things we’ve discussed before is that you should understand why you use a tool like Moq by actually hand-writing a mock or stub of the class rather than reaching for Moq. This makes it more obvious that you now have two implementations of the abstraction in existence, and so the argument against extracting early no longer stands.
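
For example, a hand-rolled stub of the invented IQuoteService from earlier – once something like this exists, the interface plainly has two implementations, the production class and this one:

    // A hand-written stub that plays the same role Moq would.
    public class StubQuoteService : IQuoteService
    {
        private readonly decimal quote;

        public StubQuoteService(decimal quote)
        {
            this.quote = quote;
        }

        public decimal QuoteFor(string reference)
        {
            return quote; // always returns the canned value, whatever the reference
        }
    }

    // In a test: var controller = new PricingController(new StubQuoteService(42m), ...);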

I like the concept of emergent design and the use of test-driven design to define the API of a class. I think this extraction of an interface, so that it can be mocked effectively and used without worrying about the concrete implementation, is an effect of the tests driving the code. The resultant API is cleaner, the classes can be well tested, and your code can follow the SOLID principles.

The Theme

I’ll get our very latest version of the theme available soon on the company blog, but for now you can try the old version. It works well in Visual Studio 2008, but because of the OpenType font restrictions in WPF (and therefore in Visual Studio 2010) you can no longer use Inconsolata, and some of the colours need updating:

http://codeweavers.wordpress.com/2010/05/27/a-better-colour-scheme-for-visual-studio/

Our new theme includes updates for 2010 and ReSharper 5 too.