December 29, 2008, 11:39 AM — The first big dirty secret of coding is that to do it well, you need to spend as much time coding around your core code as you spend coding the core code itself.
Wow. That badly needs to be rephrased, doesn't it? There is a Limerick or tongue twister lurking in there somewhere for sure. Let us try this re-factoring: code is a big lump of unwieldy text. Managing that code base requires tools. Those tools are written in...yup...more code. Testing the core code base requires tests. Those tests are written in...yup...more code. Code manages code. Code tests code. If you are fond of infinite regress, you can then worry about the code that tests the code that tests the code...And so it goes. Code everywhere. An infestation of the stuff.
The second big dirty secret of coding is that testing and tooling take a lot of time, a lot of resources, and a lot of smarts to do well. As an industry, I believe informatics is really struggling with that combination of demands. The price we must pay for the power of testing is high. Everybody knows that testing is good for quality. Everybody knows that test-driven development and agile methods have much to contribute to the state of the computing arts. Everybody also knows that doing it right adds to the costs and stretches the timelines for shipping.
Now, faced with this sort of reality, it is the job of management types to probe for opportunities to reduce costs and timescales. A classic mechanism for so doing is to outsource non-core competencies. "Is it possible to contract out the testing?", they say. "Can we get freelancers to blitz it while the core team spends its time moving the code base forward?", they ask.
This is where it gets tricky. Testing is not a from-the-neck-down activity. It requires domain smarts. Doing it well means being intimate with the problem domain, intimate with the code under test, and intimate with the development team. I have seen more than one attempt at outsourcing testing fall down because the core team spent its time explaining the domain and the code base to the externally contracted testing team.
There is one chink of light on the horizon that I can see. A chink that hopefully gets bigger as we head into 2009. A big chunk of a good testing strategy involves working out what to test. A big chunk of that activity involves working out which parts of the code base are being exercised by the existing tests and which are not. In an ideal world, every line of code (in fact, every sub-expression in every line of code) would have a set of tests that put it through its paces.
A relatively mechanical process can be put in place in which an existing set of tests is executed to determine the coverage of the target code base. The coverage reports can then feed the creation of more tests. A definition of progress thus becomes crisp: you are making progress if your coverage numbers are trending upwards in terms of lines of code exercised.
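To make that mechanical loop concrete, here is a minimal sketch in Python. Everything in it is invented for illustration — the `classify` function, the test inputs, the helper names — and a real project would reach for a dedicated coverage tool rather than `sys.settrace`. But it shows the shape of the process: run the existing tests under line tracing, report which executable lines were never touched, and let that report tell you which test to write next.

```python
import dis
import sys

# Toy code under test: one return statement per input class.
def classify(n):
    if n < 0:
        return "negative"
    if n == 0:
        return "zero"
    return "positive"

executed = set()  # line numbers of classify() seen by the tracer

def tracer(frame, event, arg):
    # Record every line executed inside classify(); ignore other frames.
    if event == "line" and frame.f_code is classify.__code__:
        executed.add(frame.f_lineno)
    return tracer

def run_with_coverage(tests):
    # Run (input, expected) pairs against classify() under line tracing.
    sys.settrace(tracer)
    try:
        for arg, expected in tests:
            assert classify(arg) == expected
    finally:
        sys.settrace(None)

def missed_lines():
    # Executable lines of classify() that no test has exercised yet.
    code = classify.__code__
    all_lines = {ln for _, ln in dis.findlinestarts(code)
                 if ln is not None and ln != code.co_firstlineno}
    return sorted(all_lines - executed)

# An incomplete suite: the n == 0 branch is never exercised...
run_with_coverage([(-5, "negative"), (3, "positive")])
missed_before = missed_lines()
print("missed after partial suite:", missed_before)  # the 'return "zero"' line

# ...so the coverage report tells us exactly which test to write next.
run_with_coverage([(0, "zero")])
missed_after = missed_lines()
print("missed after added test:", missed_after)  # nothing left
```

Note how little domain knowledge the loop itself needs: the report is just line numbers, which is exactly why this part of the job is more amenable to being made routine than test design is.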
Now there are all sorts of subtleties here about what code coverage cannot tell you about your code base, and about how combinatorial explosions mean that a test suite can never be complete. I do not propose to go into these issues here. Test coverage is not a panacea, but it is a significant contributor to overall quality.
The fact that scoring and improving test coverage can be made relatively mechanical is, I think, very interesting. I do not see outsourcing of testing as a viable route forward for the computing arts. To my mind, testing is a core competency. However, measuring and improving test coverage is something I could see being viably outsourced.
Which brings us to an unfortunate fact of life. Test coverage tooling seems to me to be very uneven at the moment. Some programming environments are very well served: Java, C#. Some are quite well served, though some more options would not hurt: Python and Ruby, for example. Some are very under-served: StarBasic/OfficeBasic, shell scripting environments, and so on.
Economics will doubtless dictate what happens in the test coverage tools space. If I'm right and "testing the tests" becomes a growth service industry in computing, then the tools to enable it will doubtless follow.