Continuous and contextual software assessment

Software testing used to be an activity for lesser programmers. It was tedious, it was repetitive, it was everything a respectable programmer would hate. It took two guys, a couple of classes and a nice xUnit naming scheme to turn the game around. Now, you know a respectable programmer by the tests he writes.

The battle was not easy. It took a decade of agility to make tests gain their rightful place. And it is not even a closed chapter, as we can still find skeptics pointing their fingers at the cost of writing tests, and at how it takes away effort that could otherwise be spent on active programming.

Even if writing tests does involve extra costs at first, we know they save us great pain later on, because as we write more code we unwittingly break assumptions made in the existing code. In other words, we accept that we cannot keep track of every detail of the system ourselves, and we make sure the computer can do it for us. And the funny thing is that thinking in terms of tests turns out to make us better programmers, too.

But what exactly makes unit tests so useful? First, the ability to run them continuously makes for a fantastic feedback machine. And second, because they tend to be extremely contextual, they provide the kind of feedback that can lead to immediate action.
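To make the idea of contextual feedback concrete, here is a minimal SUnit sketch. The Invoice class and its messages are hypothetical, but the shape is what matters: the test runs on every change, and when it fails it points directly at the assumption that was broken.

    TestCase subclass: #InvoiceTest
        instanceVariableNames: ''
        classVariableNames: ''
        poolDictionaries: ''
        category: 'MyApp-Tests'

    InvoiceTest >> testTotalIncludesVat
        "Invoice, addItemPriced: and totalWithVat are hypothetical.
        If someone changes how the total is computed, this test fails
        and names the exact expectation that no longer holds."
        | invoice |
        invoice := Invoice new.
        invoice addItemPriced: 100.
        self assert: invoice totalWithVat = 120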

While testing is important, it concerns but one aspect of a software system, namely the expected functionality. However, there are many other aspects that require similar attention. The internal quality of the system's structure is as important. The interaction between the different technologies used is as important. The traceability of features is as important. The conformance to external guidelines and constraints is as important. The performance is as important. The security issues can be as important. Even the cleanness of the build system can be an important issue.

How important are these? For example, various reports claim that up to 50% of the development effort is actually spent on understanding code.

Given that 50% is quite large, you would expect a significant number of tools to be geared towards supporting this activity. In practice, however, developers mostly rely on reading code. Granted, modern IDEs do provide some help, but software systems are large and complex, and relying on reading simply does not scale when we want to understand a system as a whole. For example, a person who reads one line every two seconds would need approximately one month of working days to get through a quarter of a million lines of code (250,000 lines at 2 seconds each is about 139 hours). And this is just for reading the code.

So, do developers actually read the entire code? Not really. They typically limit their reading to a part of the system, while the overview is left to the drawing board in the architect's office. Thus, most decisions tend to be local, and those that are global are mostly based on inaccurate information.

To rectify the situation we need tools that help us crunch this vastness of data. However, not just any tool does the job. Many tools exist to deal with various aspects of data and software analysis, but most of them take the oracle way: they offer a set of predefined analyses that answer standard questions. That is great when you have a standard question. However, it turns out that most of the time our questions are not quite standard. In those situations, regardless of how smart the analysis is, it is of little use to the analyst.

We need customized tools that provide contextual feedback. One reason regression tests are so useful is precisely that they provide contextual feedback. It would be great to create assessment tools just like we now create tests. The only trouble is that the cost of building such tools is too high, or at least it is perceived to be, especially if you have to write them from scratch. A middle ground can be found by having a platform upon which these tools can be built. Drawing on the testing parallel, we would need the equivalent of an xUnit-like platform.

Moose is one such platform, conceived precisely to ease the building of complete and customized assessment tools. While Moose tackles the problem of assessment globally, depending on your goal other solutions exist, too. For example, for querying you can use text-based tools like grep, or more elaborate ones that work on AST information, like PMD. Using platforms like these makes tool building practical.
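To give a flavor of what such a custom check can look like, here is a small sketch of a Moose query, assuming a FAMIX model of the system has already been imported into a variable named model; the metric and the threshold are arbitrary, project-specific choices.

    "Find the classes that concentrate suspiciously much behavior.
    'model' is assumed to hold an already imported FAMIX model;
    the threshold of 50 methods is an arbitrary example."
    | suspects |
    suspects := model allModelClasses select: [ :each |
        each numberOfMethods > 50 ].
    suspects inspect

The value is not in the query itself, but in the fact that a few lines answer your question, not a generic one.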

Once you master such an infrastructure, building a tool costs almost nothing, especially if you compare the building cost with the cost saved by using the tool.

However, one cost still remains and it should not be taken lightly. Just like in the case of unit testing, the greatest challenge is to shift your state of mind from what you do, to how you do what you do.

At first it might feel clumsy, but as you get used to continuous and contextual feedback you will get hooked.

Just give it a try. Leap. I promise that you won’t want to go back.

Posted by Tudor Girba at 26 April 2010, 11:59 pm with tags assessment

Comments

As you said, tests are crucial in any serious software development. However, something that really strikes me is the lack of decent tools for managing and producing tests. The Smalltalk programming environment offers great browsers, inspectors and a debugger to produce business code. I can navigate through my code in many different ways, browsing implementors, senders, along the class hierarchy, ... But what can it do to help me navigate through the tests? To refactor them? To identify and remove duplication of executed scenarios? To order my tests, i.e., to distinguish my unit tests from my integration tests? To measure code coverage? (and no, getting the % of covered code is not enough) Well... SUnit and all the integration it has in the programming environment are useful. We can do much better, however. Smalltalk showed the world how to better develop applications. It has the power to shine by showing how to produce better tests.

Posted by Alexandre Bergel at 27 April 2010, 5:08 am