A couple of months ago I was invited to give a talk for the Swiss IT Intelligence Community on the topic of metrics in software engineering.
Metrics are fantastic tools. They compress data into numbers, and we are trained from a very young age to deal with and play with such numbers. "My kid is 3 and he knows how to count to 10" is a rather common line in a parent’s repertoire.
Metrics can be powerful analysis tools, but we should not forget that they are but a tool, and that they do not guarantee understanding. Furthermore, metrics are just one analysis technique, and one that reveals only numbers. No serious assessment should stop at metrics.
First, numbers should come as late as possible in the assessment process, and when they do appear they should always represent properties of entities of interest. For example, we should not just play with the number of methods; rather, we should work with a collection of methods that represents the size, and only use the numeric representation of this data at the very end, when we need to present a number. In this way, behind every number we would have the data that produces it, and this in turn would enable us to drill down for further details. If you want to see this principle in action, give the inFusion/inCode tools a try. They have done this for a long time, and I believe it should become a widely adopted feature in metrics tools.
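To make the idea concrete, here is a minimal sketch in Python, not the API of inFusion/inCode or of any particular metrics tool; the Method entity and the drill_down helper are hypothetical names chosen for illustration. The metric keeps the underlying collection of methods, and the count is produced only when a number needs to be shown.

```python
from dataclasses import dataclass


@dataclass
class Method:
    """A hypothetical code entity; a real tool would extract these from source code."""
    name: str
    owner_class: str
    lines_of_code: int


class SizeMetric:
    """Keeps the underlying methods, not just their count,
    so every reported number can be drilled into."""

    def __init__(self, methods):
        self.methods = list(methods)

    def value(self):
        # The number is produced only at presentation time.
        return len(self.methods)

    def drill_down(self, predicate):
        # Narrow the metric to a subset of interest, e.g. large methods.
        return SizeMetric(m for m in self.methods if predicate(m))


methods = [
    Method("render", "ReportPage", 120),
    Method("parse", "ConfigReader", 15),
    Method("save", "ConfigReader", 40),
]

size = SizeMetric(methods)
print(size.value())                                      # 3
large = size.drill_down(lambda m: m.lines_of_code > 50)
print([m.name for m in large.methods])                   # ['render']
```

The point of the design is that the count is merely a view on the data; the entities behind it remain available for further questions.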
Second, especially when we deal with complex and interconnected data, as we do in a software system, we have to go beyond metrics and use other techniques, such as querying, data mining, or visualization. This is the lesson we learnt while building the Moose platform.
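As a small, hedged illustration of what querying adds over a single aggregate number, here is a sketch over hypothetical method records; a real platform such as Moose exposes a much richer model of entities and relationships to query.

```python
# Hypothetical method records, invented for illustration only.
methods = [
    {"name": "render", "class": "ReportPage", "loc": 120, "callers": 7},
    {"name": "parse", "class": "ConfigReader", "loc": 15, "callers": 2},
    {"name": "save", "class": "ConfigReader", "loc": 40, "callers": 0},
]

# Instead of reporting a single "average method size" number, ask a
# concrete question: which methods are large, or never called?
suspicious = [m for m in methods if m["loc"] > 100 or m["callers"] == 0]
print([m["name"] for m in suspicious])  # ['render', 'save']
```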
Third, we should always go beyond tools and interpret the data we get from them. Only through this step can we get to the promised understanding. This is what humane assessment argues for.
Here are the handouts that went with the slides I used.