The why and how of measuring software quality

My last post was about what makes information retrieval software valuable, and one of the most important cross-cutting factors in creating valuable software is quality. This is something I want to talk a bit more about, because it’s one of the things that motivates me to do my job well – and one of the things that frustrates me so much when it’s ignored.

Assessing quality may seem trivial – if a product looks good, is pleasant to use and serves its purpose, it’s a high-quality product, right? … Well, not necessarily. What about criteria like maintainability, adaptability and robustness? These are things that are hard to see up front when it comes to software, and things a user doesn’t directly care about either. But in the long run they affect users and developers alike, and they determine the lifespan of a product.

So having some idea of the quality of a code-base is clearly important. The next question is: how do you measure it?

For decades people have tried, with varying degrees of success, to measure code quality. There are many tools that will flag anti-patterns in your code, tell you how well teams are performing and give you an idea of how well your code is covered by automated tests. These are great advances in helping to build good software. The problem is that they also give us a false sense that we are on top of things and everything is OK, simply because we have these metrics available – and hey, if we plot them on a graph, the esoteric line of quality keeps going up and up over time.

A reality check

It’s my considered opinion – one that may come back to haunt me – that if you think you can improve a product’s quality while mindlessly adding features to the code-base, you’re full of shit. Just because the new ‘whatsit-widget’ in your product was written alongside some unit tests doesn’t mean you’re winning any kind of war against technical debt.

How about these for metrics? [Many of these you can probably confirm by looking at your version control history – there’s a rough sketch of how after the list.]

  • how many times in the last year did developers at your organisation spend two weeks or more solely re-factoring code?
  • can you compare how long it takes to make modifications to existing components, features or classes?
  • how long does it take for a developer unfamiliar with a particular feature to feel comfortable changing the code behind it?
  • how many elements of the software are ‘owned’ – only ever modified by one developer?
  • can you describe a feature or a class in a few (three or fewer) short sentences?

Bear in mind that these are not necessarily linear metrics – a really low number for ‘two weeks or more spent re-factoring’ would suggest not enough time is being allocated to improving quality, while a very high number would suggest technical debt is never being repaid.
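To make this concrete, here’s a minimal sketch of how you might approximate two of these from git history using Python. Everything in it is an assumption on my part: grepping commit messages for ‘refactor’ over a one-year window is a crude proxy for the first metric, and treating ‘only one author ever’ as ownership is an equally crude proxy for the fourth.

    import subprocess

    def refactoring_commits_last_year():
        """Crude proxy for the first metric: commits in the last year
        whose messages mention 'refactor' (case-insensitive)."""
        out = subprocess.run(
            ["git", "log", "--since=1 year ago", "--oneline",
             "-i", "--grep=refactor"],
            capture_output=True, text=True, check=True,
        ).stdout
        return len(out.splitlines())

    def single_author_files():
        """Crude proxy for the fourth metric: tracked files whose entire
        history has exactly one author. Spawns one git process per file,
        so it will be slow on large repositories."""
        files = subprocess.run(
            ["git", "ls-files"],
            capture_output=True, text=True, check=True,
        ).stdout.splitlines()
        owned = []
        for path in files:
            # One author email per commit that touched this path.
            authors = subprocess.run(
                ["git", "log", "--follow", "--format=%ae", "--", path],
                capture_output=True, text=True, check=True,
            ).stdout.split()
            if len(set(authors)) == 1:
                owned.append(path)
        return owned

    if __name__ == "__main__":
        print("refactoring commits in the last year:",
              refactoring_commits_last_year())
        print("single-author files:", len(single_author_files()))

Run it from the root of a repository. The absolute numbers won’t mean much on their own – what matters is how they trend over time and how they compare across projects.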

The above are just suggestions of the sorts of obvious indicators I think should be looked at when measuring code quality. I am by no means an expert on the subject. At the end of the day, if a software organisation pays some attention to quality and has a reasonable degree of visibility of the quality of its products (both on the surface and underneath), it is helping itself to build better software.
