My thoughts on the case study “On the Relation Between Unit Testing and Code Quality”
In August 2017, a case study called “On the Relation Between Unit Testing and Code Quality” was published. There are widely differing opinions on unit testing: some developers will not code without unit tests, while others don’t think they are worth the time and money. So what did the study find?
The Case Study
The study looked at a multinational networking and telecommunications equipment company. It examined real-life projects and the data the company could provide at the time: test coverage percentage per file, and bugs tracked down to specific lines of code in each file. Bugs per file were calculated from version control commits tagged with bug-fix tags, as company policy required.
They only included files that had either 0% or 100% test coverage, 680 files in total. The bugs per file were measured by the number of modifications that were tagged as bug fixes in the version control system.
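As a rough sketch of how “bugs per file” could be derived from such a commit history (the “[bugfix]” tag and the log format here are my own assumptions for illustration, not details from the study):

```python
from collections import Counter

def bugs_per_file(commits):
    """Count bug-fix-tagged commits touching each file.

    `commits` is a list of (message, files_changed) tuples.
    The "[bugfix]" tag is a made-up convention, standing in for
    whatever tagging policy the company actually used.
    """
    counts = Counter()
    for message, files_changed in commits:
        if "[bugfix]" in message.lower():
            for path in files_changed:
                counts[path] += 1
    return counts

# Hypothetical commit history:
log = [
    ("[bugfix] Fix null check in parser", ["src/parser.c"]),
    ("Add feature X", ["src/feature.c", "src/parser.c"]),
    ("[bugfix] Off-by-one in parser loop", ["src/parser.c"]),
]
print(bugs_per_file(log))  # Counter({'src/parser.c': 2})
```

The important point is that this metric counts *fixes*, not bugs directly, so it only sees defects that were found, reported, and repaired.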
The company in the study did not have specific code coverage targets. Instead, tests were written where they were judged to be important. Most likely the files with 0% coverage were quite simple and non-critical, and the files with 100% coverage were complicated and more critical. Unfortunately, the study doesn’t specify the general content of these files. If a Model-View-Controller pattern was used, the files with 0% coverage could be views, meaning they mainly consist of simple display logic, code that is not very error-prone.
The code that was 100% covered with unit tests could have been the controller part, and could consist of algorithms for all we know. That is code that is very hard to get right in the first place, so it makes sense to aim for 100% unit test coverage there.
Unfortunately, at this point we do not know the contents of these files, so this is just guesswork. The authors even wrote in the study that more research needs to be done before we can draw any conclusions.
This issue is addressed in the study. As a solution, they held three meetings with 10–14 software designers, who were not informed of the study. From the meetings they concluded that the amount of tests written depended on “individual software developers’ own conviction of how much unit tests that should be written”. Design architects also sometimes made recommendations to write tests for particular code areas. The study states that “We did, however, not find any indication that the software designers’ choices were dependent on factors like complexity, size, or perception of error-proneness.”
I would take this statement with a grain of salt. I suppose it is possible that some people just like to write unit tests and others don’t, but I find it hard to believe that complexity or error-proneness did not affect the developers’ choice to write unit tests. As a developer, I don’t want to be the guy that broke the app. It’s also possible that more complex tasks gravitated toward developers who write tests. With some of the fairly complex functions that every serious application has, I’d find it absurd not to write unit tests. It’s very hard to make sure everything works without them.
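As a trivial illustration of that last point (the function and tests below are entirely my own example, not code from the study), even a small function hides edge cases that are much easier to trust once a few unit tests pin them down:

```python
import unittest

def clamp(value, low, high):
    """Clamp value into the range [low, high]."""
    if low > high:
        raise ValueError("low must not exceed high")
    return max(low, min(value, high))

class ClampTests(unittest.TestCase):
    def test_inside_range(self):
        self.assertEqual(clamp(5, 0, 10), 5)

    def test_below_and_above(self):
        # The boundary cases are exactly where bugs tend to hide.
        self.assertEqual(clamp(-3, 0, 10), 0)
        self.assertEqual(clamp(42, 0, 10), 10)

    def test_invalid_bounds(self):
        with self.assertRaises(ValueError):
            clamp(1, 10, 0)

if __name__ == "__main__":
    unittest.main()
```

Scale this up to genuinely complex logic with dozens of branches, and checking every path by hand quickly becomes hopeless.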
My interpretation of the study
It could be interpreted from the study that unit tests do work. When the developers didn’t write tests, the files had 2.9% more bugs. Not a lot, but I would assume this was simple code, like constant files. I would also argue that the code developers thought needed 100% test coverage was critical and complicated. The fact that it had the same number of bugs or fewer than normal code would suggest that unit tests do reduce bugs: with tests, the super complicated code had fewer bugs than the simple code. However, I don’t find the study’s method very reliable either way, so, as usual, more studies are needed.
When was the last time you wrote unit tests? Did it help? Am I jumping to conclusions?