# Measuring code coverage in Crystal with kcov

Published on February 24, 2019 · 615 words · about 3 min reading time

Crystal, the programming language, does not yet provide a built-in way of measuring the effectiveness of your test suite. So by running crystal spec you pretty much only get binary insight into the suite: it's passing or it's not. This led me to build crytic in the first place. But while mutation coverage is a great tool to investigate the test suite, plain old code coverage is usually quicker to obtain and easier to glance at.

## The players

The issue mentioned above points to two possible libraries: anykeyh/crystal-coverage and SimonKagstrom/kcov. I had been running crystal-coverage for a while and used it as an inspiration quite a few times; after all, it's a Crystal library that injects coverage markers into the Crystal source code before compiling and running the tests. Due to its proof-of-concept nature, however, it has a few limitations and produces varying results and coverage numbers. It's not quite ready for production use yet, but I'm excited about its future, or about a similar concept possibly being integrated into the Crystal compiler itself!

kcov, on the other hand, has nothing to do with Crystal. It works on any binary that has debug information (in DWARF format) available. This is great because it doesn't impose any special requirements on our Crystal program.

## Generating coverage data with kcov

Usage of kcov is very straightforward. The macOS binary has to be built manually, but the instructions are easy enough and work as advertised. Once you have a kcov binary to run, you pass it the command that executes your program, which in this case means the tests, as well as a folder to write the coverage report to.

kcov ./coverage ./run_tests, for example, invokes the program ./run_tests, uses its debug information to track code coverage, and writes the report into the coverage folder.

The typical way to run a Crystal program's tests is crystal spec, which means there is no binary for kcov to analyze. I solve this by creating an entrypoint program to run the tests:

  1. Create a Crystal file requiring all specs: echo 'require "./spec/**"' > run_tests.cr (the resulting file is shown after this list)
  2. Compile that file to a binary: crystal build run_tests.cr -D skip-integration
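The generated entrypoint is nothing more than that single require; the recursive glob pulls in every Crystal file under the spec folder:

```crystal
# run_tests.cr — generated by the echo command above.
# The ./spec/** glob requires every .cr file under ./spec
# (spec_helper and all *_spec.cr files).
require "./spec/**"
```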

The second step also lets you pass compile-time flags, which I use to exclude integration tests from the code coverage analysis. I find that coverage generated by integration tests is low-value and more incidental than not, which is why I usually only consider unit tests for code coverage. A spec guarded by such a flag could look like the sketch below.
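As a minimal sketch (the file name and spec contents are made up for illustration; only the -D skip-integration flag comes from the build command above), an integration spec can be compiled out with Crystal's flag? macro:

```crystal
# spec/integration/deploy_spec.cr — hypothetical integration spec.
# When the binary is built with -D skip-integration, this whole block is
# compiled out, so it neither runs nor shows up in the coverage report.
{% unless flag?("skip-integration") %}
  require "../spec_helper"

  describe "deploying" do
    it "exercises the real system end to end" do
      # slow, environment-dependent assertions live here
    end
  end
{% end %}
```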

## Clean and include path

Two options that kcov provides are important to my workflow. --clean instructs kcov to start every analysis fresh; if it is omitted, results from all runs are accumulated, which is not what I want.

Running the command without --include-path=./src produces a coverage report that also includes the entirety of the Crystal standard library (at least the parts that are required by the program). Usually the report is only interesting for the code you are actually working on.

## Putting it all together

We now have everything we need to generate a code coverage analysis for a Crystal program's test suite. Awesome! Go ahead, look at it and see which parts of your code are not covered by the tests 🍿. As in so many cases, the analysis only becomes useful when it is continually integrated into the development workflow. Luckily for us, kcov produces output that can be understood by various downstream tools. I use codecov.io to upload the report and have it easily browsable online. See this blog's report or crytic's report. The script I use on CI is as follows:

#!/usr/bin/env bash

echo "require "./spec/**"" > run_tests.cr && \
crystal build run_tests.cr -D skip-integration && \
kcov --clean --include-path=$(pwd)/src $(pwd)/coverage ./run_tests && \
bash <(curl -s https://codecov.io/bash) -s $(pwd)/coverage