
Covrig

Framework for the Analysis of Code, Test, and Coverage Evolution in Real Software

What is Covrig?

Covrig is a flexible infrastructure that can be used to run each version of a system in isolation to collect static and dynamic software metrics (code coverage, lines of code), originally developed by Paul Marinescu and Petr Hosek at Imperial College London.

Changelog (April 2023)

  • Upgraded to python3 (incl. deps)
  • Added more examples for containers
  • Added support for differential coverage calculation
  • Rewrote and extended postprocessing graph generation
  • Wrote basic tests for analytics.py and Github CI

Building

To build the project, you will need:

  • Python 3.8 or higher
  • Docker (see https://docs.docker.com/engine/install/ubuntu/)
  • Python packages: docker, fabric 2.7.1, and matplotlib 3.7.0
  • LCOV 2.0 or higher (needed for differential coverage; https://github.com/linux-test-project/lcov)
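
As a rough sanity check (a sketch, not part of Covrig), the pinned package versions can be compared against what is installed using `importlib.metadata`; the `REQUIRED` mapping below is taken from the list above, treating the exact pins as minimums for the purposes of this check:

```python
from importlib.metadata import version, PackageNotFoundError

def meets(installed: str, required: str) -> bool:
    """True if installed version >= required (plain dotted numeric versions only)."""
    to_tuple = lambda v: tuple(int(p) for p in v.split(".")[:3])
    return to_tuple(installed) >= to_tuple(required)

# Minimum versions from the list above.
REQUIRED = {"docker": "0", "fabric": "2.7.1", "matplotlib": "3.7.0"}

def check_deps(required=REQUIRED):
    """Return a dict of packages that are missing or too old."""
    problems = {}
    for pkg, req in required.items():
        try:
            if not meets(version(pkg), req):
                problems[pkg] = f"needs >= {req}"
        except PackageNotFoundError:
            problems[pkg] = "not installed"
    return problems
```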

NOTE: This project was developed on Linux (Ubuntu 20). It may work on other platforms, but this is not guaranteed, since we use shell commands when processing the data (the commands run inside the spawned Docker containers are unaffected).

Covrig works by spawning a series of VMs to run revisions of software, connecting to them automatically over SSH. To set this up, generate an SSH keypair with ssh-keygen and keep the private key in your ~/.ssh directory. Then, for each repo you would like to generate data for, replace the id_rsa.pub file in containers/<repo> with the public key you generated.
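
The key-distribution step can be scripted; the sketch below is not part of Covrig, and the paths are examples to adapt to your checkout. It copies one public key over every containers/<repo>/id_rsa.pub:

```python
import shutil
from pathlib import Path

def install_pubkey(pubkey: Path, containers_dir: Path) -> list:
    """Copy `pubkey` to <containers_dir>/<repo>/id_rsa.pub for every repo directory."""
    updated = []
    for repo_dir in sorted(p for p in containers_dir.iterdir() if p.is_dir()):
        shutil.copyfile(pubkey, repo_dir / "id_rsa.pub")
        updated.append(repo_dir.name)
    return updated

# Example (assumes the default key name produced by `ssh-keygen`):
# install_pubkey(Path.home() / ".ssh" / "id_rsa.pub", Path("containers"))
```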

To build a repo's container from a Dockerfile, run this from the root of the project:

docker build -t <image_name>:<tag> -f containers/<repo>/Dockerfile containers/<repo>

For further analytics (e.g. some graphs), you may need local copies of the repos you are testing to be present in a repos/ directory in the root of the project.


Usage

Gathering Data

python3 analytics.py <benchmark>

Base benchmarks consist of lighttpd, redis, memcached, zeromq, binutils and git circa 2013.

Newly added benchmarks include apr, curl and vim.

The format for these containers is relatively simple. The Dockerfile contains the instructions for building the container.

The full options are

usage: python3 analytics.py [-h] [--offline] [--resume] [--limit LIMIT] [--output OUTPUT] [--image IMAGE] [--endatcommit COMMIT]
                            program revisions

positional arguments:
  program               program to analyse
  revisions             number of revisions to process

optional arguments:
  -h, --help            show this help message and exit
  --offline             process the revisions reusing previous coverage information
  --resume              resume processing from the last revision found in the data file
                        (e.g. data/<program>/<program>.csv)
  --limit LIMIT         limit to n revisions (use the positional argument revisions if not sure)
  --output OUTPUT       output file name
  --image IMAGE         use a particular Docker image for the analysis
  --endatcommit COMMIT  end processing at the given commit. Useful for debugging
                        (e.g. python3 analytics.py --endatcommit a1b2c3d redis 1 can help debug issues with a certain commit).
                        If you know the commit to start at, the commit to end at can be found using the script utils/commit_range.sh

examples:
  python3 analytics.py redis 100
  python3 analytics.py --offline redis 100
  python3 analytics.py --image redis:latest --endatcommit 299b8f7 redis 1
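
The interface above can be mirrored with argparse; this is a simplified sketch of the option parsing for illustration, not the actual analytics.py code:

```python
import argparse

def build_parser() -> argparse.ArgumentParser:
    """A simplified mirror of the analytics.py command line shown above."""
    p = argparse.ArgumentParser(prog="analytics.py")
    p.add_argument("program", help="program to analyse")
    p.add_argument("revisions", type=int, help="number of revisions to process")
    p.add_argument("--offline", action="store_true",
                   help="reuse previous coverage information")
    p.add_argument("--resume", action="store_true",
                   help="resume from the last revision in the data file")
    p.add_argument("--limit", type=int, help="limit to n revisions")
    p.add_argument("--output", help="output file name")
    p.add_argument("--image", help="Docker image to use for the analysis")
    p.add_argument("--endatcommit", metavar="COMMIT",
                   help="end processing at this commit")
    return p

# Parsing the third example from above:
args = build_parser().parse_args(
    ["--image", "redis:latest", "--endatcommit", "299b8f7", "redis", "1"])
```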

Scenario: Nothing works! I need an image!

Solution: The images aren't currently autogenerated when running the scripts, so before running you may need to build the image from the Dockerfiles in containers/. For example, to build the image for Redis, run docker build -t redis:latest -f containers/redis/Dockerfile containers/redis. You can then specify the image (useful when a repo requires multiple images, e.g. lighttpd2) as follows: python3 analytics.py --image redis:latest or python3 analytics.py --image lighttpd2:16.


Scenario: python3 analytics.py redis was interrupted (bug in the code, power failure, etc.)

Solution: python3 analytics.py --resume redis. For accurate latent patch coverage info, also run python3 analytics.py --offline redis (Note: will not work with --endatcommit option)
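
To resume, analytics.py has to recover where the previous run stopped from the CSV. A toy sketch of that idea is below; the assumption that the revision hash sits in the first column is for illustration only and may not match Covrig's real column layout:

```python
import csv
from pathlib import Path
from typing import Optional

def last_revision(csv_path: Path) -> Optional[str]:
    """Return the revision recorded on the last data row, or None if there are no rows.

    Assumes (hypothetically) that the first column of each row is the revision hash.
    """
    last = None
    with open(csv_path, newline="") as f:
        for row in csv.reader(f):
            if row:
                last = row[0]
    return last
```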


Scenario: python3 analytics.py zeromq 300 executed correctly but you realised that you need to analyse 500 revisions

Solution: python3 analytics.py --limit 200 zeromq 500 analyses the previous 200 revisions and appends them to the csv output. postprocessing/regen.sh data/Zeromq/Zeromq.csv repos/zeromq/ will regenerate the output file, putting all the lines in order (you need repos/zeromq to be a valid zeromq git repository). For accurate latent patch coverage info, also run python3 analytics.py --offline zeromq 500
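
The reordering that regen.sh performs can be pictured as sorting the appended rows back into the repository's commit order; the sketch below is illustrative only, with an invented row layout where the revision hash is the first field:

```python
def reorder_rows(rows, commit_order):
    """Sort data rows to match the repository's commit order.

    `rows` are (revision, ...) tuples; `commit_order` lists revision hashes
    oldest-first, as a git log walk would report them.
    """
    index = {rev: i for i, rev in enumerate(commit_order)}
    return sorted(rows, key=lambda r: index[r[0]])
```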


Scenario: I want to analyse a particular revision or set of revisions.

Solution (1): python3 analytics.py --endatcommit a1b2c3d redis 1 will analyse the revision a1b2c3d for redis.

Solution (2): python3 analytics.py --endatcommit a1b2c3d redis 2 will analyse the revision before a1b2c3d and then a1b2c3d itself for redis.


Scenario: The data is collected too slowly! How to help speed it up?

Solution: Use the script utils/run_analytics_parallel.sh <repo> <num_commits> <num_processes> <image> [end_commit]. Example: utils/run_analytics_parallel.sh redis 100 4 redis:latest will run 4 processes in parallel, each processing 25 commits.
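
The even split the script relies on (100 commits across 4 processes gives 25 each) can be sketched as follows; this is just the arithmetic, not the script itself:

```python
def split_commits(num_commits: int, num_processes: int):
    """Divide commits as evenly as possible across worker processes.

    The first `num_commits % num_processes` workers take one extra commit.
    """
    base, extra = divmod(num_commits, num_processes)
    return [base + (1 if i < extra else 0) for i in range(num_processes)]
```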


Scenario: Experiments were executed. How to get meaningful data?

(Old) Solution: Run postprocessing/makeartefacts.sh. Graphs are placed in graphs/, LaTeX defines are placed in latex/

(New) Solution: Run python3 postprocessing/gen_graphs.py <data/dir>. Graphs are placed in graphs/. Example: python3 postprocessing/gen_graphs.py data/Redis/ or python3 postprocessing/gen_graphs.py --dir data to generate graphs for all benchmarks. Ideal file structure is data/Redis/Redis.csv, data/Binutils/Binutils.csv, etc.


<!-- Legacy instructions are listed below -->
<!--
Scenario: How to get non-determinism data?

Solution: Run the same benchmark multiple times
```
for I in 1 2 3 4 5; do python3 analytics.py --output Redis$I redis ; done
```
To get the results, run
```
postprocessing/nondet.sh data/Redis1/Redis.csv data/Redis1 data/Redis2 data/Redis3 data/Redis4 data/Redis5
```

---

Scenario: I have a list of revisions. How do I get more interesting information about them?

Solution: Run
```
./postprocessing/fixcoverage-multiple.sh repos/memcached/ bugs/bugs-memcached.simple data/Memcached/ data/Memcached/Memcached.csv
```
The first argument is a local clone of the target git repository, the second argument is a file with the list of revisions which fix bugs (one per line), the third argument is a folder which contains the results of the analytics.py script, and the optional fourth argument is the analytics .csv output. The output looks like
```
Looked at 46 fixes (1 unhandled): 179 lines covered, 68 lines not covered
4 fixes did not change/add code, 28 fixes were fully covered
only tests/only code/tests and code 0/18/23
```
This can be used to get details about new tests/code. For example, running this on a list of bug-fixing revisions can show how well fixes are tested and whether a regression test is added along with the revision. Running this on a list of bug-introducing revisions may show low coverage.

---

Scenario: I have a list of revisions. How do I get more interesting information about the code from the previous revision?

Solution: As before, but use the `postprocessing/faultcoverage-multiple.sh` script. This can be used to analyse buggy code coverage. Running this on a list of bug-fixing revisions is intuitively similar to running the previous script on a list of revisions introducing the respective bugs.
-->

Differential Coverage

To get pure differential coverage information, run utils/diffcov.sh. Example: utils/diffcov.sh apr remotedata/apr/coverage/ 886b908 8fb7fa4. If your file structure is correct, a quicker option is utils/diffcov_runner.sh, which will also convert the data into CSVs and place them in the relevant directory alongside the original data (e.g. in the data/<repo> directory). These can then be graphed - see below.
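
At its core, differential coverage compares the per-line coverage of a file between two revisions. The toy sketch below illustrates the idea only; the dict-based layout is invented for this example, though the line-to-hit-count mapping mirrors what LCOV's DA records carry:

```python
def differential_coverage(old: dict, new: dict):
    """Compare per-line hit counts of one file between two revisions.

    `old` and `new` map line numbers to hit counts. Returns the lines that
    became covered and the lines that lost coverage.
    """
    gained = sorted(l for l, hits in new.items() if hits > 0 and old.get(l, 0) == 0)
    lost = sorted(l for l, hits in old.items() if hits > 0 and new.get(l, 0) == 0)
    return gained, lost
```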


Generating Graphs

As above, we can generate all the graphs using the gen_graphs.py script.

For example, we can run python3 postprocessing/gen_graphs.py <data/dir>. Graphs are placed in graphs/. Example: python3 postprocessing/gen_graphs.py data/Redis/ for a single repo, or python3 postprocessing/gen_graphs.py --dir data to generate graphs for all benchmarks (note this requires the files to follow the structure below). Ideal file structure is data/Redis/Redis.csv, data/Binutils/Binutils.csv, etc.
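
The expected data/<Repo>/<Repo>.csv layout can be checked programmatically; this is a small helper sketch (the repo names are examples), not part of the postprocessing scripts:

```python
from pathlib import Path

def find_benchmark_csvs(data_dir: Path):
    """Find <data_dir>/<Repo>/<Repo>.csv files, i.e. the layout described above."""
    return sorted(p / f"{p.name}.csv" for p in data_dir.iterdir()
                  if p.is_dir() and (p / f"{p.name}.csv").is_file())
```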

If differential coverage data has been generated as above, run with the optional --diffcov argument to generate graphs for differential data. Example: python3 postprocessing/gen_graphs.py --diffcov --dir remotedata


Generating Tables

Similar to graphs, we can generate the relevant tables using the get_stats.py script.

For example, we can run python3 postprocessing/get_stats.py <data/dir>. Example: python3 postprocessing/get_stats.py data/Redis/ for a single repo, or python3 postprocessing/get_stats.py --dir data to generate tables for all benchmarks.
