# HyPro - A C++ library for the representation of state sets for the reachability analysis of hybrid systems
The C++ library HyPro, which is a result of the project HyPro funded by the German Research Council, provides easy-to-use implementations of state set representations for the reachability analysis of hybrid systems via flowpipe construction. It provides implementations of the most commonly used state set representations, such as boxes, convex polytopes, support functions, and zonotopes. All implementations conform to the same general interface, which allows the user to easily switch between representations (even during the analysis).
Additionally, commonly used data structures and algorithms required during the analysis are provided. The library features a parser for Flow*-syntax and implementations of basic reachability analysis methods based on flowpipe construction.
On the official project page you can find a collection of benchmarks as well.
## Installation
The latest stable version can be obtained from the master branch. The alpha branch holds a development version - please check the CI status to see whether this version is currently stable. As an alternative to compiling HyPro yourself, a Docker image (for both branches) is available from DockerHub.
<a name="dependencies"></a> Dependencies
To be able to compile HyPro, the following dependencies must be met:
- cmake
- a C++ compiler (g++/clang++)
- CArL
- a Java runtime environment (e.g., an OpenJDK JRE)
- uuid-dev
- iputils-ping
- libz3-dev
### Compiling the library
After installing the dependencies and obtaining the source code from the git repository, HyPro can be configured via CMake and built afterwards. We recommend an out-of-source build:
```shell
$ mkdir build && cd build
$ cmake ..
$ make resources
$ make hypro
```
This minimal build does not compile the tests. To compile and run the tests as well, use the target allTests:

```shell
$ make allTests
$ make test
```
HyPro registers itself in the CMake package registry, which means that further CMake projects depending on HyPro do not necessarily require a system-wide installation of HyPro. Instead, it suffices to call find_package(hypro) from CMake to locate the local build.
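A minimal downstream project could look like the following sketch. The project and target names are placeholders, and the exact name of the exported HyPro target may differ between versions - please consult the HyPro documentation:

```cmake
cmake_minimum_required(VERSION 3.16)
project(my_hypro_project CXX)

# Locates HyPro; a local build registered via the CMake package registry
# is found as well, so no system-wide installation is required.
find_package(hypro REQUIRED)

add_executable(my_tool main.cpp)
# Target name is an assumption; check the name exported by your HyPro build.
target_link_libraries(my_tool hypro)
```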
## Documentation
Currently we provide an <a href="https://ths.pages.rwth-aachen.de/hybrid/hypro-main/" target="_blank">API documentation</a> created with Doxygen as well as a PDF manual.
## Examples & Usage
HyPro ships with several examples which may give you a hint on how to use the library. All examples reside in the examples folder; every source file in this folder prefixed with example_ is automatically assigned a corresponding build target, i.e., the file example_box.cpp can be compiled via the target example_box.
## Running HyPro in Docker
HyPro uses a pre-built Docker image which contains some of the dependencies (for example the CArL library) in order to speed up the Docker build step. This image can be built from the file Dockerfile.carl; a pre-compiled version is available on <a href="https://hub.docker.com/repository/docker/hyprodockeruser/carl/general" target="_blank">Docker Hub</a>. The image is also used to speed up the GitLab CI pipeline.
To build and run HyPro inside a Docker container, run the following commands:

```shell
$ docker build -t hypro .
$ docker run -it hypro bash
```

The first command builds the Docker image and tags it as hypro. The second command starts an interactive shell inside a container based on this image, in which HyPro can be used. Alternatively, you can reference the image by its hash, which you can look up by listing all locally available images (docker images) and searching for the one called hypro.
## Case Studies
The library comes with a selection of case studies which can be used to reproduce experiments or as starting points for interested users. In the following we shortly describe their content and how to run specific case studies.
### Neural Network Verification
HyPro includes four benchmarks that interested users can use to test our reachability analysis algorithm for neural networks with piece-wise linear activation functions. The four benchmarks are: ACAS Xu, drone hovering, thermostat controller, and sonar classifier. In the following section, we list some example commands for running verification on each of these benchmarks. We provide the neural networks in .nnet format and the safety properties in a custom text format. To execute the verification with the corresponding parameters, we offer a convenience script called nnBenchmarkVerification.sh, which is located in the root directory of HyPro. To run the script, we assume that the user's current directory is the build folder.
#### Verification with CEGAR
Besides the previously established exact and over-approximate analysis methods, we implement and publish a counterexample-guided abstraction refinement (CEGAR) approach, providing a <em>complete</em> verification method for feedforward neural networks (FNNs). Using CEGAR, one can efficiently verify FNNs with the guarantee to either certify safety or prove unsafety by providing a counterexample input.
To execute the reachability analysis with CEGAR, one needs to pass cegar as the analysis method to the nnBenchmarkVerification.sh script. Furthermore, the user also needs to specify a heuristic configuration that sets the heuristics used for CEGAR. The heuristics are specified as a string of 4 characters:
- Back-tracing of counterexamples:
  - 'u': Use the UNSAT-core-based back-tracing.
  - 'p': Use the predicate-based back-tracing.
  - 'l': Use the predicate-based back-tracing supported by extra LP checks.
- Preserved origins:
  - 'e': Refine all paths at the given depth of a previously preserved origin.
  - 'p': Refine paths based on a previously preserved origin.
- Index for predicate-based back-tracing:
  - 'f': Refine at the first index (closest to the root).
  - 'l': Refine at the last index (closest to the leaves).
  - 'a': Refine at all indices.
- Pruning via safe histories:
  - 't': Save safe histories and prune paths in the reachability tree.
  - 'f': Do not save safe histories and do not try to prune the tree.
As an example, one can analyze the drone network AC2 with property #2 (i.e., AC2_02) using CEGAR with the heuristic setting peft (i.e., predicate-based back-tracing, depth-wise preserved origins, refining at the first index, and enabling safe histories) as follows:

```shell
$ make example_NN_CEGAR
$ ../nnBenchmarkVerification.sh drones cegar AC2.nnet prop_AC2_02.in safe_AC2_02.in peft
```
or with upaf (i.e., UNSAT Core based tracing, with path-wise preserved origins, refining at all indices and without using safe histories):
```shell
$ make example_NN_CEGAR
$ ../nnBenchmarkVerification.sh drones cegar AC2.nnet prop_AC2_02.in safe_AC2_02.in upaf
```
One can execute CEGAR using other combinations of heuristics by specifying the corresponding 4-character string.
#### ACAS Xu
The well-known ACAS Xu benchmark consists of 45 neural networks, each following the naming convention ACASXU_experimental_v2a_x_y.nnet, where $1 \leq x \leq 5$ and $1 \leq y \leq 9$. All networks have 5 inputs, 5 outputs, and 6 hidden layers with 50 ReLU-activated neurons each. The benchmark comes with 10 safety properties, which are described in section VI of the Appendix of this paper.
To verify a specific property-network combination, you should first compile the corresponding binary file. Then, assuming that the current directory is the build folder, running the over-approximate method on network ACASXU_experimental_v2a_2_4 with safety property 4 can be done as follows:
```shell
$ make example_ACASbenchmark_verification
$ ../nnBenchmarkVerification.sh acasxu overapprox ACASXU_experimental_v2a_2_4.nnet poly_prop4_input.in poly_prop4_safe.in
```
#### Unbounded ACAS Xu Verification
Our implementation supports the analysis of bounded and unbounded input starsets. We provide an unbounded variant of the ACAS Xu property $\phi_4$, that we refer to as $\overline{\phi}_4$. One can test this unbounded benchmark by specifying the unbounded flag as an input to the provided script. For example, to verify the unbounded $\overline{\phi}_4$, on the network ACASXU_experimental_v2a_3_5, use the following commands:
```shell
$ make example_ACASbenchmark_verification
$ ../nnBenchmarkVerification.sh acasxu exact ACASXU_experimental_v2a_3_5.nnet poly_prop4_input.in poly_prop4_safe.in unbounded
```
#### Generalized Activation Function Analysis
Another extension of our tool is the verification of feed-forward neural networks with general piece-wise linear activation functions. To evaluate our implementation, we reuse the ACAS Xu benchmark and interpret each ReLU activation function as a general piece-wise linear function. To run experiments with the generalized analysis, please use the following commands, specifying the generalized flag:
```shell
$ make example_ACASbenchmark_verification
$ ../nnBenchmarkVerification.sh acasxu exact ACASXU_experimental_v2a_2_4.nnet poly_prop
```
