qcomp.org
qcomp.org is the home of the Quantitative Verification Benchmark Set (QVBS), the Comparison of Tools for the Analysis of Quantitative Formal Models (QComp), and the Workshop on Reproducibility and Replication of Research Results (RRRR).
Quantitative Verification Benchmark Set
The Quantitative Verification Benchmark Set is a collection of probabilistic models that serves as a benchmark set for algorithm and tool developers and as the foundation of QComp.
QComp: Quantitative Verification Competition
The Comparison of Tools for the Analysis of Quantitative Formal Models (QComp) is the friendly competition among verification and analysis tools for quantitative formal models. Drawing its benchmarks from the Quantitative Verification Benchmark Set, it compares the performance, versatility, and usability of the participating tools.
- QComp 2020 was part of ISoLA 2020/2021 in Rhodes, Greece.
- QComp 2019 was part of the TACAS 2019 TOOLympics in Prague, Czech Republic.
Read more about the most recent QComp...
Reproducibility and Replication of Research Results
The Workshop on Reproducibility and Replication of Research Results provides a forum to present novel approaches to foster reproducibility of research results, and replication studies of existing work, in the broad area of formal methods research.
- RRRR 2025 will be part of ETAPS 2025 in Hamilton, Canada.
- RRRR 2024 was part of ETAPS 2024 in Luxembourg.
- RRRR 2023 was part of ETAPS 2023 in Paris, France.
- RRRR 2022 was part of ETAPS 2022 in Munich, Germany.
Read more about the upcoming RRRR workshop...
The source code for this website is mostly CC BY 4.0-licensed and available on GitHub.