Results
The RERS Challenge 2017 has ended with the formal announcement of the winners at the RERS Workshop 2017, co-located with ISSTA and SPIN 2017.
On this page we present the seven teams that participated in this year's challenge and list the corresponding results.
To provide a more fine-grained view, we also display per-category results for those teams that participated in the ranking.
For the competition itself, however, only the results per track (Sequential Reachability, Sequential LTL, Parallel LTL) were considered. For more details on the evaluation, please refer to the challenge rules.
List of participants
- LMU Munich
  Karlheinz Friedberger
  LMU Munich, Germany
  Tools: CPAchecker (developer)
- Nimble Research
  Gerard Holzmann
  Nimble Research, USA
  Tools: Swarm (developer), Spin (developer)
- Twente
  Jeroen Meijer and Jaco van de Pol
  University of Twente, The Netherlands
  Tools: LTSmin (developer), LearnLib
- Lübeck
  Anton Pirogov
  University of Lübeck, Germany
  Tools: flat-checker (developer)
- LLNL
  Markus Schordan, Marc Jasper, and Maximilian Fecke
  Lawrence Livermore National Laboratory, USA
  Tools: CodeThorn (developer)
- Radboud/Luxembourg
  Christian Hammerschmidt (2), Joshua Moerman (1), and Rick Smetsers (1)
  1: Radboud University Nijmegen, The Netherlands
  2: University of Luxembourg, Luxembourg
  Tools: LearnLib, test suite based on adaptive distinguishing sequences (developer), AFL, FlexFringe (developer)
- VSL Delaware
  Yihao Yan
  VSL, University of Delaware, USA
  Tools: Tool based on GMC (developer)
Achievements
| Category | LMU Munich | Nimble Research | Twente | Lübeck | LLNL | Radboud/Luxembourg | VSL Delaware |
|----------|------------|-----------------|--------|--------|------|--------------------|--------------|
| Reachability: Plain | | | | | | | |
| Reachability: Arithmetic | | | | | | | |
| Reachability: Data Structures | | | | | | | |
| LTL: Plain | | | | | | | |
| LTL: Arithmetic | | | | | | | |
| LTL: Data Structures | | | | | | | |
| Parallel LTL: Concurrency Ladder | | | | | | | |
Ranking evaluation
The following tables list the results of the RERS 2017 ranking evaluation. In addition to the track scores, ranking results can be viewed for each category individually. Note that these category-specific scores are provided as additional information only and did not matter for the ranking; only the scores per track were considered. Note also that a team's track score may differ from the sum of all corresponding track categories due to the exponential penalty for wrong results.
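To illustrate why a track score can diverge sharply from the sum of its category scores, the sketch below models a score as the points for correct answers minus a penalty that grows exponentially with the number of wrong answers. This is only an illustration consistent with the tables below, not the official RERS formula; the exact scoring (including the `2 ** wrong_answers` penalty assumed here) is defined in the challenge rules.

```python
# Illustrative sketch of exponential wrong-answer penalties.
# ASSUMPTION: a 2 ** wrong_answers penalty; the official RERS scoring
# is defined in the challenge rules and may differ.

def track_score(correct_points, wrong_answers):
    """Points for correct answers minus an exponentially growing penalty."""
    penalty = 2 ** wrong_answers if wrong_answers > 0 else 0
    return sum(correct_points) - penalty

# With no wrong answers, the score is simply the sum of the points:
print(track_score([3, 5, 2], 0))   # 10
# Even a modest number of wrong answers dominates the positive points:
print(track_score([3, 5, 2], 10))  # 10 - 1024 = -1014
```

Under such a scheme, a team with many correct answers but a handful of wrong ones can still end up with a strongly negative score, which is why wrong answers are far more costly than skipped properties.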
For detailed information about the solved properties, you can download a .zip archive that contains the normalized answers of those teams that participated in the ranking, as well as the official solutions, as .csv files. The solutions to this year's sequential training problems are included; these training problems are enumerated as 91-96 instead of 1-6 in the solution files.
Solutions with counterexamples for the sequential tracks can be found on the Sequential LTL and Sequential Reachability pages.
Sequential Reachability: Track ranking
| Rank | Name | Score | Wrong answers |
|------|------|-------|---------------|
| 1 | Twente | 402 | 0 |
| 2 | LLNL | 339 | 0 |
| 3 | Radboud/Luxemb. | 245 | 0 |
| 4 | LMU Munich | -42 | 8 |
Sequential LTL: Track ranking
| Rank | Name | Score | Wrong answers |
|------|------|-------|---------------|
| 1 | LLNL | 726 | 0 |
| 2 | Twente | 540 | 0 |
| 3 | Radboud/Luxemb. | -268434584 | 28 |
Parallel LTL: Track ranking
| Rank | Name | Score | Wrong answers |
|------|------|-------|---------------|
| 1 | Nimble Research | 264 | 2 |