Updates

Important Dates:

  • Nov. 08, 2018:
       RERS at ISoLA'18
  • Oct. 01 – Oct. 14, 2018:
       Solution Submission
  • Aug. 22, 2018:
       Release Parallel LTL
  • Jul. 18, 2018:
       Release Sequential LTL and
       Sequential Reachability
  • Jul. 18, 2018:
       Training benchmarks
       available

The RERS Challenge 2018

Co-located with ISoLA 2018,
Limassol, Cyprus, Nov. 08, 2018

Rigorous Examination of Reactive Systems (RERS)

The RERS Challenge 2018 is the 8th International Challenge on the Rigorous Examination of Reactive Systems and is co-located with ISoLA 2018. RERS encourages software developers and researchers to apply and combine their tools and approaches in a free-style manner in order to answer reachability and LTL verification questions on specifically designed benchmarks. The goal of the challenge is to provide a basis for comparing verification techniques and available tools.

The main aims of RERS 2018 are to:

  • encourage the combination of techniques from different research fields for better software verification results
  • provide a foundation for comparison, based on differently tailored benchmarks, that reveals the strengths and weaknesses of specific approaches
  • initiate a discussion on better benchmark generation that reaches across the usual community barriers and yields benchmarks useful for testing and comparing a wide variety of tools

Each benchmark scenario (a set of verification tasks, often simply called a "problem") of RERS consists of a reactive system in some specification or programming language and a set of properties that participants need to check on that system. Participants only need to submit their proposed solutions to these verification tasks; submitting a tool is not required. What distinguishes RERS from other challenges is that these benchmark scenarios can be approached in a free-style manner: participants are highly encouraged to combine any known and newly conceived approaches to software verification. In particular, they are not constrained to their own tools.
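
To make the task format concrete, the following is a minimal, hand-written sketch of the shape a sequential reachability problem can take. It is not an actual RERS benchmark (all names and values here are invented for illustration, and real generated programs are far larger): the program reads an input stream in a reactive loop, and the question for each error label is whether some input sequence reaches it.

    /* Illustrative sketch only -- not an actual RERS benchmark. */
    #include <stdio.h>
    #include <stdlib.h>

    static int state = 0;                  /* internal state of the reactive system */

    static void calculate_output(int input)  /* hypothetical transition logic */
    {
        if (state == 0 && input == 2) {
            state = 1;
            printf("U\n");                 /* observable output */
        } else if (state == 1 && input == 4) {
            fprintf(stderr, "error_7 ");   /* reachability target */
            exit(1);
        }
        /* ... real benchmarks contain many more generated branches ... */
    }

    int main(void)
    {
        int input;
        while (scanf("%d", &input) == 1)   /* reactive loop: consume inputs */
            calculate_output(input);
        return 0;
    }

A proposed solution would then state, for each error label, whether it is reachable; in this sketch, error_7 is reachable via the input sequence 2 4.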

Available Tracks

Three main tracks are available that are evaluated separately:

  • Sequential Reachability
  • Sequential LTL
  • Parallel LTL

Additionally, a new experimental Parallel CTL track is available. Whereas the Sequential Reachability track concerns the reachability of error labels within a C99 or Java program, the Sequential LTL track asks participants to check whether such a program satisfies certain linear temporal logic (LTL) properties that specify its input/output behavior. The Parallel LTL track provides parallel transition systems as a plain specification (GraphViz .dot file), as a Petri net (PNML), and as Promela code. LTL properties have to be checked on the parallel composition of those transition systems, on the marking graph of the corresponding Petri net, or on the communication traces of the Promela model. In the Promela version, the parallel transition systems are represented as processes with buffer-less and therefore synchronous message passing. The Parallel CTL track uses the same systems as the Parallel LTL track but features three CTL properties that participants can analyze.
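
For illustration, the LTL properties in these tracks constrain the order of input and output symbols. The two formulas below are invented examples of that flavor only; the concrete property syntax and symbol names are defined in the released problem files:

    G (iA -> F oB)    whenever input A occurs, output B eventually follows
    G ! oC            output C never occurs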

Underlying Benchmark Generation

The benchmark scenarios of RERS are automatically synthesized based on formal methods. This makes it possible to generate a new set of benchmark scenarios for each iteration of RERS and thus to minimize the risk of tools overfitting to a certain pool of verification tasks. Because the construction of benchmark scenarios is property-preserving, the solutions to the verification tasks within RERS are known to the organizers and will be published online once the challenge has concluded.

This automatic benchmark generation is what makes a free-style challenge such as RERS possible: because verification tasks are newly generated each year, participants only need to submit their proposed solutions. They can thus freely combine verification techniques without having to provide an executable tool that is evaluated under resource constraints. Furthermore, the hardness of the verification tasks can be scaled along multiple dimensions in order to test the limits of state-of-the-art tools.

Contact

For any questions and inquiries, please contact us at info@rers-challenge.org.

RERS 2018 Organizers

    Scientific committee:

    Benchmark generation committee:
    • Sequential benchmarks: Malte Mues, Technische Universität Dortmund, Germany
    • Parallel benchmarks: Marc Jasper, Technische Universität Dortmund, Germany

    Publicity committee:
    • Jeroen Meijer, University of Twente, the Netherlands