Updates

  • Feb. 01, 2019:
       Sequential and industrial
       challenge problems released
  • Feb. 01, 2019:
       Parallel challenge problems
       will be released soon
  • Jan. 25, 2019:
       Training problems
       now available

Important Dates:

  • Apr. 06 – 7, 2019:
       RERS at TACAS'19
       (TOOLympics)
  • Mar. 22, 2019:
       Solution submission
  • Feb. 01, 2019:
       Release of challenge
       problems
  • Jan. 25, 2019:
       Training problems
       available

The RERS Challenge 2019

Part of TOOLympics at TACAS'19,
Prague, Czech Republic, April 6 – 7, 2019

Rigorous Examination of Reactive Systems (RERS)

The RERS Challenge 2019 is the 9th International Challenge on the Rigorous Examination of Reactive Systems and is co-located with TACAS 2019. RERS is designed to encourage software developers and researchers to apply and combine their tools and approaches in a free-style manner to answer evaluation questions for reachability and LTL formulas on specifically designed benchmarks. The goal of this challenge is to provide a basis for the comparison of verification techniques and available tools.

The main aims of RERS 2019 are:

  • Encourage the combination of usually different research fields for better software verification results.
  • Provide a comparison foundation based on differently tailored benchmarks that reveals the strengths and weaknesses of specific approaches.
  • Initiate a discussion for better benchmark generation reaching out across the usual community barriers to provide benchmarks useful for testing and comparing a wide variety of tools.

Each benchmark scenario (set of verification tasks, often simply called "problem") of RERS consists of a reactive system in some specification or programming language and a set of properties that participants need to check on that given system. Participants of RERS only need to submit their proposed solutions to these verification tasks. A submission of a tool is not required. What distinguishes RERS from other challenges is that these benchmark scenarios can be approached in a free-style manner: It is highly encouraged to combine any known and newly conceived approaches to software verification. In particular, participants are not constrained to their own tools.

Available Tracks

Seven main tracks are available, each evaluated separately: Sequential Reachability, Sequential LTL, Parallel LTL, Parallel CTL, Industrial Reachability, Industrial LTL, and Industrial CTL.

The new Industrial LTL/CTL/Reachability tracks contain programs that are based on real-world models of controller software provided by ASML. More information on these industrial tracks is available on the challenge website.

RERS 2019 features prizes and other rewards. For the industrial tracks of RERS 2019, ASML and ESI sponsor up to 3000 Euros in prize money.

Whereas the reachability tracks concern the reachability of error labels within a C99 or Java program, the sequential and industrial LTL tracks ask participants to check whether such a program satisfies certain linear temporal logic (LTL) properties that specify its input/output behavior.
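As a hypothetical miniature of such a reachability task (the names, transitions, and error label below are invented for illustration; real RERS programs are far larger and auto-generated), the following Python sketch models a reactive system that consumes input symbols and searches for an input sequence that reaches an error label:

```python
from collections import deque

# Input alphabet of the toy reactive system (invented for this sketch).
INPUTS = ["A", "B", "C"]

def step(state, symbol):
    """One transition: returns (new_state, output).
    Raising an exception models reaching an error label."""
    if state == 0 and symbol == "A":
        return 1, "X"
    if state == 1 and symbol == "B":
        return 2, "Y"
    if state == 2 and symbol == "C":
        raise AssertionError("error_42")  # the error label under analysis
    return state, "Z"

def error_reachable(max_depth=10):
    """Breadth-first search over input sequences for a reachable error label."""
    queue = deque([(0, [])])
    seen = {0}
    while queue:
        state, trace = queue.popleft()
        if len(trace) >= max_depth:
            continue
        for sym in INPUTS:
            try:
                nxt, _ = step(state, sym)
            except AssertionError:
                return trace + [sym]  # witness input sequence
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, trace + [sym]))
    return None  # no error label found within the depth bound

print(error_reachable())  # → ['A', 'B', 'C']
```

Explicit-state search like this quickly hits its limits on the larger benchmark instances, which is precisely why RERS encourages combining it with symbolic and other techniques.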

The parallel tracks provide transition systems as a plain specification (GraphViz .dot file), as a Petri net (PNML), and as Promela code. LTL/CTL properties have to be checked on the parallel composition of these transition systems, on the marking graph of the corresponding Petri net, or on the communication behavior within the Promela code. In the Promela version, the parallel transition systems are represented as processes with buffer-less and therefore synchronous message passing.
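The parallel composition mentioned above can be sketched in a few lines of Python: two labeled transition systems synchronize on their shared actions (rendezvous) and interleave on all others. The tiny systems and action names here are invented for illustration only.

```python
from collections import deque

# Each LTS: (initial state, {(state, action): next_state}); deterministic
# per (state, action) to keep the sketch short.
T1 = ("p0", {("p0", "send"): "p1", ("p1", "a"): "p0"})
T2 = ("q0", {("q0", "send"): "q1", ("q1", "b"): "q0"})

def actions(trans):
    return {a for (_, a) in trans}

def compose(lts1, lts2):
    """Explicit-state construction of the synchronous parallel composition."""
    (i1, d1), (i2, d2) = lts1, lts2
    shared = actions(d1) & actions(d2)  # rendezvous actions
    init = (i1, i2)
    states, edges = {init}, []
    queue = deque([init])
    while queue:
        s1, s2 = queue.popleft()
        succs = []
        for (s, a), t in d1.items():
            # Shared actions fire only if both systems can take them.
            if s == s1 and (a not in shared or (s2, a) in d2):
                succs.append((a, (t, d2[(s2, a)] if a in shared else s2)))
        for (s, a), t in d2.items():
            # Non-shared actions of the second system interleave freely.
            if s == s2 and a not in shared:
                succs.append((a, (s1, t)))
        for a, nxt in succs:
            edges.append(((s1, s2), a, nxt))
            if nxt not in states:
                states.add(nxt)
                queue.append(nxt)
    return init, states, edges

init, states, edges = compose(T1, T2)
print(sorted(states))  # 4 reachable product states
```

The product's state space grows multiplicatively with the number of components, which is the state-explosion problem that the parallel tracks are designed to stress.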

Underlying Benchmark Generation

The benchmark scenarios of RERS are automatically generated based on formal methods. In some cases, such as the new industrial tracks, this generation expands an existing real-world model. This automatic generation allows us to create a new set of benchmark scenarios for each iteration of RERS and thus minimizes the risk of tools overfitting to a certain pool of verification tasks. Because the construction of benchmark scenarios is property-preserving, the solutions to the verification tasks within RERS are known to the organizers and will be published online once the challenge has concluded.

This automatic benchmark generation is what makes a free-style challenge such as RERS possible: because verification tasks are newly generated each year, participants only need to submit their proposed solutions. Participants can thus freely combine verification techniques without providing an executable tool that is evaluated under resource constraints. Furthermore, the hardness of the verification tasks can be scaled along multiple dimensions to test the limits of state-of-the-art tools.

Contact

For any questions and inquiries, please contact us at info@rers-challenge.org.

RERS 2019 Organizers

    Scientific committee:

    Benchmark generation committee:
    • Sequential benchmarks: Malte Mues, Technische Universität Dortmund, Germany
    • Parallel benchmarks: Marc Jasper, Technische Universität Dortmund, Germany
    • Industrial benchmarks:
      • Dennis Hendriks, ESI (TNO), Netherlands
      • Harco Kuppens, Radboud University, Netherlands

    Publicity committee:
    • Jeroen Meijer, University of Twente, Netherlands