Workshops & Tutorials

Workshops


Applications of Formal Methods and Digital Twins

Organizers: Stefan Hallerstede, Eduard Kamburjan


FMTea

Organizers: Catherine Dubois, Pierluigi San Pietro


Overture

Organizers: Hugo Daniel Macedo, Ken Pierce


Tutorials


T1 – Hybrid System Falsification: Fundamentals and Advanced Topics (pdf slides, video demo)

Monday, March 6
9:00am – 12:30pm (coffee break 10:30am – 11:00am)

Speakers: Zhenya Zhang, Paolo Arcaini, Ichiro Hasuo

Abstract

Cyber-physical systems (CPS), a.k.a. hybrid systems, combine continuous and discrete dynamics. They are used in safety-critical domains such as transportation and healthcare, so their quality assurance is of primary importance. However, quality assurance of hybrid systems is challenging, as their formal verification is hard, if not impossible. Falsification is a more feasible technique that, instead of pursuing a formal proof of system correctness, tries to find a counterexample that refutes the system specification. Specifically, optimization-based falsification adopts search-based techniques that exploit the quantitative semantics of temporal logics (e.g., Signal Temporal Logic (STL)) to guide the search. Falsification has been highly successful in academia and industry, with notable applications at Toyota, Bosch, Airbus, and Volvo. The tutorial will give a comprehensive overview of falsification, including the fundamentals and some advanced topics:

  • Fundamentals and the falsification framework: the motivation for falsification, and background concepts, including STL robust semantics and optimization-based falsification.
  • Advanced topics: search heuristics for improving the search for a falsifying input (balancing the exploration vs. exploitation tradeoff); techniques for handling the scale problem of STL robust semantics; and techniques for handling input constraints.
  • Use of the falsification tool ForeSee.
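
Below is a rough, illustrative sketch of the optimization-based falsification loop described above: a toy vehicle model and a plain random-search optimizer, written for this page only (it is not part of the tutorial material or of the ForeSee tool). The specification is the STL-style property "the speed always stays below 120"; its robustness is the worst-case margin over the trace, and a negative robustness value constitutes a counterexample.

import random

def simulate(throttle, steps=200, dt=0.1):
    # Toy vehicle dynamics (assumed for illustration): speed trace under a constant throttle.
    speed, trace = 0.0, []
    for _ in range(steps):
        speed += dt * (15.0 * throttle - 0.1 * speed)
        trace.append(speed)
    return trace

def robustness(trace, limit=120.0):
    # Robust semantics of "always speed < limit": the worst-case margin over the trace.
    return min(limit - s for s in trace)

def falsify(budget=200):
    # Random search stands in for the optimizer; real falsifiers use smarter
    # heuristics (hill climbing, CMA-ES, ...) to minimize robustness.
    best_input, best_rob = None, float("inf")
    for _ in range(budget):
        throttle = random.uniform(0.0, 1.0)
        rob = robustness(simulate(throttle))
        if rob < best_rob:
            best_input, best_rob = throttle, rob
        if best_rob < 0:
            break  # negative robustness: the specification is falsified
    return best_input, best_rob

print(falsify())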

About the Speakers

Zhenya Zhang is an assistant professor at Kyushu University, Japan. He obtained his PhD from the National Institute of Informatics, the Graduate University for Advanced Studies (SOKENDAI), Japan, in 2020. During his PhD, he was a research assistant in the ERATO MMSD project and a JSPS research fellow. His research topics include verification and testing of cyber-physical systems.

Paolo Arcaini is a project associate professor at the National Institute of Informatics (NII), Tokyo, Japan. He received his PhD in Computer Science from the University of Milan, Italy, in 2013. Before joining NII, he held an assistant professor position at Charles University, Czech Republic. His main research interests are search-based testing, fault-based testing, model-based testing, software product lines, and automatic repair.

Ichiro Hasuo is a Professor at the National Institute of Informatics (NII), Tokyo, Japan. He is also the Research Director of the JST ERATO “Metamathematics for Systems Design” Project and the Director of the Global Research Center for Systems Design and Mathematics at NII. He received his PhD (cum laude) in Computer Science from Radboud University Nijmegen, the Netherlands, in 2008, where he was affiliated with the Security of Systems group and supervised by Prof. Dr. Bart Jacobs. His research interests include mathematical (logical, algebraic, and categorical) structures in software science (esp. formal methods); abstraction and generalization of deductive and automata-theoretic verification techniques; integration of formal methods and testing; and their application to cyber-physical systems and systems with statistical machine learning components.


T2 – Verification of Deep Neural Networks (pdf slides)

Monday, March 6
9:00am – 12:30pm (coffee break 10:30am – 11:00am)

Speaker: Daniel Neider

Abstract

Artificial Intelligence has become ubiquitous in modern life. This “Cambrian explosion” of intelligent systems has been made possible by extraordinary advances in machine learning, especially in the training of deep neural networks and in their ingenious architectures. However, like traditional hardware and software, neural networks often have defects, which are notoriously difficult to detect and correct. Consequently, deploying them in safety-critical settings remains a substantial challenge.

Motivated by the success of formal methods in establishing the reliability of safety-critical hardware and software, numerous formal verification techniques for deep neural networks have emerged recently. These techniques can generally be placed on a spectrum, with deductive methods at one end and abstraction-based methods at the other. As a guide through the vibrant and rapidly evolving field of neural network verification, this tutorial will

– give an overview of the fundamentals and core concepts of the field;
– discuss prototypical examples of various existing verification approaches; and
– showcase state-of-the-art tools.

In addition, it tries to shed light on two crucial yet often neglected questions in neural network verification: what are relevant specifications for neural networks, and how can one formalize them?
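
As a small, self-contained illustration of the abstraction-based end of this spectrum (an example written for this page, not taken from the tutorial), the following sketch propagates interval bounds through a tiny hand-written ReLU network to check that its output stays below a threshold for every input in a box. Because intervals over-approximate the network's behaviour, a positive answer is a proof, while a violated bound would be inconclusive.

def affine_bounds(lo, hi, weights, bias):
    # Propagate per-input intervals [lo, hi] through y = W*x + b.
    out_lo, out_hi = [], []
    for w_row, b in zip(weights, bias):
        out_lo.append(b + sum(w * (lo[i] if w >= 0 else hi[i]) for i, w in enumerate(w_row)))
        out_hi.append(b + sum(w * (hi[i] if w >= 0 else lo[i]) for i, w in enumerate(w_row)))
    return out_lo, out_hi

def relu_bounds(lo, hi):
    return [max(0.0, l) for l in lo], [max(0.0, u) for u in hi]

# Hypothetical 2-2-1 ReLU network and input region x in [0, 1] x [0, 1].
W1, b1 = [[1.0, -1.0], [0.5, 0.5]], [0.0, -0.2]
W2, b2 = [[1.0, 1.0]], [0.0]

lo, hi = [0.0, 0.0], [1.0, 1.0]
lo, hi = relu_bounds(*affine_bounds(lo, hi, W1, b1))
lo, hi = affine_bounds(lo, hi, W2, b2)
print("output bounds:", lo, hi)
print("property 'output <= 2' proved?", hi[0] <= 2.0)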

About the Speaker

Daniel Neider is the professor of “Verification and Formal Guarantees of Machine Learning” at TU Dortmund University, Germany, and the Center for Trustworthy Data Science and Security. Prior to his current position, Daniel Neider led the “Safety and Explainability of Learning Systems” group at the Carl von Ossietzky University of Oldenburg, also located in Germany. He received his Ph.D. in Computer Science from RWTH Aachen University (Germany) in 2014, where he developed learning-based methods for software verification. Subsequently, he spent two years as a Postdoc at the University of Illinois at Urbana-Champaign (USA) and the University of California, Los Angeles (USA). Before joining the University of Oldenburg, Daniel headed the “Logic and Learning” group at the Max Planck Institute for Software Systems in Kaiserslautern (Germany). His research interests lie at the intersection of formal methods and machine learning, focusing specifically on the safety and reliability of learning systems.


T3 – Machine Learning Guided Program Synthesis

Monday, March 6
2:00pm – 5:30pm (coffee break 3:30pm – 4:00pm)

Speaker: Nathanaël Fijalkow

Abstract

The dream of automatically synthesising programs from their specifications is a long-term objective of artificial intelligence. Long deemed impractical, program synthesis is becoming a reality thanks to modern approaches based on machine learning. The success story of FlashFill shows the high usability of programming by example, one instance of program synthesis: FlashFill assists Microsoft Excel users by finding short programs that automate laborious spreadsheet tasks.

This tutorial focuses on a key methodological principle: using machine learning models as guides, or heuristics, for search algorithms. Given a specification as input, the solution proceeds in two steps: first, query a machine learning model to get predictions about the target program; second, use these predictions in a search algorithm. Program synthesis using machine learning is a very active research area with both theoretical and practical developments. In recent years the term “program synthesis” has featured in a dozen articles in each of the top machine learning venues (NeurIPS, ICLR, ICML, AISTATS) and in programming languages venues (POPL, PLDI).

The main appeal of this tutorial is that the key idea of using machine learning to guide search algorithms extends well beyond program synthesis. This methodology is developing quickly in many other areas, and the goal of the tutorial is to highlight the lessons learned in program synthesis and the key challenges ahead.
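
The following toy sketch (a made-up three-operator DSL and a stubbed "model", written for this page and unrelated to FlashFill) illustrates these two steps: a prediction over DSL operators, here hard-coded where a neural network would normally be queried, drives a best-first enumeration that returns the most likely program consistent with the input-output examples.

import heapq
import math

DSL = {
    "add1":   lambda x: x + 1,
    "double": lambda x: x * 2,
    "square": lambda x: x * x,
}

def predict_operator_probs(examples):
    # Placeholder for a learned model: in practice a neural network conditioned
    # on the examples would produce this distribution over operators.
    return {"add1": 0.5, "double": 0.3, "square": 0.2}

def run(program, x):
    for op in program:
        x = DSL[op](x)
    return x

def synthesize(examples, max_len=4):
    probs = predict_operator_probs(examples)
    # Best-first search: the priority is the negative log-probability of the
    # operator sequence, so the most likely programs are tried first.
    queue = [(0.0, [])]
    while queue:
        cost, program = heapq.heappop(queue)
        if program and all(run(program, x) == y for x, y in examples):
            return program
        if len(program) < max_len:
            for op, p in probs.items():
                heapq.heappush(queue, (cost - math.log(p), program + [op]))
    return None

# Target behaviour: f(x) = (x + 1) * 2
print(synthesize([(1, 4), (3, 8), (5, 12)]))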

About the Speaker

Nathanaël Fijalkow is a CNRS researcher (equivalent to Associate Professor) at LaBRI, the computer science laboratory in Bordeaux, France. His research interests are centered on automated synthesis, which includes both reactive synthesis (also called controller synthesis) and program synthesis. He was the principal investigator of a three-year (2019–2021) CNRS Momentum research grant, DeepSynth, whose goal was to investigate the use of machine learning for program synthesis.


T4 – Reasoning with Quantified Boolean Formulas

Monday, March 6
9:00am – 5:30pm (coffee break 10:30am – 11:00am and 3:30pm – 4:00pm, lunch break 12:30pm – 2:00pm)

Speaker: Adrián Rebola-Pardo

Abstract

Many application problems from artificial intelligence, verification, and formal synthesis can be efficiently encoded as quantified Boolean formulas (QBFs), the extension of propositional logic with quantifiers. This extension makes the QBF decision problem PSPACE-complete (in contrast to SAT, which is NP-complete).
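
For a small illustration (an example added here, not taken from the tutorial material): the QBF ∀x ∃y. ((x ∨ ¬y) ∧ (¬x ∨ y)) is true, since for every value of x one can choose y = x, whereas swapping the prefix to ∃y ∀x yields a false formula; unlike in propositional logic, the order of the quantifiers matters.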

In recent years, much progress has been made in both the theory and practice of QBF research. In this tutorial, we review the state of the art in QBF technology and show how to perform automated reasoning with QBFs.

The tutorial consists of two largely independent parts. In the first part, the focus will be on encoding application problems. In particular, we will learn how to efficiently formulate verification, planning, and synthesis tasks as QBFs.

In the second part, we will introduce the technologies implemented in recent QBF solvers, including efficient preprocessing techniques, QCDCL, and expansion-based solving. Furthermore, we will explain why the results of certain QBF solvers can always be trusted, and how to obtain solutions to the application problems (e.g., the plans of a planning problem).
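
As a small illustration of expansion-based solving (again an example added here, not from the tutorial material): a universal variable can be eliminated by instantiating it with both truth values, e.g., ∀x ∃y. φ(x, y) is equivalent to ∃y₀ ∃y₁. (φ(0, y₀) ∧ φ(1, y₁)); repeating this reduces a QBF to a (possibly exponentially larger) propositional formula that can be handed to a SAT solver.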

About the Speaker

Adrián Rebola-Pardo is a Postdoc at the Institute for Symbolic AI at Johannes Kepler University (JKU) and the Forsyte research unit at TU Wien in Austria. He obtained his PhD on interference-based proof systems from TU Wien in 2022. His research focuses on the design of certification techniques for verification frameworks and exploiting generalized symmetries in automated theorem proving.