Luminary Talk

Trustworthy AI

Jeannette M. Wing, Columbia University, Data Science Institute, US

Invited Talk video: https://www.youtube.com/watch?v=V-wUb4NSOgs&ab_channel=FormalMethodsEurope

Abstract: Recent years have seen astounding growth in the deployment of AI systems in critical domains such as autonomous vehicles, criminal justice, and healthcare, where decisions taken by AI agents directly impact human lives. Consequently, there is increasing concern about whether these decisions can be trusted. How can we deliver on the promised benefits of AI while addressing scenarios that have life-critical consequences for people and society? In short, how can we achieve trustworthy AI?

Under the umbrella of trustworthy computing, employing formal methods to ensure trust properties such as reliability and security has led to scalable success. Just as for trustworthy computing, formal methods could be an effective approach for building trust in AI-based systems. However, we would need to extend the set of properties to include fairness, robustness, and interpretability; and to develop new verification techniques to handle new kinds of artifacts, e.g., data distributions and machine-learned models. This talk poses a new research agenda, from a formal methods perspective, for increasing trust in AI systems.
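
As a toy illustration of what verifying such a property of a machine-learned model can look like, consider local robustness: the requirement that a classifier's decision cannot flip for any input within a small perturbation ball. The sketch below is a generic illustration, not an example from the talk; the model and all names are hypothetical. It uses interval arithmetic to soundly bound the score of a tiny linear classifier over an L-infinity ball of radius eps.

    # Minimal sketch: certifying local robustness of a linear classifier
    # via interval arithmetic. Illustrative only; the model and all names
    # are hypothetical, not from the talk.

    def interval_score(w, b, x, eps):
        """Sound bounds on w.x + b over the box ||x' - x||_inf <= eps."""
        lo = b + sum(wi * (xi - eps) if wi >= 0 else wi * (xi + eps)
                     for wi, xi in zip(w, x))
        hi = b + sum(wi * (xi + eps) if wi >= 0 else wi * (xi - eps)
                     for wi, xi in zip(w, x))
        return lo, hi

    def certify_robust(w, b, x, eps):
        """True if the decision provably cannot flip anywhere in the ball."""
        lo, hi = interval_score(w, b, x, eps)
        return lo > 0 or hi < 0  # whole box lies on one side of the boundary

    w, b = [2.0, -1.0], 0.5                    # toy model
    x = [1.0, 0.25]                            # input to certify around
    print(certify_robust(w, b, x, eps=0.1))    # True: decision is stable
    print(certify_robust(w, b, x, eps=2.0))    # False: cannot certify

A False answer here means only that this abstraction could not certify robustness, not that a counterexample exists; closing that gap at scale is one of the verification challenges such an agenda must address.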

Bio: Jeannette M. Wing is the Executive Vice President for Research and Professor of Computer Science at Columbia University. She joined Columbia in 2017 as the inaugural Avanessians Director of the Data Science Institute. From 2013 to 2017, she was a Corporate Vice President of Microsoft Research. She is Adjunct Professor of Computer Science at Carnegie Mellon University, where she twice served as Head of the Computer Science Department and had been on the faculty since 1985. From 2007 to 2010, she was the Assistant Director of the Computer and Information Science and Engineering Directorate at the National Science Foundation. She received her S.B., S.M., and Ph.D. degrees in Computer Science, all from the Massachusetts Institute of Technology.

Professor Wing’s current research focus is on trustworthy AI. Her general research interests are in the areas of trustworthy computing, security and privacy, specification and verification, concurrent and distributed systems, programming languages, and software engineering. She is known for her work on linearizability, behavioral subtyping, attack graphs, and privacy-compliance checkers. Her seminal 2006 essay, “Computational Thinking,” is credited with helping to establish the centrality of computer science to problem-solving in fields where previously it had not been embraced.

She is currently a member of the American Academy of Arts and Sciences Board of Directors and Council; the New York State Commission on Artificial Intelligence, Robotics, and Automation; and the Advisory Board of the Association for Women in Mathematics. She has chaired and/or served on many other academic, government, industry, and professional society advisory boards. She serves or has served on the editorial boards of twelve journals, including the Journal of the ACM, the Communications of the ACM, and the Harvard Data Science Review. She received the CRA Distinguished Service Award in 2011 and the ACM Distinguished Service Award in 2014. She is a Fellow of the American Academy of Arts and Sciences, the American Association for the Advancement of Science, the Association for Computing Machinery (ACM), and the Institute of Electrical and Electronics Engineers (IEEE). She holds an honorary doctorate of technology from Linköping University, Sweden.


KEYNOTE TALKS


Symbolic Computation in Automated Program Reasoning

Laura Kovács, Vienna University of Technology, Institute of Logic and Computation, Austria

Invited Talk video: https://www.youtube.com/watch?v=5v8VeT2XmMc&ab_channel=FormalMethodsEurope

Abstract: We describe applications of symbolic computation towards automating the formal analysis of while-programs implementing polynomial arithmetic.

We combine methods from static analysis, symbolic summation, and computer algebra to derive polynomial loop invariants, yielding a finite representation of all polynomial equations that are valid before and after each loop execution. While deriving polynomial invariants is in general undecidable, we identify classes of loops for which we can automatically solve the problem of invariant synthesis.
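
As a toy illustration of such an invariant (a hypothetical example, not one from the talk): in the loop below, y accumulates successive odd numbers while x counts iterations, so the polynomial equation y = x^2 holds before and after every iteration. The sketch simply checks the invariant at runtime.

    # Toy polynomial loop invariant (illustrative only, not from the talk).
    # Adding the x-th odd number to y preserves the polynomial equation
    # y == x**2 across every iteration of the loop.

    def run_loop(n):
        x, y = 0, 0
        assert y == x**2            # invariant holds on entry
        for _ in range(n):
            x = x + 1
            y = y + 2*x - 1         # add the x-th odd number
            assert y == x**2        # invariant is preserved
        return x, y

    print(run_loop(10))  # (10, 100)

Invariant synthesis works in the other direction: starting from the loop body alone, it derives y - x^2 = 0, together with all its polynomial consequences, automatically.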

We further generalize our work to the analysis of probabilistic program loops. In doing so, we compute higher-order statistical moments over (random) program variables, thereby inferring quantitative invariants of probabilistic program loops. Our results yield computer-aided solutions in support of formal software verification, compiler optimization, and probabilistic reasoning.
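
For intuition, consider a hypothetical probabilistic loop (not an example from the talk) in which x is incremented by 1 with probability p in each iteration. The first two moments of x then satisfy exact linear recurrences, E[x] <- E[x] + p and E[x^2] <- E[x^2] + 2p*E[x] + p, which serve as quantitative invariants. The sketch below unrolls these recurrences and compares them with a Monte Carlo estimate.

    import random

    # Probabilistic loop: each iteration, x += 1 with probability p.
    # With B ~ Bernoulli(p) independent of x, the moments obey
    #   E[x]   <- E[x] + p
    #   E[x^2] <- E[x^2] + 2*p*E[x] + p
    # and these equalities hold after every iteration.

    def exact_moments(p, n):
        m1, m2 = 0.0, 0.0
        for _ in range(n):
            m2 = m2 + 2*p*m1 + p   # uses the old E[x], so update m2 first
            m1 = m1 + p
        return m1, m2

    def simulate(p, n, runs=100_000):
        total1 = total2 = 0.0
        for _ in range(runs):
            x = sum(1 for _ in range(n) if random.random() < p)
            total1 += x
            total2 += x * x
        return total1 / runs, total2 / runs

    print(exact_moments(0.3, 20))  # (6.0, 40.2), exactly
    print(simulate(0.3, 20))       # close to (6.0, 40.2)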

Bio: Laura Kovács is a full professor of computer science at TU Wien, where she leads the automated program reasoning (APRe) group of the Formal Methods in Systems Engineering division.
Her research focuses on the design and development of new theories, technologies, and tools for program analysis, with a particular focus on automated assertion generation, symbolic summation, computer algebra, and automated theorem proving. She is a co-developer of the Vampire theorem prover and a Wallenberg Academy Fellow of Sweden. Her research has also been recognized with an ERC Starting Grant (2014), an ERC Proof of Concept Grant (2018), an ERC Consolidator Grant (2020), and an Amazon Research Award (2020).


The Next Big Thing

Harald Rueß, fortiss GmbH, Germany

Invited Talk video: https://www.youtube.com/watch?v=70F9GpPZnbU&ab_channel=FormalMethodsEurope

Abstract: A new generation of increasingly autonomous and self-evolving software systems is about to emerge. As we deploy and embody these systems within our societal fabric, we face a range of engineering challenges: it is crucial to coordinate the behavior of embodied systems in a beneficial manner, to ensure their compatibility with social values, and to design verifiably safe and reliable machine interaction.

In this talk, I argue that traditional software engineering is reaching a climacteric: the shift from embedded to embodied systems confronts us with assuring the trustworthiness of dynamic federations of situationally aware, intent-driven, explorative, ever-evolving, and increasingly autonomous embodied systems operating in uncertain and largely unpredictable real-world contexts.

I also discuss the corresponding engineering challenges, namely robust AI techniques, cognitive architectures, uncertainty quantification, trustworthy self-integration, and continual analysis and assurance, together with the central role of formal methods in designing embodied systems in which we can put our trust.

Bio: Harald Ruess is director at the fortiss research institute of the Free State of Bavaria. He studied computer science in San Diego and mathematics in Ulm, where he obtained a PhD on formal meta-programming.
He was a long-time staff researcher at SRI International, has worked in the automotive and aerospace industries on quality, safety, and security engineering, and has consulted for technology start-ups. His main contributions to the field of formal methods are in hardware verification, the verification of cryptographic protocols, and the development of novel decision procedures, for which he received the CAV Award in 2021.


Intelligent and Dependable Decision-Making Under Uncertainty

Nils Jansen, Radboud University Nijmegen, Institute for Computing and Information Sciences, The Netherlands

Invited Talk video: https://www.youtube.com/watch?v=sIumPJDTe44&ab_channel=FormalMethodsEurope

Abstract: This talk highlights our vision of foundational and application-driven research toward safety and dependability in artificial intelligence (AI). We take a broad stance on AI that combines formal methods, machine learning, and control theory. As part of this research line, we study problems inspired by autonomous systems, planning in robotics, and industrial applications.

We consider reinforcement learning (RL) as a specific machine learning technique for decision-making under uncertainty. RL generally learns to behave optimally via trial and error. Consequently, and despite its massive success in recent years, RL lacks mechanisms to ensure safe and correct behavior. Formal methods, in particular formal verification, provide formal guarantees of a system’s correctness and safety based on rigorous methods and precise specifications. Yet, fundamental challenges have obstructed the effective application of verification to reinforcement learning. Our main objective is to devise novel, data-driven verification methods that tightly integrate with RL. In particular, we develop techniques that address real-world challenges to the safety of AI systems in general: scalability, expressiveness, and robustness against the uncertainty that arises when operating in the real world. The overall goal is to advance the real-world deployment of reinforcement learning.
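
One established pattern for such an integration (sketched generically below, not as a result from the talk) is a safety shield: a monitor derived from a formal model of the environment that restricts the learner to actions whose successors are provably safe. The state space, actions, and names here are hypothetical.

    import random

    # Minimal sketch of shielded exploration (illustrative only).
    # States are positions 0..9; positions 0 and 9 are unsafe. The shield
    # filters out any action that would enter an unsafe state, so even a
    # purely random learner never violates the safety invariant.

    UNSAFE = {0, 9}
    ACTIONS = (-1, +1)

    def safe_actions(state):
        """Shield: keep only actions with a provably safe successor."""
        return [a for a in ACTIONS if state + a not in UNSAFE]

    def shielded_episode(steps=1000, start=5):
        state = start
        for _ in range(steps):
            action = random.choice(safe_actions(state))  # learner picks among shield-approved actions
            state += action
            assert state not in UNSAFE                   # safety invariant holds throughout
        return state

    print(shielded_episode())  # finishes without ever entering {0, 9}

In realistic settings the shield is computed from a (possibly learned) model of a stochastic environment rather than enumerated by hand, which is where the scalability and robustness challenges above come in.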

Bio: Nils Jansen is an associate professor at the Institute for Computing and Information Sciences (ICIS) at Radboud University, Nijmegen, The Netherlands. He received his Ph.D. with distinction from RWTH Aachen University, Germany, in 2015. Before joining Radboud University, he was a research associate at the University of Texas at Austin. His research is on intelligent decision-making under uncertainty, focusing on formal reasoning about the safety and dependability of artificial intelligence (AI). He holds several grants in academic and industrial settings, including an ERC Starting Grant titled “Data-Driven Verification and Learning Under Uncertainty (DEUCE)”.