The 20th International Conference on Integrated Formal Methods (iFM 2025) took place on 19–21 November 2025 at Inria Paris, France, with co-located workshops scheduled for 17–18 November 2025. This was the last iFM of its kind. The next edition will be iFS at ETAPS in 2027!
The Best Paper Award was granted to Matthias Grundmann and Hannes Hartenstein for their paper “Security of the Lightning Network: Model Checking a Stepwise Refinement with TLA+”.
The Distinguished Reviewer Award was granted to Timothy Bourke.
Lidia Losavio, Marco Paganoni and Carlo A. Furia. Model-Based Testing of an Intermediate Verifier Using Executable Operational Semantics
Jorge Blázquez Saborido, Manuel Montenegro and Clara Segura. Verified Implementation of Associative Containers with Iterators Using Threaded Red-Black Trees
Reiner Hähnle, Cosimo Laneve and Adele Veschetti. Formal Verification of Legal Contracts: A Translation-based Approach
Matthias Grundmann and Hannes Hartenstein. Security of the Lightning Network: Model Checking a Stepwise Refinement with TLA+
Martin Hána, Nikolai Kosmatov, Virgile Prevosto and Julien Signoles. Formal Verification of PKCS#1 Signature Parser using Frama-C
Jean-Christophe Filliâtre, Andrei Paskevich and Olivier Danvy. When Separation Arithmetic is Enough
Manar Altamimi, Asieh Salehi Fathabadi and Vahid Yazdanpanah. Formal Modeling of Trust in AI-Driven Autonomous Delivery Vehicles
Chelsea Edmonds, John Derrick, Brijesh Dongol, Gerhard Schellhorn and Heike Wehrheim. Model Checking Buffered Durable Linearizability in CSP
Georgiana Caltais, Andrei Covaci and Hossein Hojjat. Concurrency Under Control: Systematic Analysis of SDN Race Hazards
Gabriel Dengler, Sven Apel and Holger Hermanns. Automata Learning – Expect Delays!
Konstantin Britikov, Grigory Fedyukovich and Natasha Sharygina. CHC-Based Reachability Analysis via Cycle Summarization
David Cortés, Jean Leneutre, Vadim Malvone, James Ortiz and Pierre-Yves Schobbens. Extending Timed Automata with Clock Derivatives
Michele Alberti, François Bobot, Julien Girard-Satabin, Alban Grastien, Aymeric Varasse and Zakaria Chihani. The CAISAR Platform: Extending the Reach of Machine Learning Specification and Verification
Giuseppe De Palma, Saverio Giallorenzo, Jacopo Mauro, Matteo Trentin and Gianluigi Zavattaro. Reachability Analysis of Function-as-a-Service Scheduling Policies
Ion Chirica and Mário Pereira. Unfolding Iterators: Specification and Verification of Higher-Order Iterators, in OCaml
Sophie Rain, Anja Petković Komel, Michael Rawson and Laura Kovács. Game Modeling of Blockchain Protocols
Carlos Isasa, Noah Abou El Wafa, Cláudio Gomes, Peter Gorm Larsen and André Platzer. Safe Temperature Regulation: Formally Verified and Real-World Validated
Angelo Ferrando, Peng Lu and Vadim Malvone. Auto-Generating Visual Editors for Formal Logics with Blockly
Neda Saeedloei and Feliks Kluzniak. Distributed Timed Scenarios
Eshita Zaman, Christopher Johannsen, Andrew S. Miner, Gianfranco Ciardo and Samik Basu. CTL Model Checking Partially Specified Systems
Jonathan Hellwig, Lukas Schäfer, Long Qian, Matthias Althoff and André Platzer. From Zonotopes to Proof Certificates: A Formal Pipeline for Safe Control Envelopes
Douglas Fraser, Alice Miller, Marco Cook and Dimitrios Pezaros. Online Model Checking for Anomaly Detection in Industrial Control Systems
Fletcher Chapin, Ankur Varma, Samuel Akinwande, Meagan Mauter and Sriram Sankaranarayanan. Using Bayesian Inference and Flowpipe Construction to Bound Predictions of Biogas Production at Wastewater Treatment Plants
Gidon Ernst and Grigory Fedyukovich. Quick Theory Exploration for Algebraic Data Types via Program Transformations
Reproducibility of experiments is crucial to fostering open, reusable and trustworthy research. To improve and reward reproducibility, authors of accepted papers will be invited to submit the artifacts associated with their paper for evaluation; based on the level of reproducibility achieved, each artifact will be awarded one or more badges.
The goals of the artifact evaluation are manifold: to encourage authors to provide more substantial evidence for the claims in their papers and to reward those who create artifacts with reproducibility in mind; to give more visibility and credit to the effort of tool developers in our community; and to simplify the independent replication of results presented in a paper and ease future comparisons with existing approaches.
Over the last decades, we have witnessed a proliferation of approaches that integrate several modelling, verification and simulation techniques, enabling more versatile and efficient analysis of software-intensive systems. These approaches provide powerful support for the analysis of functional and non-functional properties of systems and of the complex interactions among components of different natures, as well as for the validation of diverse aspects of system behaviour. The iFM conference series is a forum for discussing recent research advances in the development of integrated approaches to formal modelling and analysis. The conference covers all aspects of the design of integrated techniques, including language design, verification and validation, automated tool support and the use of such techniques in software engineering practice. To credit the effort of tool developers, we use EAPLS artifact badging.