Event News

Talk by Prof. Emir Demirović and Prof. Anna Lukina from TU Delft

We are pleased to announce an upcoming seminar featuring two talks by Prof. Emir Demirović and Prof. Anna Lukina from TU Delft:

"Transparent AI by Design: Search Algorithms for Supervised Learning, Control Policies, and Combinatorial Certification"

"Advancing Safe Autonomy: Neural Certificates, Reusable Guarantees, and Interpretable Policies"

Everyone interested is cordially invited to attend!

Talk 1
Title:

Transparent AI by Design: Search Algorithms for Supervised Learning, Control Policies, and Combinatorial Certification

Abstract:

AI methods--such as those used in supervised learning, controller synthesis, and combinatorial optimisation--have demonstrated immense value across many domains. However, their practical adoption is hindered by reliability concerns, particularly when these systems are designed as black boxes. Two key challenges arise for black-box AI: (1) lack of performance guarantees--when AI fails, it is unclear whether the task is infeasible or the underlying algorithm is simply inadequate; and (2) lack of confidence--results may be difficult to interpret or trust. While post-hoc interpretability techniques offer partial remedies, we advocate for a different paradigm: building AI systems that are transparent by design. Rather than explaining opaque decisions after the fact, we synthesise outputs that are intrinsically understandable and verifiable. This shifts the focus from doubting AI to questioning whether we are solving the right problem. We apply this approach across three distinct domains: supervised learning, controller synthesis, and infeasibility certification for combinatorial optimisation problems. Although these tasks involve exponentially large search spaces, recent advances demonstrate that designing for transparency is increasingly practical--often without sacrificing performance--making it a compelling alternative to opaque AI systems. A variant of this talk was presented at ECAI'25 as an invited talk in the Frontiers in AI series.

Speaker:

Emir Demirović, associate professor at Delft University of Technology (TU Delft)

Dr. Emir Demirović is an associate professor of computer science at TU Delft (Netherlands), where he leads the Constraint Solving ("ConSol") research group and directs the Explainable AI in Transportation ("XAIT") lab. He has been recognised with the Early Career Researcher Award 2025 from the Association for Constraint Programming and is an ELLIS Scholar. His research focuses on exploiting structural properties of NP-hard problems to design algorithms that are both theoretically complete and efficient in practice, with a particular emphasis on constraint programming and dynamic programming techniques. His techniques have advanced state-of-the-art solvers in MaxSAT and constraint programming (achieving high rankings in competitions such as the MaxSAT Evaluation and the MiniZinc Challenge), his optimal decision tree methods (machine learning) are among the fastest, and his recent approach to certifying outputs of constraint programming solvers on large-scale problems has been highlighted by Donald Knuth in The Art of Computer Programming (Vol. 4, Fasc. 7) as a promising advancement.

Talk 2
Title:

Advancing Safe Autonomy: Neural Certificates, Reusable Guarantees, and Interpretable Policies

Abstract:

Recent advances in neural certification and reinforcement learning are enabling safer and more adaptable autonomous systems. This talk unites four key contributions. First, I will discuss neural continuous-time supermartingale certificates, which provide probabilistic safety guarantees for continuous-time stochastic systems. Second, I will introduce VeRecycle, a theoretical framework for efficiently reusing probabilistic certificates after system changes, drastically reducing the need for costly re-certification. I will also present a modular approach to reinforcement learning, where formally verified sub-policies are safely composed for end-to-end guarantees. Finally, I will demonstrate how interpretable policies can be synthesized directly from black-box simulations using search and optimization. Together, these works push the boundaries of safe, scalable, and adaptive autonomy in uncertain environments.

Speaker:

Anna Lukina, assistant professor at Delft University of Technology (TU Delft)

Dr. Anna Lukina is an Assistant Professor and Delft Technology Fellow at TU Delft, The Netherlands, where she leads a team of researchers working on trustworthy and interpretable AI systems that combine formal methods and machine learning. In 2023, Dr. Lukina was awarded a personal grant from the Dutch Research Council on Explainable Monitoring. She has been a long-term visiting scholar at the Simons Institute in Berkeley. She is co-founder and co-chair of the International Symposium on AI Verification. In 2022, she founded an award-winning Future Female+ Faculty Program aimed at improving the diversity of international professor talent. Dr. Lukina obtained her Ph.D. in computer science from Vienna University of Technology; her doctoral thesis focused on control and verification of cyber-physical systems. Before joining TU Delft, she was a postdoc with Prof. Thomas A. Henzinger at ISTA in Austria.

Time/Date:

14:00-16:00, Friday, January 9, 2026

Place:

Room 1509, NII

Contact:

If you would like to join, please contact us by email.
Email: kuroiwa[at]nii.ac.jp

