Shedding Light on Explainability of Decisions Stemming from Additive Decision Models
Wassila Ouerdane – CentraleSupélec, Paris-Saclay University
Abstract:
Many decision models rely on an additive representation of preferences. Recommendations derived from such additive models are sometimes considered self-evident. This presentation aims to show that the additive model is not so simple that it can be regarded as straightforward, transparent, or self-explanatory.
To this end, the talk will provide an overview of a range of explainability tools that we have proposed for explaining recommendations obtained from an additive model. Our primary concern was to develop principle-based approaches and cognitively bounded models of explanation for end-users. By principle-based, we mean that each explanation is anchored in a set of well-understood properties of the underlying decision model. By cognitively bounded, we mean that the statements composing an explanation are constrained to remain easy to grasp for the receiver (i.e., the decision maker). We will end the presentation by discussing some open challenges and future research directions in this area.
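For readers unfamiliar with the terminology, the additive model referred to here is, in its most common multi-criteria form (the talk may consider variants), a value function of the type

V(x) = \sum_{i=1}^{n} v_i(x_i),

where an alternative x is described by its evaluations x_1, ..., x_n on n criteria, each v_i is a marginal value function (a weighted sum \sum_i w_i x_i being the simplest special case), and x is recommended over y whenever V(x) > V(y).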
Short Bio:

Wassila Ouerdane is a Full Professor of Computer Science at CentraleSupélec, Paris-Saclay University, and a member of the Mathematics and Informatics Laboratory. She obtained her PhD in Computer Science from Université Paris-Dauphine in 2009 and her Habilitation à Diriger des Recherches (HDR) from Université Paris-Saclay in 2022. Her research lies at the intersection of knowledge representation and reasoning (KRR) and machine learning. Her current research focuses on the interpretability and explainability of artificial intelligence algorithms, with a significant part of her work dedicated to explainability in the context of multiple-criteria decision aiding.
Wassila Ouerdane has supervised over a dozen PhD students in collaboration with academic institutions and industrial partners, on topics bridging data-driven AI and symbolic AI within a user-centered framework. She serves as a reviewer for both AI and operations research conferences (AAMAS, AAAI, IJCAI, KR) and journals (Operations Research, European Journal of Operational Research, OMEGA). She is also co-leader of the national working group “Explainability and Trust” within the CNRS RADIA (Reasoning, Learning, and Decision in Artificial Intelligence) research group.
e-mail: wassila.ouerdane@centralesupelec.fr
web page: https://wassilaouerdane.github.io
Uncertainty in preference modelling:
A tale of two views
We have probabilities and sets, why should we bother?
Sébastien Destercke – Heudiasyc Laboratory, Compiègne
Abstract:
Uncertainty in (single) user preference modelling and in multi-criteria decision aiding is traditionally handled by sets or probabilities, with those two approaches usually considered to the exclusion of the other. Each has proved to be a valid modelling choice, enjoying many nice mathematical properties as well as efficient algorithmic procedures. This talk will not question their usefulness, but will rather consider why one could wish to extend or unify these approaches using richer uncertainty models. In particular, I will do my best to argue that beyond the mere unifying power of such richer models (which is nice to have theoretically but not always practically useful), these uncertainty models can be useful in a number of situations, notably to handle inconsistent preferential information or to provide defeasible explanations to the user.
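As a toy illustration of the two views (not taken from the talk itself): given an observed preference a \succ b, a set-based approach retains the whole set of value functions or weight vectors compatible with it, e.g. W = \{ w : V_w(a) > V_w(b) \}, whereas a probabilistic approach places a distribution over that same space. Richer models such as imprecise probabilities instead work with a set of distributions, so that conclusions become interval-valued, e.g.

P(a \succ c) \in [0.6, 0.9]

rather than a single number, with the interval collapsing to a point when information is precise and widening towards [0, 1] when it is vacuous.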
Short Bio:

Sébastien Destercke graduated in 2004 as an engineer from the Faculté Polytechnique de Mons in Belgium. In 2008, he earned a Ph.D. degree in computer science from Université Paul Sabatier in Toulouse, France. He briefly worked at the French agricultural research centre for international development before becoming a CNRS researcher at the Heudiasyc Laboratory in Compiègne. His main research interests lie in decision making and uncertainty reasoning (modelling, propagation, learning) with imprecise probabilistic models; MCDA and preference modelling are among the fields to which he has applied them. He is currently the deputy head of the Heudiasyc AI team and the holder of the UTC SAFE AI chair.
Beyond Rewards: The Challenge of AI Alignment with Preference-Based Learning
Aadirupa Saha – Department of Computer Science, University of Illinois Chicago
Abstract:
As AI systems increasingly influence high-stakes decisions, aligning them with diverse human values is critical. This talk explores preference-based learning frameworks that enable AI to learn from comparative feedback rather than scalar rewards. I present theoretical foundations spanning from multi-armed bandits to contextual reinforcement learning, with provable safe-learning guarantees. Drawing from recent work on RLHF, contextual dueling bandits, and adaptive experimental design, I demonstrate how preference-based approaches can democratize AI through personalized systems that respect heterogeneous user preferences while maintaining rigorous performance bounds. The talk addresses key challenges in sample complexity, robustness under preference heterogeneity, and scaling to complex domains like large language models and other autonomous systems.
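As a minimal illustration of comparative feedback (a sketch of the standard setting, not a result from the talk): in the Bradley-Terry model underlying both RLHF and dueling bandits, when a user is shown two alternatives a and b, the learner observes only a binary preference drawn with probability

P(a \succ b) = \sigma(r(a) - r(b)) = \frac{1}{1 + e^{-(r(a) - r(b))}},

where r is an unobserved reward (utility) function and \sigma the logistic function; the alignment task is then to recover a good policy from such pairwise comparisons, without ever observing r directly.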
Short Bio:

Aadirupa Saha has been an Assistant Professor in the Department of Computer Science at the University of Illinois Chicago (UIC) since Fall 2025. She is a member of the UIC CS Theory group, as well as the IDEAL Institute. Prior to this, she was a Research Scientist at Apple MLR, working on machine learning theory. She completed her postdoctoral research at Microsoft Research (NYC) and earned her PhD from the Indian Institute of Science (IISc), Bangalore.
Saha’s primary research focuses on AI alignment through Reinforcement Learning with Human Feedback (RLHF), with applications in language models, assistive robotics, autonomous systems, and personalized AI. At a high level, her work aims to develop robust and scalable AI models for designing prediction systems under uncertain and partial feedback.