Explainable Computer Vision: Quo Vadis?
Workshop at the International Conference on Computer Vision (ICCV) 2025
Honolulu, Hawai'i, USA
October 19, 2025 · Full Day · Ballroom A
Virtual Attendees: Zoom link and Q/A
About
Deep neural networks (DNNs) are an essential component of modern computer vision and achieve state-of-the-art results in almost all of its sub-disciplines. While DNNs excel at predictive performance, they are often too complex to be understood by humans, which is why they are commonly referred to as “closed-box models”. This is of particular concern when DNNs are applied in safety-critical domains such as autonomous driving or medical applications. With this problem in mind, explainable artificial intelligence (XAI) aims to understand DNNs better, ultimately leading to more robust, fair, and interpretable models. To this end, a variety of approaches such as attribution maps, intrinsically explainable models, and mechanistic interpretability methods have been developed. While this important field of research is gaining more and more traction, there is also justified criticism of the way in which the research is conducted. For example, the term “explainability” itself is not consistently defined and depends heavily on the end user and task, leading to ill-defined research questions and a lack of standardized evaluation practices.
Rapidly increasing model sizes and the rise of large-scale foundation models have brought fresh challenges to the field, such as handling scale, maintaining model performance, performing appropriate evaluations, complying with regulatory requirements, and fundamentally rethinking what one wants from an explanation.
The goals of this workshop are thus two-fold:
- Discussion and dissemination of ideas at the cutting edge of XAI research
- A critical introspection on the challenges faced by the community and the way forward (“Quo Vadis?”)
Program
| Time | Program | Venue |
|---|---|---|
| 8:45–9:00 | Opening Remarks | Ballroom A |
| 9:00–9:30 | Thomas Fel: From Methods to Phenomenology: Rethinking Deep Vision Interpretability | Ballroom A |
| 9:30–10:00 | David Bau: Does Modern AI Contain Concepts? | Ballroom A |
| 10:00–10:15 | Coffee Break | Exhibit Hall II |
| 10:15–10:45 | Sharon Li: Explainability Meets Reliability in Large Vision-Language Models | Ballroom A |
| 10:45–11:15 | Elevator Pitches of Accepted Papers | Ballroom A |
| 11:15–14:00 | Poster Session | Exhibit Hall II |
| 12:15–13:00 | ↳ Lunch Social Event (Meeting at Poster 1) | Exhibit Hall II |
| 14:00–14:30 | Open-Mic Opinions | Ballroom A |
| 14:30–15:30 | Panel Discussion: Stefan Roth, Thomas Fel, David Bau, Hila Chefer, Deepti Ghadiyaram, René Vidal | Ballroom A |
| 15:30–15:45 | Coffee Break | Exhibit Hall II |
| 15:45–16:15 | Hila Chefer: Interpretability as the Generative Tool We Didn’t Know We Needed | Ballroom A |
| 16:15–16:45 | Deepti Ghadiyaram: Inside the mind of advanced generative models | Ballroom A |
| 16:45–17:15 | Viorica Patraucean: Understanding our models: intrinsic and relational perspectives | Ballroom A |
| 17:15–17:20 | Closing Remarks | Ballroom A |
Invited Speakers
David Bau
Northeastern University
Deepti Ghadiyaram
Boston University
Hila Chefer
Black Forest Labs
Sharon Li
UW Madison
Thomas Fel
Harvard University
Viorica Patraucean
Google DeepMind
René Vidal
University of Pennsylvania
Accepted Papers: Proceedings Track
Do VLMs Have Bad Eyes? Diagnosing Compositional Failures via Mechanistic Interpretability
Explaining Object Detection Through Difference Map
GFR-CAM: Gram-Schmidt Feature Reduction for Hierarchical Class Activation Maps
Interpretable Open-Vocabulary Referring Object Detection with Reverse Contrast Attention
Accepted Papers: Early Stage Track
Causal Interpretation of Sparse Autoencoder Features in Vision
CoCo-Bot: Energy-based Composable Concept Bottlenecks for Interpretable Generative Models
Patch-wise Retrieval: An Interpretable Instance-Level Image Matching
The Myth of Robust Classes: How Shielding Skews Perceived Stability
Toward a Principled Theory of XAI via Spectral Analysis
Unmasking the functionality of early layers in VLMs
Vision language models fail to translate detailed visual features into words
Accepted Papers: Nectar Track
As large as it gets: Learning infinitely large Filters via Neural Implicit Functions in the Fourier Domain
Controlling Neural Collapse Enhances Out-of-Distribution Detection and Transfer Learning
DCBM: Data-Efficient Visual Concept Bottleneck Models
Gradient-based Visual Explanation for Transformer-based CLIP
Granular Concept Circuits: Toward a Fine-Grained Circuit Discovery for Concept Representations
Hallucinatory Image Tokens: A Training-free EAZY Approach on Detecting and Mitigating Object Hallucinations in LVLMs
Keep the Faith: Faithful Explanations in Convolutional Neural Networks for Case-Based Reasoning
LR0.FM: Low-Res Benchmark and Improving Robustness for Zero-Shot Classification in Foundation Models
LucidPPN: Unambiguous Prototypical Parts Network for User-centric Interpretable Computer Vision
TAB: Transformer Attention Bottlenecks enable User Intervention and Debugging in Vision-Language Models
Towards Safer and Understandable Driver Intention Prediction
Vision language models are blind
What Variables Affect Out-of-Distribution Generalization in Pretrained Models?
Reviewers
We thank the reviewers for their efforts!
Ada Görgün, Aditya Chinchure, Akash Guna R.T, Alina Elena Baia, Angelos Nalmpantis, Dmitry Kangin, Elisa Nguyen, Eyad Alshami, Georgii Mikriukov, Guillaume Jeanneret, Hubert Baniecki, Ivaxi Sheth, Jawad Tayyub, Jiageng Zhu, Karim Haroun, Leander Girrbach, Manxi Lin, Mateusz Pach, Meghal Dani, Mingqi Jiang, Nhi Pham, Pegah Khayatan, Sadaf Gulshad, Sayed Mohammad Vakilzadeh Hatefi, Simon Roschmann, Susu Sun, Teresa Dorszewski
Contact
Call for Papers
The eXCV workshop aims to advance and critically examine the current landscape in the field of XAI for computer vision. To this end, we invite papers covering all topics within XAI for computer vision, including but not limited to:
- Attribution maps
- Evaluating XAI methods
- Intrinsically explainable models
- Language as an explanation for vision models
- Counterfactual explanations
- Causality in XAI for vision models
- Mechanistic interpretability
- XAI beyond classification (e.g., segmentation or other disciplines of computer vision)
- Concept discovery
- Understanding foundation models’ representations
- Feature visualizations
- New forms of explanations
Since the aim of the workshop is not only to present new XAI methods but also to question current practices, we also invite papers that present interesting and detailed negative results and papers that show the limitations of today’s XAI methods.
Submission Instructions
The workshop has four submission tracks:
Proceedings Tracks
Papers in these tracks will be published in the ICCV 2025 Workshop Proceedings and must be up to 8 pages in length, excluding references and supplementary material. Papers submitted to the Proceedings Track should follow the ICCV 2025 Submission Policies and the Author Guidelines. Each accepted paper in the Proceedings Track needs to be covered by an Author Registration, and one registration can cover up to three papers. Please see the ICCV 2025 Registration page for the most up-to-date details.
- Full Papers: We welcome papers presenting novel and original XAI work, within the broad scope described above.
- Position Papers: We invite thought-provoking papers that articulate bold positions, propose new directions, or present challenges for the field of XAI. We expect accepted papers to spark discussion rather than present research work, and we will select papers based on their potential to stimulate debate during the workshop.
Non-Proceedings Tracks
Papers in these tracks will not be published in the ICCV 2025 Workshop Proceedings.
- Early Stage Track: In this track, we welcome submissions describing preliminary work, ongoing projects, or novel ideas that are in early stages of development. We particularly encourage contributions from researchers from underrepresented communities and/or interdisciplinary teams. Papers should be up to 4 pages excluding references and should follow the ICCV submission template.
- Nectar Track: We invite papers that have been previously published at a leading international conference on computer vision or machine learning in 2024 or 2025 (e.g., ECCV, ICCV, CVPR, NeurIPS, ICLR, ICML, AAAI). The aim of the Nectar Track is to increase the visibility of exciting XAI work and to give researchers an opportunity to connect with the XAI community. The submission should be a single PDF containing the already published paper (not anonymized and in the formatting of the original venue).
For all tracks, accepted papers will be presented in person during the poster session of the workshop. At least one author of each accepted paper should plan to attend the workshop to present a poster.
Important Dates
Proceedings Tracks:
- Paper submission deadline: June 26, 2025 (23:59 AoE)
- Paper decision notification: July 10, 2025
- Camera Ready deadline: August 18, 2025
Non-Proceedings Tracks:
- Paper submission deadline: August 15, 2025 (23:59 AoE)
- Paper decision notification: August 27, 2025
Rolling deadline for the Nectar Track only: Submissions will be accepted as long as poster space is available. Please submit via this form. Note that this form may close at any time without prior notice.
Submission Sites
- Proceedings Tracks: https://openreview.net/group?id=thecvf.com/ICCV/2025/Workshop/eXCV_Proceedings_Track
- Non-Proceedings Tracks: https://openreview.net/group?id=thecvf.com/ICCV/2025/Workshop/eXCV_Non-Proceedings_Track
Call for Open-Mic Opinions
In keeping with the spirit of the workshop’s theme “Quo Vadis?”, where we aim to discuss the state of the field of XAI, its challenges, and its opportunities, we wish to provide a platform for the broader community to participate in this discussion and share their views. We plan to do this by holding short 5-minute talks (in-person only) during the workshop.
If you would like to participate, please submit a short proposal (at most half a page; bullet points are fine) on the topic you would like to speak about and the position you would like to take. The proposal does not have to be very detailed, but it should contain the key messages you would like to convey in your talk. The general idea is to present a position on a matter of interest to the community (similar to position papers). Using references to support your position is encouraged. You may use your own work to support your position, but the talk should have a broader focus and should ideally not be limited to your own research. “Unpopular” positions that challenge norms and assumptions held by the field are welcome. We welcome everyone to speak, including but not limited to XAI researchers (students included), practitioners who use XAI methods, stakeholders who use explanations, and members of the broader vision, ML, and AI community.
Submission Link and Timeline
- Submission Link: https://forms.gle/fw9PdvKNyWSqnTwe7 (if you are unable to submit via Google Forms, please email us.)
- Deadline: September 29, 2025 AoE
- Decisions: October 1, 2025 AoE