The Ninth FEVER Workshop
Call For Papers
With billions of individual pages on the web providing information on almost every conceivable topic, we should be able to reason about information in a wide range of domains. However, to do so, we need to be able to trust the accuracy of the sources of information that we use. Handling false information from unreliable sources has become the focus of much recent research and media coverage. In an effort to jointly address these problems, we are organizing the 9th instalment of the Fact Extraction and VERification (FEVER) workshop (https://fever.ai/) to promote research in this area. The workshop will be co-located with EACL 2026 and will be held in Rabat, Morocco, with the option of online attendance.
New Shared Task: In this year’s workshop, we will organise a new shared task focused on AVerImaTeC: A Dataset for Automatic Verification of Image-Text Claims with Evidence from the Web. It consists of 1,297 real-world image-text claims that are fact-checked using evidence from the web. Each claim is annotated with question-answer pairs supported by evidence (both images and text) available online, as well as textual justifications explaining how the evidence combines to produce a verdict. Given the multimodal nature of the task, both questions and answers may involve images. For each claim, systems must return a label (Supported, Refuted, Not Enough Evidence, Conflicting Evidence/Cherry-picking) and appropriate evidence. The evidence must be retrieved from the document and image collection provided by the organisers; see our shared task page for details and submission instructions.
The timeline for it is as follows:
- Training/dev data release: September 29, 2025
- Test data release: November 28, 2025
- Shared task submission system closes: December 2, 2025
- Shared task submission due: December 19, 2025
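To make the task output concrete, the sketch below shows what a single system prediction might look like. Note that the field names (`claim_id`, `label`, `evidence`, `question`, `answer`, `source_id`) are illustrative assumptions, not the official submission schema; the shared task page defines the authoritative format.

```python
# Hypothetical sketch of one AVerImaTeC-style system prediction.
# Field names are illustrative assumptions, not the official schema.

# The four verdict labels named in the call for papers.
ALLOWED_LABELS = {
    "Supported",
    "Refuted",
    "Not Enough Evidence",
    "Conflicting Evidence/Cherry-picking",
}

def validate_prediction(pred: dict) -> bool:
    """Check that a prediction carries a valid verdict and at least one
    question-answer evidence item grounded in the provided collection
    (referenced here by a hypothetical source_id)."""
    if pred.get("label") not in ALLOWED_LABELS:
        return False
    evidence = pred.get("evidence", [])
    return len(evidence) > 0 and all(
        {"question", "answer", "source_id"} <= item.keys()
        for item in evidence
    )

example = {
    "claim_id": 17,
    "label": "Refuted",
    "evidence": [
        {
            "question": "When was the photo taken?",
            "answer": "In 2015, years before the claimed event.",
            "source_id": "img_00342",  # id into the organiser-provided collection
        }
    ],
}

print(validate_prediction(example))  # True
```

Because both questions and answers may involve images, a real submission would likely distinguish textual from visual evidence items; this sketch only illustrates the label set and the question-answer-evidence structure described above.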
We invite long and short papers on all topics related to fact extraction and verification, including:
- Information Extraction
- Semantic Parsing
- Knowledge Base Population
- Natural Language Inference
- Textual Entailment Recognition
- Argumentation Mining
- Machine Reading and Comprehension
- Claim Validation/Fact Checking
- Question Answering
- Information Retrieval and Seeking
- Theorem Proving
- Stance Detection
- Adversarial Learning
- Computational Journalism
- Descriptions of systems for the FEVER, FEVER 2.0, FEVEROUS, AVERITEC and AVERITEC 2.0 Shared Tasks
Long/short papers should consist of eight/four pages of original content plus unlimited pages for bibliography. Submissions must be in PDF format, anonymized for review, and follow the EACL 2026 conference submission guidelines, using the LaTeX style files, Word templates or the Overleaf template from the official EACL website.
Each long paper submission consists of up to eight (8) pages of content, plus unlimited pages for references; final versions of long papers will be given one additional page (up to nine pages with unlimited pages for references) so that reviewers’ comments can be taken into account.
Each short paper submission consists of up to four (4) pages of content, plus unlimited pages for references; final versions of short papers will be given one additional page (up to five pages in the proceedings and unlimited pages for references) so that reviewers’ comments can be taken into account.
The review will be double-blind (two-way anonymized review). Please do not include any self-identifying information in the submission. Papers can be submitted as non-archival, so that their content can be reused for other venues. Please put a footnote stating "NON-ARCHIVAL submission" on the first page. Non-archival papers will follow the same submission guidelines, and, if accepted, will be linked from the FEVER website but not from the EACL proceedings. Previously published work can also be submitted in this manner, with the additional requirement to state on the first page the original publication. In this case, the paper does not need to be anonymized.
Limitations Section (mandatory): Following the EACL format, we require all papers to have a discussion of limitations, in a section titled “Limitations”. The section should appear at the end of the paper, after the conclusions section and before the references, and will not count towards the page limit.
Ethics Statement (optional, but highly recommended): We also highly recommend including an ethics statement. We allow extra space (hence not counting towards page limits) for a section at the end of the paper for a broader impact statement and other discussions of ethics.
Moreover, please review and abide by the ACL Ethics Policy as outlined below: "Authors are required to honor the ethical code set out in the ACL Code of Ethics. The consideration of the ethical impact of our research, use of data, and potential applications of our work has always been an important consideration, and as artificial intelligence is becoming more mainstream, these issues are increasingly pertinent. We ask that all authors read the code, and ensure that their work is conformant to this code. Where a paper may raise ethical issues, we ask that you include in the paper an explicit discussion of these issues, which will be taken into account in the review process. We reserve the right to reject papers on ethical grounds, where the authors are judged to have operated counter to the code of ethics, or have inadequately addressed legitimate ethical concerns with their work."
Important dates
- Submission deadline: Oct 6, 2025
- Direct paper submission deadline: Dec 19, 2025
- Commitment deadline (for pre-reviewed papers): Jan 7, 2026
- Notification: Jan 23, 2026
- Camera-ready deadline: Feb 3, 2026
- Workshop: March 28 or 29, 2026 (Co-located with EACL 2026)
All deadlines are 11:59 pm UTC-12 ("anywhere on Earth").
Workshop Organising Committee
Mubashara Akhtar
King's College London
Mubashara Akhtar is a PhD student at King's College London, working on multimodal fact-checking and supervised by Oana Cocarascu and Elena Simperl. She co-organized a WiML workshop co-located with NeurIPS 2020 and has served as a reviewer for the WebConf, AAAI, and ACL-affiliated conferences.
Rami Aly
University of Cambridge
Rami Aly is a PhD student at Cambridge University, supervised by Andreas Vlachos and working on automated fact checking. He has previously co-organized a shared task on the hierarchical classification of book blurbs at GermEval 2019, a workshop co-located with KONVENS 2019. The dataset used for this shared task was created as part of his bachelor's thesis.
Rui Cao
University of Cambridge
Rui Cao is a postdoctoral researcher at the University of Cambridge, working with Prof. Andreas Vlachos on multimodal fact-checking. She received her PhD from Singapore Management University, supervised by Prof. Jing Jiang. Her research interests lie in vision-language understanding, with a specific focus on understanding online misbehavior and misinformation through image-text analysis and visual question answering.
Yulong Chen
University of Cambridge
Yulong Chen is a Postdoctoral Research Associate at the University of Cambridge, working with Professor Andreas Vlachos on textual claim verification. Yulong received his Ph.D. from Westlake University and Zhejiang University, advised by Professor Yue Zhang, working on text summarization. Before that, he obtained his M.Sc. degree from the University of Edinburgh, where he was advised by Professor Bonnie Webber and worked on event relation extraction, and his B.Eng. degree from Wuhan University.
Oana Cocarascu
King's College London
Dr Oana Cocarascu is a Lecturer in Artificial Intelligence at King's College London. Her work is on applied research, specifically on how AI can be deployed to support real world applications. She received her PhD from Imperial College London, where she worked at the intersection of natural language processing and machine learning for argument mining. She also worked on the automatic extraction of argumentation frameworks from data to provide user-centric explanations in a variety of settings. Application areas span recommender systems, explainable classifiers, as well as safe and trusted AI systems.
Zhenyun Deng
University of Cambridge
Zhenyun Deng is a Postdoctoral Research Associate at the University of Cambridge, working with Andreas Vlachos on automated fact checking. His research focuses on the interpretability of NLP models, including for fact verification and question answering. Zhenyun received his PhD from the University of Auckland in 2023. He has served as an Area Chair for ACL, EMNLP, and NAACL.
Zifeng Ding
University of Cambridge
Zifeng Ding is a Postdoctoral Researcher at the University of Cambridge, working with Andreas Vlachos in the Cambridge NLIP Group. His research interests include but are not limited to agentic AI, temporal reasoning with LLMs, multimodal fact-checking, and LLM hallucination detection/mitigation. Zifeng received his PhD from Ludwig Maximilian University of Munich in 2025.
Zhijiang Guo
HKUST (GZ)
Dr. Zhijiang Guo is an Assistant Professor at HKUST (GZ) and an Affiliated Assistant Professor at HKUST. Previously, he was a Postdoctoral Researcher at the University of Cambridge. He earned his Ph.D. from SUTD, with a visiting student stint at the University of Edinburgh. He has published in top conferences and journals like ICML, NeurIPS, ICLR, COLM, TACL, ACL, EMNLP, and NAACL. He has served as an Area Chair for NeurIPS, ICLR, *CL conferences, a Senior Program Committee member for AAAI and IJCAI, and an Action Editor for the ACL Rolling Review.
Arpit Mittal
Meta
Dr Arpit Mittal is the Machine Learning lead for the Facebook team focused on preventing on-platform behaviors that lead to physical or emotional harm to users. This includes problems involving Child Safety, Bullying and Harassment, Health Misinformation, and Extreme Personal Harm (Suicide and Self-Injury, Non-Consensual Intimate Imagery). Before joining Facebook, he was a Senior Machine Learning Scientist at Amazon Alexa, where he worked on projects involving knowledge extraction, information retrieval, and question answering. He received his PhD in Computer Vision and Machine Learning from the University of Oxford. He has been part of the organising committees for various major Natural Language Processing and Machine Learning conferences. Apart from the FEVER workshops, Arpit is also part of the founding committee of the Truth and Trust Online Conference.
Michael Schlichtkrull
Queen Mary University of London
Michael Schlichtkrull is a lecturer at Queen Mary University of London, and an affiliated lecturer at the University of Cambridge. His focus is on modelling structured data for NLP tasks, including relational link prediction, fact verification, and question answering. Michael received his PhD from the University of Amsterdam in 2021. In recent years he has also spent time as a visiting researcher at the University of Edinburgh, where he worked on graph neural networks for relational link prediction and question answering, as well as on interpretability for graph neural networks.
James Thorne
KAIST AI
James is an Assistant Professor at the KAIST AI Graduate School, South Korea, working on large-scale and knowledge-intensive natural language understanding. James completed his PhD at the University of Cambridge, where he developed models and methods for automated fact verification and correction. He has also spent time at Amazon Alexa and Facebook AI Integrity, and has helped organise the FEVER workshop since 2018.
Chenxi Whitehouse
Meta
Chenxi Whitehouse is a research scientist at Meta, focusing on Fundamental AI Research for LLMs. She previously worked as a postdoctoral researcher with Prof. Andreas Vlachos at the University of Cambridge and as an applied research scientist at Amazon AGI. Chenxi holds a PhD in knowledge-grounded NLP from City, University of London, and degrees in Electrical Engineering from the University of Erlangen-Nürnberg and University College London, and Information Engineering from Xi'an Jiaotong University.
Andreas Vlachos
University of Cambridge
Andreas is a Senior Lecturer at the University of Cambridge, working at the intersection of Natural Language Processing and Machine Learning. He has acted as an area co-chair for EACL 2017, EMNLP 2017, ACL 2019, EMNLP 2019, and CoNLL 2019, and as a senior area chair for COLING 2018. His work on automated verification has been covered by international media, including the New York Times, and he has been invited to speak on the topic at a number of public events, such as the Internet Governance Forum. Apart from the FEVER workshops in 2018, 2019, and 2020, he has also organised the 3rd Workshop on Structured Prediction for NLP, co-located with NAACL 2019.