CopeNLU is a Natural Language Processing research group led by Isabelle Augenstein and Pepa Atanasova, focused on methods for tasks that require a deep understanding of language, as opposed to shallow processing. We are affiliated with the Natural Language Processing Section and the Pioneer Centre for AI at the Department of Computer Science, University of Copenhagen. Our core methodology research includes, among other topics, learning with limited training data and explainable AI, as well as applications thereof to tasks such as fact checking, gender bias detection, and question answering. The group is partly funded by an ERC Starting Grant on Explainable and Robust Automatic Fact Checking, as well as a Sapere Aude Research Leader fellowship on 'Learning to Explain Attitudes on Social Media'.
Interests
- Natural Language Understanding
- Learning with Limited Labelled Data
- Explainable AI
- Fact Checking
- Question Answering
- Gender Bias Detection
News
Funded PhD and postdoc positions for start in Autumn 2026
8 Papers Accepted to EMNLP 2025
PhD fellowships for start in Spring or Autumn 2026
3 Papers to be Presented at ACL 2025
4 Papers Accepted to NAACL 2025
5 Papers Accepted to EMNLP 2024
PhD fellowship on Interpretable Machine Learning available
Pepa has been appointed as a Tenure-Track Assistant Professor
Participate in research on explainable fact checking
Outstanding paper award at EACL 2024
People
Featured Publications
Factuality Challenges in the Era of Large Language Models
Recent Publications
Stress Testing Factual Consistency Metrics for Long-Document Summarization
Evaluation Framework for Highlight Explanations of Context Utilisation in Language Models
Expanding Computation Spaces of LLMs at Inference Time
Multi-Step Knowledge Interaction Analysis via Rank-2 Subspace Disentanglement
A Meta-Evaluation of Style and Attribute Transfer Metrics
Explainability and Interpretability of Multilingual Large Language Models: A Survey
FLARE: Faithful Logic-Aided Reasoning and Exploration
Graph-Guided Textual Explanation Generation Framework
Multi-Modal Framing Analysis of News
Presumed Cultural Identity: How Names Shape LLM Responses
Recent Posts
Pre-ACL 2025 Workshop in Copenhagen
ALPS 2021 tutorial 'Explainability for NLP'
EMNLP 2020 Beer Garden Meetup
Interested in joining us at the University of Copenhagen?
Projects
Learning with Limited Labelled Data
Learning with limited labelled data, including multi-task learning, weakly supervised and zero-shot learning
Stance Detection and Fact Checking
Determining the attitude expressed in a text towards a topic, and using this for automatic evidence-based fact checking
Explainable Machine Learning
Explaining relationships between inputs and outputs of black-box machine learning models
Social Bias Detection
Detecting social biases such as gender and racial bias, in text as well as in language models
Multilingual Learning and Multicultural Learning
Training models to work well for multiple languages and cultures, including low-resource ones
Scholarly Data Processing
Automatically processing scholarly data to assist researchers in finding publications, writing better papers, or tracking their impact
Question Answering
Answering questions automatically, including in conversational settings
Knowledge Base Population
Extracting information about entities, phrases, and relations between them from text to populate knowledge bases
Contact
- augenstein@di.ku.dk
- Øster Voldgade 3, Copenhagen, Denmark