UnImplicit: The First Workshop on
Understanding Implicit and Underspecified Language
Workshop to be held online in conjunction with ACL-IJCNLP 2021
Recent developments in NLP have led to excellent performance on various semantic tasks. However, an important open question is whether such methods actually model how linguistic meaning is shaped and influenced by context, or whether they simply learn superficial patterns that reflect only explicitly stated aspects of meaning. An interesting case in point is the interpretation and understanding of implicit or underspecified language.
More concretely, language utterances may contain empty or fuzzy elements, such as:
- units of measurement, as in "she is 30" vs. "it costs 30" (30 what?);
- bridges and other missing links, as in "she tried to enter the car, but the door was stuck" (the door of what?);
- implicit semantic roles, as in "I met her while driving" (who was driving?);
- various sorts of gradable phenomena: is a "small elephant" smaller than a "big bee"? Where is the boundary between "orange" and "red"?
Implicit and underspecified phenomena have been studied in linguistics and philosophy for decades (Sag, 1976; Heim, 1982; Ballmer and Pinkal, 1983), but empirical studies in NLP remain few and far between. The number of datasets and task proposals is, however, growing (Roesiger et al., 2018; Elazar and Goldberg, 2019; Ebner et al., 2020; McMahan and Stone, 2020), and recent studies have shown the difficulty of annotating and modeling implicit and underspecified phenomena (Shwartz and Dagan, 2016; Scholman and Demberg, 2017; Webber et al., 2019).
Implicit and underspecified terms pose serious challenges to standard natural language processing models: handling them often requires incorporating broader context, applying symbolic inference and common-sense reasoning, or, more generally, going beyond strictly lexical and compositional meaning constructs. This challenge spans all phases of an NLP model's life cycle: from collecting and annotating relevant data, through devising computational methods for modelling such phenomena, to designing proper evaluation metrics.
Furthermore, most existing efforts in NLP are concerned with one particular problem each, their benchmarks are narrow in size and scope, and no common platform or standards exist for studying effects on downstream tasks. In our opinion, interpreting implicit and underspecified language is an inherent part of natural language understanding: these elements are essential for human-like interpretation, and modeling them may be critical for downstream applications.
The goal of this workshop is to bring together theoreticians and practitioners from the entire NLP cycle, from annotation and benchmarking to modeling and applications, and to provide an umbrella for the development, discussion and standardization of the study of implicit and underspecified language. We solicit papers on the following topics, among others:
- Verb-phrase ellipsis and syntactic gaps
- Implicit semantic roles and semantic relations
- Bridging anaphora
- Gradable/imprecise terms
- Fused heads
- Other phenomena that involve underspecification or implicit information
Invited Speakers
Martha Palmer
University of Colorado at Boulder
This talk will discuss symbolic representations of sentences in context, ranging from Universal Dependencies to Abstract Meaning Representations (AMRs), and examine their capability for capturing certain aspects of meaning. A main focus will be the ways in which AMRs can be expanded to encompass figurative language, the recovery of implicit arguments, and relations between events. These examples will be primarily in English, and indeed some features of AMR are fairly English-centric. The talk will conclude by introducing Uniform Meaning Representations, a multi-sentence annotation scheme that revises AMRs to make them more suitable for other languages, especially low-resource languages, and expands the annotation guidelines to include Number, Tense, Aspect and Modality as well as Temporal Relations.
Chris Potts
Stanford University
Questions Under Discussion (QUDs) have come to occupy a central place in theories of pragmatics. QUDs are abstract, implicit questions that evolve along with the discourse, determining what is relevant and shaping speaker choices and listener inferences. QUDs have been identified as a factor in numerous diverse phenomena, including discourse particles, presuppositions, intonational meaning, and conversational implicatures. How can we leverage these insights to create better large-scale NLP systems? In this talk, I'll survey a range of approaches to modeling (or approximating) QUDs in ways that are scalable and easy to integrate into standard NLP architectures. In addition, I'll identify NLP tasks that can clearly benefit from QUDs. Free-form dialogue applications tend to come to mind first, and the value of QUDs for such tasks seems clear, so I will concentrate on simpler generation tasks – especially image description and question answering – that can benefit from QUD-based control.
Shared Task
As part of the workshop, we are organizing a shared task on implicit and underspecified language. The focus of the task is on modeling the necessity of clarifications due to aspects of meaning that are implicit or underspecified in context. Specifically, the task setting follows the recent proposal of predicting revision requirements in collaboratively edited instructions (Bhat et al., 2020). The data consists of instances from wikiHowToImprove (Anthonio et al., 2020) in which a revision resolved an implicit or underspecified linguistic element. The following revision types are part of the data:
- Replacements of pronouns with more precise noun phrases
- Replacements of 'do' as a full verb with more precise verbs
- Insertions of optional verbal phrase complements
- Insertions of adverbial and adjectival modifiers
- Insertions of logical quantifiers and modal verbs
Final training and development sets are available here:
Access to the test data requires registration as a participant. If you are interested in participating in the shared task, please contact Michael Roth.
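To make the task setting concrete, here is a minimal baseline sketch that treats it as binary sentence classification: given a sentence from a wikiHow article, predict whether it requires a clarifying revision. The file names, column names, and TSV layout below are hypothetical placeholders rather than the official data format, and this is an illustrative sketch, not an organizer-provided baseline.

```python
# Minimal baseline sketch for the clarification-requirement task.
# Assumes TSV files with columns "sentence" and "label"
# (1 = later revised to resolve an implicit/underspecified element,
#  0 = left unchanged); these names are hypothetical placeholders.
import csv

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.pipeline import make_pipeline


def read_split(path):
    """Read (sentence, label) pairs from a tab-separated file."""
    texts, labels = [], []
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f, delimiter="\t"):
            texts.append(row["sentence"])
            labels.append(int(row["label"]))
    return texts, labels


train_texts, train_labels = read_split("train.tsv")
dev_texts, dev_labels = read_split("dev.tsv")

# Bag-of-words baseline: word n-grams can pick up surface cues of
# underspecification (pronouns, bare "do", missing complements).
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), min_df=2),
    LogisticRegression(max_iter=1000),
)
model.fit(train_texts, train_labels)
print(classification_report(dev_labels, model.predict(dev_texts)))
```

Such a lexical baseline deliberately ignores document context; since many revision requirements depend on surrounding sentences, contextualized models are a natural next step.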
Workshop Program
(all times shown in UTC+2)
| 16:50 | Opening |
| 17:00 | Invited talk: Martha Palmer [slides] [video] |
| 18:00 | Poster session I |
| | Let's be explicit about that: Distant supervision for implicit discourse relation classification via connective prediction [paper] (Murathan Kurfalı and Robert Östling) |
| | Implicit Phenomena in Short-answer Scoring Data [paper] (Marie Bexte, Andrea Horbach and Torsten Zesch) |
| | Evaluation Guidelines to Deal with Implicit Phenomena to Assess Factuality in Data-to-Text Generation [paper] (Roy Eisenstadt and Michael Elhadad) |
| | UnImplicit Shared Task Report: Detecting Clarification Requirements in Instructional Text [paper] (Michael Roth and Talita Anthonio) |
| Abstracts | Is Sluice Resolution really just Question Answering? (Peratham Wiriyathammabhum) |
| | Decontextualization: Making Sentences Stand-Alone (Eunsol Choi, Jennimaria Palomaki, Matthew Lamm, Tom Kwiatkowski, Dipanjan Das and Michael Collins) |
| | (Re)construing meaning in NLP (Sean Trott, Tiago Timponi Torrent, Nancy Chang and Nathan Schneider) |
| | Modelling Entity Implicature based on Systemic Functional Linguistics (Hawre Hosseini, Mehran Mansouri and Ebrahim Bagheri) |
| | Meaning Representation of Numeric Fused-Heads in UCCA (Ruixiang Cui and Daniel Hershcovich) |
| | Underspecification in Executable Instructions (Valentina Pyatkin, Royi Lachmy and Reut Tsarfaty) |
| Findings | Investigating Transfer Learning in Multilingual Pre-trained Language Models through Chinese Natural Language Inference (Hai Hu, Yiwen Zhang, Yina Patterson, Yanting Li, Yixin Nie and Kyle Richardson) |
| 19:00 | Working group session I (discussion / presentation) |
| | A. Challenges and best practices in data collection and annotation of implicit phenomena |
| | B. What is the range of implicit phenomena? (produce a taxonomy) |
| | C. What are the next steps in implicit and underspecified language research? |
| 20:00 | Poster session II |
| | Improvements and Extensions on Metaphor Detection [paper] (Weicheng Ma, Ruibo Liu, Lili Wang and Soroush Vosoughi) |
| | Human-Model Divergence in the Handling of Vagueness [paper] (Elias Stengel-Eskin, Jimena Guallar-Blasco and Benjamin Van Durme) |
| | TTCB System Description to a Shared Task on Implicit and Underspecified Language 2021 [paper] (Peratham Wiriyathammabhum) |
| | A Mention-Based System for Revision Requirements Detection [paper] (Ahmed Ruby, Christian Hardmeier and Sara Stymne) |
| Abstracts | Superlatives in Discourse: Explicit and Implicit Domain Restrictions for Superlatives (Valentina Pyatkin, Ido Dagan and Reut Tsarfaty) |
| | Transformer-based language models and complement coercion: Experimental studies (Yuling Gu) |
| | Large Scale Crowdsourcing of Noun-Phrase Links (Victoria Basmov, Yanai Elazar, Yoav Goldberg and Reut Tsarfaty) |
| | Variation in conventionally implicated content: An empirical study in English and German (Annette Hautli-Janisz and Diego Frassinelli) |
| | Challenges in Detecting Null Relativizers in African American Language for Sociolinguistic and Psycholinguistic Applications (Anissa Neal, Brendan O'Connor and Lisa Green) |
| | Incorporating Human Explanations for Robust Hate Speech Detection (Jennifer Chen, Faisal Ladhak, Daniel Li and Noémie Elhadad) |
| Findings | John praised Mary because _he_? Implicit Causality Bias and Its Interaction with Explicit Cues in LMs (Yova Kementchedjhieva, Mark Anderson and Anders Søgaard) |
| 20:55 | Invited talk: Chris Potts [slides] [video] |
| 22:05 | Working group session II (discussion / presentation) |
| | D. ML-based modeling of different implicit-language phenomena/tasks |
| | E. What are the existing and/or possible NLP tasks around implicit phenomena? |
| | F. What would be a good shared task around implicit and underspecified language? |
| 23:00 | (Official) closing |
Important Dates
- December 21, 2020: First call for workshop papers
- February 15, 2021: Second call for workshop papers
- April 14, 2021: Start of shared task evaluation
- April 30, 2021: Regular workshop paper due date
- May 3, 2021: End of shared task evaluation
- May 21, 2021: Shared task papers due date
- May 28, 2021: Notification of acceptance
- June 7, 2021: Camera-ready papers due
- August 5, 2021: Workshop date
Organizers
- Michael Roth, Stuttgart University
- Reut Tsarfaty, Bar-Ilan University
- Yoav Goldberg, Bar-Ilan University and AI2
Program Committee
- Omri Abend, Hebrew University of Jerusalem
- Johan Bos, University of Groningen
- Nancy Chang, Google
- Vera Demberg, Saarland University
- Katrin Erk, University of Texas at Austin
- Annemarie Friedrich, Bosch Center for Artificial Intelligence
- Dan Goldwasser, Purdue University
- Yufang Hou, IBM Research Ireland
- Ruihong Huang, Texas A&M University
- Mirella Lapata, University of Edinburgh
- Junyi Jessy Li, University of Texas at Austin
- Ray Mooney, University of Texas at Austin
- Philippe Muller, University of Toulouse
- Vincent Ng, University of Texas at Dallas
- Tim O'Gorman, University of Massachusetts Amherst
- Karl Pichotta, Memorial Sloan Kettering Cancer Center
- Massimo Poesio, Queen Mary University of London
- Niko Schenk, Amazon
- Nathan Schneider, Georgetown University
- Vered Shwartz, Allen Institute for AI & University of Washington
- Elior Sulem, University of Pennsylvania
- Sara Tonelli, Fondazione Bruno Kessler
- Ben Van Durme, Johns Hopkins University & Microsoft Semantic Machines
- Luke Zettlemoyer, University of Washington & Facebook