Event2Mind
Commonsense Inference on Events, Intents and Reactions
Quick links: [ACL paper] [ACL poster] [download dataset] [AllenNLP demo] [contact us]
Knowledge Graph Browser
The knowledge graph browser lets you select an event (or type to search for one) and view our crowdsourced annotations for it.
Annotation explanation
PersonX's intent: What are the likely reasons for PersonX causing the event (if any)?
PersonX's reaction: How might PersonX likely feel after the event?
Other people's reaction: How might other (perhaps implied) participants feel after the event?
PersonX (PersonY, ...) represent event participants.
____ is a placeholder for possible words or phrases that could be used with the event.
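For instance, the paper's running example, "PersonX drinks coffee in the morning", might be annotated roughly as sketched below. This is an illustrative Python snippet; the field names are ours, not the dataset's official schema.

    # Illustrative annotation for one event (field names are ours, not the official schema).
    example_annotation = {
        "event": "PersonX drinks coffee in the morning",
        "personx_intent": ["to stay awake"],   # likely reasons for PersonX causing the event
        "personx_reaction": ["alert"],         # how PersonX likely feels after the event
        "others_reaction": [],                 # no other participants are implied for this event
    }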
Project description
In Event2Mind, we explore the task of understanding stereotypical intents and reactions to events. Through crowdsourcing, we create a large corpus of 25,000 events with free-form descriptions of the intents and reactions of both the event's subject and (potentially implied) other participants.
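If you download the dataset, a few lines of Python are enough to inspect it. This is a minimal sketch: the file path and the column names (Event, Xintent, Xemotion, Otheremotion) are assumptions about the released CSV files and may differ from the actual release.

    import pandas as pd

    # Load one split of the crowdsourced corpus (file path and column names are assumptions).
    train = pd.read_csv("event2mind/train.csv")
    row = train.iloc[0]
    print(row["Event"])         # a free-form event phrase, e.g. "PersonX drinks coffee in the morning"
    print(row["Xintent"])       # crowdsourced intents of PersonX
    print(row["Xemotion"])      # crowdsourced reactions of PersonX
    print(row["Otheremotion"])  # crowdsourced reactions of other (implied) participants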
We then train a neural network that can generate intents and reactions for unseen events. A demo of that system is available on the AllenNLP demo page.
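If you prefer to run the model locally rather than through the demo page, the sketch below shows how a pretrained Event2Mind predictor could be loaded via AllenNLP. The archive URL, the "source" input key, and the availability of the model all depend on an older AllenNLP release that shipped Event2Mind, so treat them as assumptions.

    from allennlp.predictors.predictor import Predictor

    # Load a pretrained Event2Mind archive (URL is illustrative; requires an
    # AllenNLP version that still includes the Event2Mind model and predictor).
    predictor = Predictor.from_path(
        "https://allennlp.s3.amazonaws.com/models/event2mind-2018.10.26.tar.gz"
    )

    # The predictor is assumed to accept the event phrase under the "source" key.
    output = predictor.predict_json({"source": "PersonX drinks coffee in the morning"})
    print(output.keys())  # decoded intent and reaction sequences for PersonX and others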
As a case study, we then investigate character portrayal in movies. Using our neural inference model, we computationally generate motivations and reactions for each character's actions, which we then correlate with the character's gender. Our findings demonstrate that Event2Mind-style inference can help computationally uncover gender inequality in movies.
Read our paper for more:
Hannah Rashkin, Maarten Sap, Emily Allaway, Noah A. Smith, and Yejin Choi (2018). Event2Mind: Commonsense Inference on Events, Intents and Reactions. ACL. [view pdf]
Abstract
We investigate a new commonsense inference task: given an event described in a short free-form text ("X drinks coffee in the morning"), a system reasons about the likely intents ("X wants to stay awake") and reactions ("X feels alert") of the event’s participants. To support this study, we construct a new crowdsourced corpus of 25,000 event phrases covering a diverse range of everyday events and situations. We report baseline performance on this task, demonstrating that neural encoder-decoder models can successfully compose embedding representations of previously unseen events and reason about the likely intents and reactions of the event participants. In addition, we demonstrate how commonsense inference on people’s intents and reactions can help unveil the implicit gender inequality prevalent in modern movie scripts.