I am a first-year PhD student trying to discover how the interplay of NLP and robotics can mutually benefit both fields. I am co-advised by Alessandro Roncone as part of the HIRO group and by Katharina Kann as part of the NALA group. Prior to CU Boulder, I completed both my Bachelor of Applied Science in Computer Engineering and my Master of Computer Science at the University of Toronto under Frank Rudzicz, with a focus on NLP. Currently trying to find the right balance between refreshing 3090 stock pages and climbing.
Research Direction:
Natural language is the most organic and generalizable way for humans to specify a task, provide new information, and convey intentions. Leveraging language for task specification and skill transfer would greatly increase the capability and transferability of current robots. However, current language models fail to understand language as humans do. We hypothesize that, because they are trained solely on text, their lack of real-world experience inherently limits their capacity for human-like language understanding. To this end, we aim to bridge the gap between robotics and NLP to produce robots that can act and learn through language, and that in turn generate the real-world experiences needed to develop a richer understanding of language.
Selected Publications:
On Losses for Modern Language Models
Aroca-Ouellette, Stéphane,
and Rudzicz, Frank
In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)
2020
BERT set many state-of-the-art results over varied NLU benchmarks by pre-training over two tasks: masked language modelling (MLM) and next sentence prediction (NSP), the latter of which has been highly criticized. In this paper, we 1) clarify NSP’s effect on BERT pre-training, 2) explore fourteen possible auxiliary pre-training tasks, of which seven are novel to modern language models, and 3) investigate different ways to include multiple tasks into pre-training. We show that NSP is detrimental to training due to its context splitting and shallow semantic signal. We also identify six auxiliary pre-training tasks – sentence ordering, adjacent sentence prediction, TF prediction, TF-IDF prediction, a FastSent variant, and a Quick Thoughts variant – that outperform a pure MLM baseline. Finally, we demonstrate that using multiple tasks in a multi-task pre-training framework provides better results than using any single auxiliary task. Using these methods, we outperform BERT-Base on the GLUE benchmark using fewer than a quarter of the training tokens.
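To make the multi-task idea concrete, below is a minimal sketch of a single pre-training step that adds one auxiliary sentence-ordering loss on top of the MLM loss. The auxiliary head, example text, learning rate, and equal task weighting are illustrative assumptions, not the paper's exact configuration.

```python
# Minimal multi-task pre-training step: MLM loss + a hypothetical sentence-ordering loss.
import torch
import torch.nn as nn
from transformers import BertForMaskedLM, BertTokenizerFast

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
model = BertForMaskedLM.from_pretrained("bert-base-uncased")
model.train()
order_head = nn.Linear(model.config.hidden_size, 2)   # hypothetical sentence-ordering head
optimizer = torch.optim.AdamW(list(model.parameters()) + list(order_head.parameters()), lr=1e-5)

# Two adjacent sentences, one token masked out, presented in their original order.
sent_a = "The robot picked up the [MASK] from the table."
sent_b = "Then it placed the cup on the shelf."
enc = tokenizer(sent_a, sent_b, return_tensors="pt")

# MLM labels: -100 everywhere except the masked position, which should predict "cup".
labels = torch.full_like(enc["input_ids"], -100)
labels[enc["input_ids"] == tokenizer.mask_token_id] = tokenizer.convert_tokens_to_ids("cup")
in_order = torch.tensor([1])                           # 1 = original order, 0 = swapped

out = model(**enc, labels=labels, output_hidden_states=True)
cls_vec = out.hidden_states[-1][:, 0]                  # [CLS] representation
order_loss = nn.functional.cross_entropy(order_head(cls_vec), in_order)
loss = out.loss + order_loss                           # equal task weighting is an assumption

loss.backward()
optimizer.step()
optimizer.zero_grad()
print(f"mlm: {out.loss.item():.3f}  ordering: {order_loss.item():.3f}")
```

In practice the auxiliary signal would be computed over batches of real corpus pairs (with the order randomly swapped for negatives); this sketch only shows how the two losses are combined into one update.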
PROST: Physical Reasoning about Objects through Space and Time
Aroca-Ouellette, Stéphane,
Paik, Cory,
Roncone, Alessandro,
and Kann, Katharina
In Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021
2021
We present a new probing dataset named PROST: Physical Reasoning about Objects Through Space and Time.
This dataset contains 18,736 multiple-choice questions made from 14 manually curated templates, covering 10 physical reasoning concepts. All questions are designed to probe both causal and masked language models in a zero-shot setting. We conduct an extensive analysis which demonstrates that state-of-the-art pretrained models are inadequate at physical reasoning: they are influenced by the order in which answer options are presented to them, they struggle when the superlative in a question is inverted (e.g., most ↔ least), and increasing the amount of pretraining data and parameters only yields minimal improvements. These results provide support for the hypothesis that current pretrained models’ ability to reason about physical interactions is inherently limited by a lack of real-world experience. By highlighting these limitations, we hope to motivate the development of models with a human-like understanding of the physical world.
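As an illustration of the zero-shot multiple-choice setup, the sketch below scores each answer option by the log-likelihood a causal language model assigns to it given the question, then picks the highest-scoring option. The example question, prompt format, and choice of GPT-2 are assumptions for the illustration and are not items from PROST.

```python
# Zero-shot multiple-choice scoring with a causal LM (illustrative sketch).
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def option_logprob(context: str, option: str) -> float:
    """Sum of log-probabilities assigned to the option's tokens, conditioned on the context."""
    ctx_ids = tokenizer(context, return_tensors="pt").input_ids
    full_ids = tokenizer(context + " " + option, return_tensors="pt").input_ids
    with torch.no_grad():
        log_probs = torch.log_softmax(model(full_ids).logits[0, :-1], dim=-1)
    # The token at position i is predicted by the logits at position i - 1.
    option_positions = range(ctx_ids.shape[1], full_ids.shape[1])
    return sum(log_probs[i - 1, full_ids[0, i]].item() for i in option_positions)

# Hypothetical physical-reasoning question, not drawn from the dataset.
question = ("A ball, a pillow, an egg, and a brick are dropped onto a concrete floor. "
            "Which object is the most likely to break? Answer: the")
options = ["ball", "pillow", "egg", "brick"]
scores = {opt: option_logprob(question, opt) for opt in options}
print(max(scores, key=scores.get))
```

The same likelihood-based scoring extends to masked models by placing a mask token where the answer goes and comparing the probabilities assigned to each option at that position.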
Feel free to send me an email if you have any questions about my work (include [Q] in the subject) or are interested in collaborating (include [C] in the subject).