GLAMOR: Grounding Language in Actions, Multimodal Observations and Robots
Welcome to the GLAMOR Lab ✨ at the University of Southern California! We bring together natural language processing and robotics to connect language to the world (RoboNLP). Our lab is broadly interested in connecting language to agent perception and action, and in lifelong learning through interaction.
Research Areas
Language & Perception
Language paired with sensory perception such as vision, audio, and haptics. This scope includes audio-visual speech recognition, visual dialog, and recognizing that "heavy" implies increased physical weight.
Embodiment & Action
Language paired with, or leading to, actions in the world. This scope includes learning that "left" corresponds to a spatial orientation, and that "it's hot" is a pragmatic warning against physically touching an object.
The Social World
Language is what language does, and so using language in social contexts to change others' behavior and states of mind is the highest scope of grounded natural language use.
News
- 07/2025 Two papers accepted to CoRL 2025!
- 01/2025 Two papers accepted to NAACL 2025!
- 10/2024 One paper each at EMNLP and CoRL!
- 08/2024 Tejas has been selected as an Amazon ML Fellow by the USC-Amazon Center on Secure and Trusted Machine Learning!
- 05/2024 One paper accepted to RSS 2024 and one paper accepted to ACL Findings 2024!
- 03/2024 New pre-print on multi-agent task planning!
- 03/2024 Three papers accepted to NAACL 2024!
- 02/2024 New pre-prints on visual document understanding and selective prediction for vision-language reasoning!
- 10/2023 Chain-of-Questions has been accepted to EMNLP 2023!
- 05/2023 Self-supervised 3D Representations is a lightning talk at the ICRA Pretraining for Robotics Workshop!
- 05/2023 ViSaRL: Visual RL Guided By Human Saliency is a Spotlight Talk at the ICRA Pretraining for Robotics Workshop!
- 05/2023 One paper accepted to Interspeech 2023!
- 05/2023 One paper accepted to CoLLAs 2023!
- 05/2023 Leticia and Tejas received the Viterbi Graduate Mentorship and Viterbi Undergraduate Research Mentorship Awards, respectively!
- 04/2023 The lab participated in the USC Robotics Open House for middle and high school students.
- 04/2023 Curriculum Learning for Data-Efficient Vision-Language Alignment will be presented at the O-DRUM Workshop at CVPR 2023.
- 02/2023 New pre-print! We study the challenge of training embodied agents that can follow spoken instructions.
- 02/2023 IVLN has been accepted to CVPR 2023!
- 01/2023 Lee's paper on sign language phonology has been accepted to EACL 2023!
- 01/2023 ProgPrompt has been accepted to ICRA 2023!
- 12/2022 Transformer Adapters for Robot Learning is a Spotlight Talk at the Pretraining Robot Learning workshop at CoRL 2022!
- 11/2022 One paper accepted to EMNLP 2022! Generalization Differences between End-to-End and Neuro-Symbolic Vision-Language Reasoning Systems
- 10/2022 New VLN benchmark release! IVLN challenges agents to follow a language-guided tour of a home, enabling them to leverage persistent memory.
- 09/2022 New pre-print! ProgPrompt adapts LLMs for situated robot task planning by prompting them with pythonic programs.
- 09/2022 CLiMB 🧗‍♂️ was accepted to the NeurIPS 2022 Datasets and Benchmarks Track!
- 06/2022 Pre-print alert! We introduce CLiMB 🧗‍♂️, a new continual learning benchmark for vision-and-language tasks.
- 05/2022 REU research opportunity! Available Fall 2022-Spring 2023. More details.
- 04/2022 Prof. Thomason talked to high school students at Viterbi K-12 STEM Center about robotics at Robotics Ed Week.
