SAT: Dynamic Spatial Aptitude Training for Multimodal Language Models
Arijit Ray1,
Jiafei Duan2 †,
Ellis Brown2 †,
Reuben Tan1,4,
Dina Bashkirova1,
Rose Hendrix3,
Kiana Ehsani3,
Aniruddha Kembhavi3,
Bryan A. Plummer1,
Ranjay Krishna2,3*,
Kuo-Hao Zeng3*,
Kate Saenko1*
1Boston University,
2University of Washington,
3Allen AI,
4Microsoft Research (MSR),
5New York University
*equal advising, † joint second author
Abstract
Reasoning about motion and space is a fundamental cognitive capability that is required by multiple real-world applications. While many studies highlight that large multimodal language models (MLMs) struggle to reason about space, they only focus on static spatial relationships and not dynamic awareness of motion and space---i.e. reasoning about the effect of egocentric and object motions on spatial relationships. Manually annotating such object and camera movements is expensive. Hence, we introduce SAT, a simulated spatial aptitude training dataset comprising both static and dynamic spatial reasoning across 175K question-answer (QA) pairs and 20K scenes. Complementing this, we also construct a small (150 image-QAs) yet challenging dynamic spatial test set using real-world images. Leveraging our SAT datasets and 6 existing static spatial benchmarks, we systematically investigate what improves both static and dynamic spatial awareness. Our results reveal that simulations are surprisingly effective at imparting spatial aptitude to MLMs that translate to real images. We show that perfect annotations in simulation are more effective than existing approaches of pseudo-annotating real images. For instance, SAT training improves a LLaVA-13B model by an average 11% and a LLaVA-Video-7B model by an average 8% on multiple spatial benchmarks, including our real-image dynamic test set and spatial reasoning on long videos---even outperforming some large proprietary models. While reasoning over static relationships improves with synthetic training data, there is still considerable room for improvement for dynamic reasoning questions.
Approach
We take actions in a 3D simulator and use privileged 3D information about the assets, together with natural language descriptions of those assets, to generate question-answer pairs based on how the 3D scene changes as actions are taken.
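As a rough illustration of this pipeline, the sketch below shows how one dynamic spatial QA pair could be derived from privileged 3D state: a left/right relation between the camera and an object is computed before and after a simulated egocentric turn, and a templated question-answer pair is emitted. The function names, coordinate convention (z-up, yaw counterclockwise from +x), and question template are illustrative assumptions, not the released SAT generation code.

# Minimal sketch (not the authors' released pipeline): derive one dynamic
# spatial QA pair from privileged 3D simulator state. Object positions,
# the camera pose, and the question template below are hypothetical.
import numpy as np

def left_or_right(cam_pos, cam_yaw_deg, obj_pos):
    # Heading vector in the ground plane (z-up, yaw counterclockwise from +x).
    yaw = np.radians(cam_yaw_deg)
    heading = np.array([np.cos(yaw), np.sin(yaw)])
    to_obj = np.array(obj_pos[:2]) - np.array(cam_pos[:2])
    # 2D cross product: positive means the object lies to the camera's left.
    cross = heading[0] * to_obj[1] - heading[1] * to_obj[0]
    return "left" if cross > 0 else "right"

def make_dynamic_qa(obj_name, obj_pos, cam_pos, cam_yaw_deg, turn_deg):
    # Relation before and after an egocentric turn (positive turn_deg = left turn).
    before = left_or_right(cam_pos, cam_yaw_deg, obj_pos)
    after = left_or_right(cam_pos, cam_yaw_deg + turn_deg, obj_pos)
    question = (f"The {obj_name} is to your {before}. If you turn "
                f"{abs(turn_deg)} degrees {'left' if turn_deg > 0 else 'right'}, "
                f"which side will it be on?")
    return {"question": question, "answer": after}

# Example: a mug 2 m ahead and 1 m to the left of the camera; after a
# 90-degree left turn it ends up on the camera's right.
print(make_dynamic_qa("mug", obj_pos=(2.0, 1.0, 0.8),
                      cam_pos=(0.0, 0.0, 1.5), cam_yaw_deg=0.0, turn_deg=90))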
Simulated Training Data
Results
Both open-source MLMs and large proprietary models struggle on dynamic spatial reasoning
Simulated SAT data improves spatial performance on real static benchmarks
Fine-tuning on image-based SAT also improves performance on videos, e.g., VSI-Bench (Yang et al., 2024)
Dynamic movements further help static perception!
Annotation quality is more important than realism!
BibTeX
@misc{ray2025satdynamicspatialaptitude,
title={SAT: Dynamic Spatial Aptitude Training for Multimodal Language Models},
author={Arijit Ray and Jiafei Duan and Ellis Brown and Reuben Tan and Dina Bashkirova and Rose Hendrix and Kiana Ehsani and Aniruddha Kembhavi and Bryan A. Plummer and Ranjay Krishna and Kuo-Hao Zeng and Kate Saenko},
year={2025},
eprint={2412.07755},
archivePrefix={arXiv},
primaryClass={cs.CV},
url={https://arxiv.org/abs/2412.07755},
}