[Submitted on 14 Oct 2022 (v1), last revised 12 Apr 2023 (this version, v5)]
Title: SQA3D: Situated Question Answering in 3D Scenes
Authors: Xiaojian Ma and 6 other authors
Abstract: We propose a new task to benchmark scene understanding of embodied agents: Situated Question Answering in 3D Scenes (SQA3D). Given a scene context (e.g., a 3D scan), SQA3D requires the tested agent to first understand its situation (position, orientation, etc.) in the 3D scene as described by text, then reason about its surrounding environment and answer a question under that situation. Based upon 650 scenes from ScanNet, we provide a dataset centered around 6.8k unique situations, along with 20.4k descriptions and 33.4k diverse reasoning questions for these situations. These questions examine a wide spectrum of reasoning capabilities for an intelligent agent, ranging from spatial relation comprehension to commonsense understanding, navigation, and multi-hop reasoning. SQA3D poses a significant challenge to current multi-modal models, especially those for 3D reasoning. We evaluate various state-of-the-art approaches and find that the best one achieves an overall score of only 47.20%, while amateur human participants can reach 90.06%. We believe SQA3D could facilitate future embodied AI research with stronger situation understanding and reasoning capability.
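To make the task structure concrete, here is a minimal sketch of what a single SQA3D item might look like, based only on the abstract's description (scene context, situation described by text, question, answer). The field names and the example record are hypothetical illustrations, not the dataset's actual schema.

```python
from dataclasses import dataclass

@dataclass
class SQA3DExample:
    """One situated QA item, following the task structure described in the abstract.

    Field names are illustrative; they do not reflect the dataset's real schema.
    """
    scene_id: str   # ScanNet scene providing the 3D scan context
    situation: str  # text describing the agent's position/orientation in the scene
    question: str   # question to be answered under that situation
    answer: str     # ground-truth answer

# A made-up example in the spirit of the task description:
example = SQA3DExample(
    scene_id="scene0000_00",
    situation="I am standing in front of the sofa, facing the window.",
    question="What is on my left?",
    answer="a table",
)
```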
| Comments: | ICLR 2023. First two authors contributed equally. Project website: this https URL |
| Subjects: | Computer Vision and Pattern Recognition (cs.CV); Artificial Intelligence (cs.AI); Computation and Language (cs.CL); Machine Learning (cs.LG) |
| Cite as: | arXiv:2210.07474 [cs.CV] |
| | (or arXiv:2210.07474v5 [cs.CV] for this version) |
| | https://doi.org/10.48550/arXiv.2210.07474 (arXiv-issued DOI via DataCite) |
Submission history
From: Xiaojian Ma
[v1] Fri, 14 Oct 2022 02:52:26 UTC (22,610 KB)
[v2] Sat, 22 Oct 2022 15:25:26 UTC (22,610 KB)
[v3] Sat, 11 Feb 2023 01:57:41 UTC (22,590 KB)
[v4] Wed, 22 Feb 2023 08:25:24 UTC (22,590 KB)
[v5] Wed, 12 Apr 2023 20:05:41 UTC (22,590 KB)