[2102.01860] L2C: Describing Visual Differences Needs Semantic Understanding of Individuals
Computer Science > Computer Vision and Pattern Recognition
arXiv:2102.01860 (cs)
[Submitted on 3 Feb 2021]
Title: L2C: Describing Visual Differences Needs Semantic Understanding of Individuals
Abstract: Recent advances in language and vision have pushed research forward from captioning a single image to describing the visual differences between image pairs. Given two images I_1 and I_2 and the task of generating a description W_{1,2} comparing them, existing methods directly model the { I_1, I_2 } -> W_{1,2} mapping without any semantic understanding of the individual images. In this paper, we introduce a Learning-to-Compare (L2C) model, which learns to understand the semantic structures of the two images and compare them while learning to describe each one. We demonstrate that L2C benefits from comparing explicit semantic representations and single-image captions, and that it generalizes better to unseen test image pairs. It outperforms the baseline on both automatic and human evaluation on the Birds-to-Words dataset.
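The abstract suggests a multi-task setup: a difference-captioning objective trained jointly with single-image captioning so that the model builds a semantic representation of each image before comparing them. Below is a minimal sketch of what such a joint objective could look like, assuming simple linear encoders, a concatenation-based fusion step, and a weighted sum of cross-entropy losses; every module name, dimension, and the weight `lam` is an illustrative assumption, not the paper's actual architecture.

```python
# Hypothetical sketch of a joint difference-captioning + single-image
# captioning objective, as hinted at by the abstract. Not the paper's
# actual L2C architecture.
import torch
import torch.nn as nn

class L2CSketch(nn.Module):
    def __init__(self, vocab_size: int, img_feat_dim: int = 2048, hid: int = 512):
        super().__init__()
        self.encoder = nn.Linear(img_feat_dim, hid)   # per-image semantic encoder
        self.compare = nn.Linear(2 * hid, hid)        # fuses the two image encodings
        self.diff_head = nn.Linear(hid, vocab_size)   # predicts difference-caption tokens
        self.cap_head = nn.Linear(hid, vocab_size)    # predicts single-image caption tokens

    def forward(self, img1_feats, img2_feats):
        h1 = self.encoder(img1_feats)
        h2 = self.encoder(img2_feats)
        fused = self.compare(torch.cat([h1, h2], dim=-1))
        return self.diff_head(fused), self.cap_head(h1), self.cap_head(h2)

def joint_loss(diff_logits, cap1_logits, cap2_logits,
               diff_tgt, cap1_tgt, cap2_tgt, lam: float = 0.5):
    ce = nn.CrossEntropyLoss()
    # Difference-caption loss plus auxiliary single-image captioning losses;
    # lam balances comparison against per-image description (assumed value).
    return (ce(diff_logits, diff_tgt)
            + lam * (ce(cap1_logits, cap1_tgt) + ce(cap2_logits, cap2_tgt)))
```

The intended effect of the auxiliary captioning terms is regularization: forcing the encoder to produce representations that can describe each image on its own, which the abstract credits for the improved generalization to new image pairs.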
| Comments: | EACL-2021 short |
| Subjects: | Computer Vision and Pattern Recognition (cs.CV); Computation and Language (cs.CL) |
| Cite as: | arXiv:2102.01860 [cs.CV] (or arXiv:2102.01860v1 [cs.CV] for this version) |
| DOI: | https://doi.org/10.48550/arXiv.2102.01860 (arXiv-issued DOI via DataCite) |
Full-text links:
- View PDF
- TeX Source