Natural Language Does Not Emerge ‘Naturally’ in Multi-Agent Dialog - ACL Anthology
Abstract
A number of recent works have proposed techniques for end-to-end learning of communication protocols among cooperative multi-agent populations, and have simultaneously found the emergence of grounded human-interpretable language in the protocols developed by the agents, learned without any human supervision! In this paper, using a Task & Talk reference game between two agents as a testbed, we present a sequence of ‘negative’ results culminating in a ‘positive’ one – showing that while most agent-invented languages are effective (i.e. achieve near-perfect task rewards), they are decidedly not interpretable or compositional. In essence, we find that natural language does not emerge ‘naturally’, despite the semblance of ease of natural-language-emergence that one may gather from recent literature. We discuss how it is possible to coax the invented languages to become more and more human-like and compositional by increasing restrictions on how two agents may communicate.
- Anthology ID:
- D17-1321
- Volume:
- Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing
- Month:
- September
- Year:
- 2017
- Address:
- Copenhagen, Denmark
- Editors:
- Martha Palmer, Rebecca Hwa, Sebastian Riedel
- Venue:
- EMNLP
- SIG:
- SIGDAT
- Publisher:
- Association for Computational Linguistics
- Note:
- Pages:
- 2962–2967
- Language:
- URL:
- https://aclanthology.org/D17-1321/
- DOI:
- 10.18653/v1/D17-1321
- Bibkey:
- kottur-etal-2017-natural
- Cite (ACL):
- Satwik Kottur, José Moura, Stefan Lee, and Dhruv Batra. 2017. Natural Language Does Not Emerge ‘Naturally’ in Multi-Agent Dialog. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2962–2967, Copenhagen, Denmark. Association for Computational Linguistics.
- Cite (Informal):
- Natural Language Does Not Emerge ‘Naturally’ in Multi-Agent Dialog (Kottur et al., EMNLP 2017)
- PDF:
- https://aclanthology.org/D17-1321.pdf
- Attachment:
- D17-1321.Attachment.zip
Export citation
@inproceedings{kottur-etal-2017-natural,
title = "Natural Language Does Not Emerge `Naturally' in Multi-Agent Dialog",
author = "Kottur, Satwik and
Moura, Jos{\'e} and
Lee, Stefan and
Batra, Dhruv",
editor = "Palmer, Martha and
Hwa, Rebecca and
Riedel, Sebastian",
booktitle = "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing",
month = sep,
year = "2017",
address = "Copenhagen, Denmark",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/D17-1321/",
doi = "10.18653/v1/D17-1321",
pages = "2962--2967",
    abstract = "A number of recent works have proposed techniques for end-to-end learning of communication protocols among cooperative multi-agent populations, and have simultaneously found the emergence of grounded human-interpretable language in the protocols developed by the agents, learned without any human supervision! In this paper, using a Task {\&} Talk reference game between two agents as a testbed, we present a sequence of `negative' results culminating in a `positive' one {--} showing that while most agent-invented languages are effective (i.e. achieve near-perfect task rewards), they are decidedly not interpretable or compositional. In essence, we find that natural language does not emerge `naturally', despite the semblance of ease of natural-language-emergence that one may gather from recent literature. We discuss how it is possible to coax the invented languages to become more and more human-like and compositional by increasing restrictions on how two agents may communicate."
}

<?xml version="1.0" encoding="UTF-8"?>
<modsCollection xmlns="http://www.loc.gov/mods/v3">
<mods ID="kottur-etal-2017-natural">
<titleInfo>
<title>Natural Language Does Not Emerge ‘Naturally’ in Multi-Agent Dialog</title>
</titleInfo>
<name type="personal">
<namePart type="given">Satwik</namePart>
<namePart type="family">Kottur</namePart>
<role>
<roleTerm authority="marcrelator" type="text">author</roleTerm>
</role>
</name>
<name type="personal">
<namePart type="given">José</namePart>
<namePart type="family">Moura</namePart>
<role>
<roleTerm authority="marcrelator" type="text">author</roleTerm>
</role>
</name>
<name type="personal">
<namePart type="given">Stefan</namePart>
<namePart type="family">Lee</namePart>
<role>
<roleTerm authority="marcrelator" type="text">author</roleTerm>
</role>
</name>
<name type="personal">
<namePart type="given">Dhruv</namePart>
<namePart type="family">Batra</namePart>
<role>
<roleTerm authority="marcrelator" type="text">author</roleTerm>
</role>
</name>
<originInfo>
<dateIssued>2017-09</dateIssued>
</originInfo>
<typeOfResource>text</typeOfResource>
<relatedItem type="host">
<titleInfo>
<title>Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing</title>
</titleInfo>
<name type="personal">
<namePart type="given">Martha</namePart>
<namePart type="family">Palmer</namePart>
<role>
<roleTerm authority="marcrelator" type="text">editor</roleTerm>
</role>
</name>
<name type="personal">
<namePart type="given">Rebecca</namePart>
<namePart type="family">Hwa</namePart>
<role>
<roleTerm authority="marcrelator" type="text">editor</roleTerm>
</role>
</name>
<name type="personal">
<namePart type="given">Sebastian</namePart>
<namePart type="family">Riedel</namePart>
<role>
<roleTerm authority="marcrelator" type="text">editor</roleTerm>
</role>
</name>
<originInfo>
<publisher>Association for Computational Linguistics</publisher>
<place>
<placeTerm type="text">Copenhagen, Denmark</placeTerm>
</place>
</originInfo>
<genre authority="marcgt">conference publication</genre>
</relatedItem>
<abstract>A number of recent works have proposed techniques for end-to-end learning of communication protocols among cooperative multi-agent populations, and have simultaneously found the emergence of grounded human-interpretable language in the protocols developed by the agents, learned without any human supervision! In this paper, using a Task &amp; Talk reference game between two agents as a testbed, we present a sequence of ‘negative’ results culminating in a ‘positive’ one – showing that while most agent-invented languages are effective (i.e. achieve near-perfect task rewards), they are decidedly not interpretable or compositional. In essence, we find that natural language does not emerge ‘naturally’, despite the semblance of ease of natural-language-emergence that one may gather from recent literature. We discuss how it is possible to coax the invented languages to become more and more human-like and compositional by increasing restrictions on how two agents may communicate.</abstract>
<identifier type="citekey">kottur-etal-2017-natural</identifier>
<identifier type="doi">10.18653/v1/D17-1321</identifier>
<location>
<url>https://aclanthology.org/D17-1321/</url>
</location>
<part>
<date>2017-09</date>
<extent unit="page">
<start>2962</start>
<end>2967</end>
</extent>
</part>
</mods>
</modsCollection>
%0 Conference Proceedings
%T Natural Language Does Not Emerge ‘Naturally’ in Multi-Agent Dialog
%A Kottur, Satwik
%A Moura, José
%A Lee, Stefan
%A Batra, Dhruv
%Y Palmer, Martha
%Y Hwa, Rebecca
%Y Riedel, Sebastian
%S Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing
%D 2017
%8 September
%I Association for Computational Linguistics
%C Copenhagen, Denmark
%F kottur-etal-2017-natural
%X A number of recent works have proposed techniques for end-to-end learning of communication protocols among cooperative multi-agent populations, and have simultaneously found the emergence of grounded human-interpretable language in the protocols developed by the agents, learned without any human supervision! In this paper, using a Task & Talk reference game between two agents as a testbed, we present a sequence of ‘negative’ results culminating in a ‘positive’ one – showing that while most agent-invented languages are effective (i.e. achieve near-perfect task rewards), they are decidedly not interpretable or compositional. In essence, we find that natural language does not emerge ‘naturally’, despite the semblance of ease of natural-language-emergence that one may gather from recent literature. We discuss how it is possible to coax the invented languages to become more and more human-like and compositional by increasing restrictions on how two agents may communicate.
%R 10.18653/v1/D17-1321
%U https://aclanthology.org/D17-1321/
%U https://doi.org/10.18653/v1/D17-1321
%P 2962-2967
Markdown (Informal)
[Natural Language Does Not Emerge ‘Naturally’ in Multi-Agent Dialog](https://aclanthology.org/D17-1321/) (Kottur et al., EMNLP 2017)