Shall We Team Up: Exploring Spontaneous Cooperation of Competing LLM Agents
Findings of EMNLP 2024
Zengqing Wu, Run Peng, Shuyuan Zheng, Qianying Liu, Xu Han, Brian Inhyuk Kwon6, Makoto Onizuka1, Shaojie Tang7, Chuan Xiao‡,1,8
4LLMC, NII  5Fordham University  6University of California, Los Angeles
7University at Buffalo  8Nagoya University
†Denotes Equal Contribution ‡Denotes Corresponding Authors
Abstract
Large Language Models (LLMs) have increasingly been utilized in social simulations, where they are often guided by carefully crafted instructions to stably exhibit human-like behaviors during simulations. Nevertheless, we doubt the necessity of shaping agents' behaviors for accurate social simulations. Instead, this paper emphasizes the importance of spontaneous phenomena, wherein agents deeply engage in contexts and make adaptive decisions without explicit directions. We explored spontaneous cooperation across three competitive scenarios and successfully simulated the gradual emergence of cooperation, findings that align closely with human behavioral data. This approach not only aids the computational social science community in bridging the gap between simulations and real-world dynamics but also offers the AI community a novel method to assess LLMs' capability of deliberate reasoning.
Keynesian Beauty Contest (KBC)
Multiple agents each pick a number (typically between 0 and 100); the winner is the agent whose choice is closest to a fixed fraction (e.g., 2/3) of the average of all choices, so winning requires reasoning about what the other agents will pick.
Figure panels: Different Instructions · Different Temperatures · Different Models · Comparison with Human Choices
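To make the setup concrete, here is a minimal Python sketch of one KBC round. This is not the authors' implementation: `ask_agent` is a hypothetical placeholder for the LLM call, and the fraction p and guess range are illustrative assumptions.

```python
import random
import statistics

# Minimal sketch of one KBC round -- NOT the authors' implementation.
# `ask_agent` is a hypothetical placeholder for the LLM call that returns a guess.
def ask_agent(name: str, history: list) -> float:
    # A real run would prompt an LLM with the game rules and past-round outcomes.
    return random.uniform(0, 100)

def kbc_round(agents: list, history: list, p: float = 2 / 3):
    """Collect guesses, compute the target p * mean, and pick the closest agent."""
    guesses = {name: ask_agent(name, history) for name in agents}
    target = p * statistics.mean(guesses.values())
    winner = min(guesses, key=lambda name: abs(guesses[name] - target))
    history.append({"target": target, "winner": winner, "guesses": guesses})
    return winner, target

agents = [f"agent_{i}" for i in range(5)]
history = []
for _ in range(10):  # repeated rounds let agents adapt to past targets
    winner, target = kbc_round(agents, history)
    print(f"target={target:.2f} winner={winner}")
```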
Bertrand Competition (BC)
Two agents play as firms and set the prices of their products. They compete by dynamically adjusting their prices over repeated rounds to maximize their own profits.
Figure panels: Without Communication · With Communication · Changed at 400 Rounds
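As a rough illustration of the competition loop (again, not the authors' code), the sketch below assumes a simple linear demand curve and awards the whole market to the cheaper firm; `propose_price` is a hypothetical stand-in for the LLM pricing decision, and the constants A, B, and COST are invented for illustration.

```python
import random

# Minimal sketch of a Bertrand competition round -- NOT the authors' implementation.
# Assumptions: linear demand D(p) = max(0, A - B * p), a shared marginal cost COST,
# and the cheaper firm serving the entire market (split evenly on a tie).
A, B, COST = 100.0, 1.0, 10.0

def propose_price(firm: str, history: list) -> float:
    # Hypothetical stand-in for the LLM pricing decision given market history.
    return random.uniform(COST, A / B)

def bertrand_round(firms: list, history: list) -> dict:
    prices = {firm: propose_price(firm, history) for firm in firms}
    low = min(prices.values())
    sellers = [f for f, p in prices.items() if p == low]
    demand = max(0.0, A - B * low)
    profits = {f: ((low - COST) * demand / len(sellers) if f in sellers else 0.0)
               for f in firms}
    history.append({"prices": prices, "profits": profits})
    return profits

history = []
for _ in range(500):  # repeated rounds; communication could be toggled per round
    bertrand_round(["firm_A", "firm_B"], history)
print(history[-1])
```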
Emergent Evacuation (EE)
A large number of agents, acting as evacuees, escape from an earthquake. Each agent needs to select and reach an appropriate exit, taking into account its physical and mental condition as well as the congestion in its surroundings.
Figure panels: 100 Agents · 400 Agents
| Setting / Round | 5 | 10 | 15 | 20 | 25 | 30 | 35 | 40 | 45 | 50 |
|---|---|---|---|---|---|---|---|---|---|---|
| Without Communication | 9.4 | 31.2 | 51.2 | 65.6 | 78.6 | 88.4 | 96.6 | 99.0 | 99.8 | 99.8 |
| With Communication | 9.8 | 31.6 | 48.8 | 67.2 | 80.6 | 92.2 | 97.2 | 98.8 | 99.8 | 100.0 |
| With Comm. and Uncooperative | 9.4 | 31.2 | 48.2 | 64.4 | 77.0 | 87.4 | 95.0 | 98.0 | 99.0 | 99.0 |
Cumulative number of agents who escaped (out of 100 in total) by each round under different settings. In general, agents that communicate escape more quickly, while agents with an uncooperative persona escape more slowly.
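For intuition about how such round-by-round escape counts can be produced, the sketch below steps agents toward exits on a 2D plane. It is not the authors' implementation: in the paper the exit choice comes from LLM reasoning, whereas here it is replaced by a simple distance-plus-congestion heuristic, and all coordinates and constants are invented for illustration.

```python
import math
import random

# Minimal sketch of an evacuation loop -- NOT the authors' implementation.
# Exit choice is a distance-plus-congestion heuristic standing in for LLM reasoning.
EXITS = [(0.0, 0.0), (50.0, 50.0)]
CONGESTION_WEIGHT, SPEED, CROWD_RADIUS = 2.0, 1.0, 5.0

def choose_exit(pos, agents):
    # Prefer a nearby exit, penalized by how many agents already crowd it.
    def score(exit_pos):
        dist = math.dist(pos, exit_pos)
        crowd = sum(1 for a in agents if math.dist(a, exit_pos) < CROWD_RADIUS)
        return dist + CONGESTION_WEIGHT * crowd
    return min(EXITS, key=score)

def step(agents):
    remaining = []
    for pos in agents:
        target = choose_exit(pos, agents)
        dist = math.dist(pos, target)
        if dist <= SPEED:      # the agent reaches the exit and escapes
            continue
        dx = (target[0] - pos[0]) / dist * SPEED
        dy = (target[1] - pos[1]) / dist * SPEED
        remaining.append((pos[0] + dx, pos[1] + dy))
    return remaining

agents = [(random.uniform(0, 50), random.uniform(0, 50)) for _ in range(100)]
for round_id in range(1, 51):
    agents = step(agents)
    print(f"round {round_id}: {100 - len(agents)} escaped")
```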
BibTeX
@inproceedings{wu-etal-2024-shall,
title = "Shall We Team Up: Exploring Spontaneous Cooperation of Competing {LLM} Agents",
author = "Wu, Zengqing and Peng, Run and Zheng, Shuyuan and Liu, Qianying and Han, Xu and Kwon, Brian and Onizuka, Makoto and Tang, Shaojie and Xiao, Chuan",
editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2024",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.findings-emnlp.297",
pages = "5163--5186",
abstract = "Large Language Models (LLMs) have increasingly been utilized in social simulations, where they are often guided by carefully crafted instructions to stably exhibit human-like behaviors during simulations. Nevertheless, we doubt the necessity of shaping agents{'} behaviors for accurate social simulations. Instead, this paper emphasizes the importance of spontaneous phenomena, wherein agents deeply engage in contexts and make adaptive decisions without explicit directions. We explored spontaneous cooperation across three competitive scenarios and successfully simulated the gradual emergence of cooperation, findings that align closely with human behavioral data. This approach not only aids the computational social science community in bridging the gap between simulations and real-world dynamics but also offers the AI community a novel method to assess LLMs{'} capability of deliberate reasoning.Our source code is available at https://github.com/wuzengqing001225/SABM{\_}ShallWeTeamUp",
}