arXiv:2511.20639 (cs)
[Submitted on 25 Nov 2025 (v1), last revised 8 Dec 2025 (this version, v2)]
Title: Latent Collaboration in Multi-Agent Systems
Authors: Jiaru Zou, Xiyuan Yang, Ruizhong Qiu, Gaotang Li, Katherine Tieu, Pan Lu, Ke Shen, Hanghang Tong, Yejin Choi, Jingrui He, James Zou, Mengdi Wang, Ling Yang
Abstract: Multi-agent systems (MAS) extend large language models (LLMs) from independent single-model reasoning to coordinative system-level intelligence. While existing LLM agents depend on text-based mediation for reasoning and communication, we take a step forward by enabling models to collaborate directly within the continuous latent space. We introduce LatentMAS, an end-to-end training-free framework that enables pure latent collaboration among LLM agents. In LatentMAS, each agent first performs auto-regressive latent thought generation through last-layer hidden embeddings. A shared latent working memory then preserves and transfers each agent's internal representations, ensuring lossless information exchange. We provide theoretical analyses establishing that LatentMAS attains higher expressiveness and lossless information preservation with substantially lower complexity than vanilla text-based MAS. In addition, empirical evaluations across 9 comprehensive benchmarks spanning math and science reasoning, commonsense understanding, and code generation show that LatentMAS consistently outperforms strong single-model and text-based MAS baselines, achieving up to 14.6% higher accuracy, reducing output token usage by 70.8%-83.7%, and providing 4x-4.3x faster end-to-end inference. These results demonstrate that our new latent collaboration framework enhances system-level reasoning quality while offering substantial efficiency gains without any additional training. Code and data are fully open-sourced at this https URL.
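The mechanism the abstract sketches, auto-regressive latent "thoughts" plus a shared latent working memory, can be pictured with a short, hedged sketch. The snippet below is not the authors' implementation (their code is linked above); it assumes a HuggingFace-style causal LM, and the names latent_thoughts, latent_steps, and shared_memory are illustrative. The core idea: instead of sampling a token and re-embedding it, each step feeds the final position's last-layer hidden state straight back in as the next input embedding, and a second agent then conditions on those raw hidden states rather than on text.

    # Minimal sketch (assumptions, not the authors' code): latent
    # auto-regressive "thinking" with a HuggingFace-style causal LM.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    name = "Qwen/Qwen2.5-0.5B-Instruct"   # placeholder; any causal LM
    tok = AutoTokenizer.from_pretrained(name)
    model = AutoModelForCausalLM.from_pretrained(name).eval()

    @torch.no_grad()
    def latent_thoughts(embeds: torch.Tensor, latent_steps: int = 8):
        """Each step appends the final position's last-layer hidden
        state as the next input embedding -- no token is sampled."""
        latents = []
        for _ in range(latent_steps):
            out = model(inputs_embeds=embeds, output_hidden_states=True)
            h = out.hidden_states[-1][:, -1:, :]      # (1, 1, hidden)
            latents.append(h)
            embeds = torch.cat([embeds, h], dim=1)    # latent feedback loop
        return torch.cat(latents, dim=1)              # (1, steps, hidden)

    # Agent A thinks in latent space; its hidden states become the shared
    # latent working memory. A real system may need to rescale or project
    # hidden states to match the input-embedding distribution.
    ids = tok("Solve: 17 * 24 = ?", return_tensors="pt").input_ids
    prompt = model.get_input_embeddings()(ids)
    shared_memory = latent_thoughts(prompt)

    # Agent B (same model family, so the latent spaces are compatible)
    # conditions on the prompt plus Agent A's raw latents, then decodes
    # text normally to produce the final answer.
    b_in = torch.cat([prompt, shared_memory], dim=1)
    mask = torch.ones(b_in.shape[:2], dtype=torch.long)
    out_ids = model.generate(inputs_embeds=b_in, attention_mask=mask,
                             max_new_tokens=32)
    print(tok.decode(out_ids[0], skip_special_tokens=True))

Under these assumptions, the abstract's efficiency claim falls out of the picture directly: only the final agent emits tokens, so each intermediate communication step costs one forward pass per latent step rather than a full text-generation and re-encoding round trip.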
| Comments: | Project: this https URL |
| Subjects: | Computation and Language (cs.CL); Artificial Intelligence (cs.AI); Machine Learning (cs.LG) |
| Cite as: | arXiv:2511.20639 [cs.CL] (or arXiv:2511.20639v2 [cs.CL] for this version) |
| DOI: | https://doi.org/10.48550/arXiv.2511.20639 (arXiv-issued DOI via DataCite) |
Submission history
From: Jiaru Zou
[v1] Tue, 25 Nov 2025 18:56:57 UTC (2,194 KB)
[v2] Mon, 8 Dec 2025 04:05:49 UTC (2,196 KB)
Access Paper:
- View PDF
- HTML (experimental)
- TeX Source