Computer Science > Computation and Language
arXiv:2410.08193 (cs)
[Submitted on 10 Oct 2024 (v1), last revised 15 Jul 2025 (this version, v5)]
Title: GenARM: Reward Guided Generation with Autoregressive Reward Model for Test-time Alignment
Authors: Yuancheng Xu, Udari Madhushani Sehwag, Alec Koppel, Sicheng Zhu, Bang An, Furong Huang, Sumitra Ganesh
Abstract: Large Language Models (LLMs) exhibit impressive capabilities but require careful alignment with human preferences. Traditional training-time methods finetune LLMs using human preference datasets but incur significant training costs and require repeated training to handle diverse user preferences. Test-time alignment methods address this by using reward models (RMs) to guide frozen LLMs without retraining. However, existing test-time approaches rely on trajectory-level RMs which are designed to evaluate complete responses, making them unsuitable for autoregressive text generation that requires computing next-token rewards from partial responses. To address this, we introduce GenARM, a test-time alignment approach that leverages the Autoregressive Reward Model--a novel reward parametrization designed to predict next-token rewards for efficient and effective autoregressive generation. Theoretically, we demonstrate that this parametrization can provably guide frozen LLMs toward any distribution achievable by traditional RMs within the KL-regularized reinforcement learning framework. Experimental results show that GenARM significantly outperforms prior test-time alignment baselines and matches the performance of training-time methods. Additionally, GenARM enables efficient weak-to-strong guidance, aligning larger LLMs with smaller RMs without the high costs of training larger models. Furthermore, GenARM supports multi-objective alignment, allowing real-time trade-offs between preference dimensions and catering to diverse user preferences without retraining. Our project page is available at: this https URL.
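The abstract describes guiding a frozen base LLM at decoding time with next-token rewards from an autoregressive reward model (ARM). As a rough illustration only, not the authors' released implementation, the sketch below shows the generic recipe implied by the KL-regularized formulation the abstract mentions: shift the base model's next-token logits by scaled token-level log-rewards before choosing the next token. The function names (`guided_next_token_logits`, `generate`), the assumption that both models are Hugging Face-style causal LMs sharing one tokenizer, and the exact 1/beta weighting are illustrative assumptions; see the paper for the actual parametrization.

```python
import torch
import torch.nn.functional as F

def guided_next_token_logits(base_logits, arm_logits, beta=1.0):
    """Combine a frozen base LM's next-token logits with token-level rewards
    from an autoregressive reward model (ARM).

    Sketch of the KL-regularized combination: the guided distribution is
    proportional to pi_base(y_t | x, y_<t) * exp(r_t / beta), where the ARM
    supplies the per-token reward r_t as a next-token log-probability.
    """
    arm_log_probs = F.log_softmax(arm_logits, dim=-1)   # per-token reward
    return base_logits + arm_log_probs / beta           # shifted logits

@torch.no_grad()
def generate(base_model, arm_model, input_ids, max_new_tokens=64, beta=1.0):
    """Greedy reward-guided decoding (hypothetical helper).

    Assumes both models are causal LMs over the same vocabulary; no KV cache
    is used here, so each step reruns the full forward pass.
    """
    for _ in range(max_new_tokens):
        base_logits = base_model(input_ids).logits[:, -1, :]
        arm_logits = arm_model(input_ids).logits[:, -1, :]
        combined = guided_next_token_logits(base_logits, arm_logits, beta)
        next_token = combined.argmax(dim=-1, keepdim=True)  # or sample
        input_ids = torch.cat([input_ids, next_token], dim=-1)
    return input_ids
```

Lowering beta weights the reward signal more heavily relative to the frozen base model; raising it recovers plain base-model decoding, which is the real-time trade-off knob the abstract alludes to for multi-objective alignment.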
| Comments: | Published at the Thirteenth International Conference on Learning Representations (ICLR 2025) |
| Subjects: | Computation and Language (cs.CL) |
| Cite as: | arXiv:2410.08193 [cs.CL] (or arXiv:2410.08193v5 [cs.CL] for this version) |
| DOI: | https://doi.org/10.48550/arXiv.2410.08193 (arXiv-issued DOI via DataCite) |
Submission history
From: Yuancheng Xu
[v1] Thu, 10 Oct 2024 17:58:24 UTC (453 KB)
[v2] Tue, 28 Jan 2025 03:28:12 UTC (462 KB)
[v3] Mon, 10 Feb 2025 22:20:07 UTC (467 KB)
[v4] Wed, 11 Jun 2025 06:11:03 UTC (467 KB)
[v5] Tue, 15 Jul 2025 00:32:25 UTC (439 KB)