Archiki Prasad
My research focuses on Natural Language Processing and Machine Learning. In particular, I develop methods to evaluate and strengthen reasoning in Large Language Models (LLMs), enabling them to identify and rectify issues in their own reasoning, improving alignment, and deepening our understanding of the reasoning process. I also explore practical applications of LLM reasoning in domains such as planning and coding.
During my PhD, I have been fortunate to intern at several prominent groups, including Google DeepMind with Pete Shaw, Kenton Lee, and Mandar Joshi (Summer 2025); FAIR (Meta) with Jason Weston and Maryam Fazel-Zarandi (Summer 2024); the Allen Institute for AI (AI2) with Tushar Khot, Ashish Sabharwal, and Peter Clark (Summer 2023); and Adobe Research in 2022.
Publications
2025
PRInTS: Reward Modeling for Long-Horizon Information Seeking
Jaewoo Lee, Archiki Prasad, Justin Chih-Yao Chen, Zaid Khan, Elias Stengel-Eskin, Mohit Bansal
arXiv preprint
pdf | code
Learning to Generate Unit Tests for Automated Debugging
Archiki Prasad, Elias Stengel-Eskin, Justin Chih-Yao Chen, Zaid Khan, Mohit Bansal
COLM'25 | Conference on Language Modeling
pdf | code
Retrieval-Augmented Generation with Conflicting Evidence
Han Wang, Archiki Prasad, Elias Stengel-Eskin, Mohit Bansal
COLM'25 | Conference on Language Modeling
pdf | code
Self-Consistency Preference Optimization
Archiki Prasad, Weizhe Yuan, Richard Yuanzhe Pang, Jing Xu, Maryam Fazel-Zarandi, Mohit Bansal, Sainbayar Sukhbaatar, Jason Weston, Jane Yu
ICML'25 | International Conference on Machine Learning
pdf
LASeR: Learning to Adaptively Select Reward Models with Multi-Armed Bandits
Duy Nguyen*, Archiki Prasad*, Elias Stengel-Eskin, Mohit Bansal
NeurIPS'25 | Conference on Neural Information Processing Systems
pdf | code
System-1.x: Learning to Balance Fast and Slow Planning with Language Models
Swarnadeep Saha, Archiki Prasad, Justin Chih-Yao Chen, Peter Hase, Elias Stengel-Eskin, Mohit Bansal
ICLR'25 | International Conference on Learning Representations
pdf | code
Multi-Attribute Steering of Language Models via Targeted Intervention
Duy Nguyen, Archiki Prasad, Elias Stengel-Eskin, Mohit Bansal
ACL'25 | Association for Computational Linguistics
pdf | code
AdaCAD: Adaptively Decoding to Balance Conflicts between Contextual and Parametric Knowledge
Han Wang, Archiki Prasad, Elias Stengel-Eskin, Mohit Bansal
NAACL'25 | Nations of the Americas Chapter of the Association for Computational Linguistics
pdf | code
MAgICoRe: Multi-Agent, Iterative, Coarse-to-Fine Refinement for Reasoning
Justin Chih-Yao Chen, Archiki Prasad, Swarnadeep Saha, Elias Stengel-Eskin, Mohit Bansal
EMNLP'25 | Empirical Methods in Natural Language Processing
pdf | code
2024
ADaPT: As-Needed Decomposition and Planning with Language Models
Archiki Prasad, Alexander Koller, Mareike Hartmann, Peter Clark, Ashish Sabharwal, Mohit Bansal, Tushar Khot
NAACL'24 (Findings) | North American Chapter of the Association for Computational Linguistics
pdf | code | project page
Rephrase, Augment, Reason: Visual Grounding of Questions for Vision-Language Models
Archiki Prasad, Elias Stengel-Eskin, Mohit Bansal
ICLR'24 | International Conference on Learning Representations
pdf | code
Soft Self-Consistency Improves Language Model Agents
Han Wang*, Archiki Prasad*, Elias Stengel-Eskin*, Mohit Bansal
ACL'24 | Association for Computational Linguistics
pdf | code
ReGAL: Refactoring Programs to Discover Generalizable Abstractions
Elias Stengel-Eskin*, Archiki Prasad*, Mohit Bansal
ICML'24 | International Conference on Machine Learning
pdf | code
2023
ReCEval: Evaluating Reasoning Chains via Correctness and Informativeness
Archiki Prasad, Swarnadeep Saha, Xiang Zhou, Mohit Bansal
EMNLP'23 | Empirical Methods in Natural Language Processing
pdf | code
MeetingQA: Extractive Question-Answering on Meeting Transcripts
Archiki Prasad, Trung Bui, Seunghyun Yoon, Hanieh Deilamsalehy, Franck Dernoncourt, Mohit Bansal
ACL'23 | Association for Computational Linguistics
pdf | code + data | project page
GrIPS: Gradient-free, Edit-based Instruction Search for Prompting Large Language Models
Archiki Prasad, Peter Hase, Xiang Zhou, Mohit Bansal
EACL'23 | European Chapter of the Association for Computational Linguistics
pdf | code
2021
The Effectiveness of Intermediate-Task Training for Code-Switched Natural Language Understanding
Archiki Prasad*, Mohammad Ali Rehan*, Shreya Pathak*, Preethi Jyothi
EMNLP'21 (MRL Workshop) | Multilingual Representation Learning Workshop @EMNLP
🏆 Best Paper Honorable Mention
pdf | code
An Investigation of End-to-End Models for Robust Speech Recognition
Archiki Prasad, Preethi Jyothi, Rajbabu Velmurugan
ICASSP'21 | International Conference on Acoustics, Speech, and Signal Processing
pdf | code
Decentralized Age-of-Information Bandits
Archiki Prasad, Vishal Jain, Sharayu Moharir
WCNC'21 | IEEE Wireless Communications and Networking Conference
pdf
2020