Xiang Yue
岳翔 (pronounced "Shiang Yoo-eh")
xiangyue.work@gmail.com
I am an AI researcher at Meta Superintelligence Labs (MSL). Before joining Meta, I spent two wonderful years at Carnegie Mellon University (CMU) as a postdoctoral researcher, working with Prof. Graham Neubig on natural language processing (NLP) and large language models (LLMs).
I received my Ph.D. from The Ohio State University (OSU), where I was advised by Prof. Huan Sun and Prof. Yu Su. I completed my B.S. in Computer Science at Wuhan University.
Recent Papers
- [1] MMMU / MMMU-Pro / MMLU-Pro: benchmarks for multimodal and language model reasoning
- [2] On the Interplay of Pre-Training, Mid-Training, and RL on Reasoning Language Models
- [3] Demystifying Long Chain-of-Thought Reasoning in LLMs
- [4] Does Math Reasoning Improve General LLM Capabilities? Understanding Transferability of LLM Reasoning
- [5] MAmmoTH2: Scaling Instructions from the Web
- [6] Grokked Transformers are Implicit Reasoners: A Mechanistic Journey to the Edge of Generalization
Recent Talks and Media
- [NeurIPS 2025 Tutorial] The Science of Benchmarking [Slides]
- Rethinking LLM Reasoning [Slides]
- Learning to Reason with LLMs [Slides]
- [ACL 2025 Tutorial] Synthetic Data in the Era of LLMs [Slides]
- [Nature] How should we test AI for human-level intelligence? OpenAI's o3 electrifies quest
Last Updated: 12/2025