Liu Yang's Homepage
PhD Candidate
University of Wisconsin-Madison
liu . yang @ wisc . edu
About Me
I am a PhD candidate in computer science at the University of Wisconsin–Madison. I am very fortunate to be advised by Prof. Robert D. Nowak, Prof. Dimitris Papailiopoulos, and Prof. Kangwook Lee. My research focuses on the intersection of machine learning and optimization, with a recent emphasis on understanding in-context learning in controlled settings.
In 2020, I earned my B.E. in Computer Science from Xi’an Jiaotong University. During my third year of undergraduate studies, I participated in an exchange program at the University of California, Berkeley. While there, I was a member of Dr. Stella Yu’s group at ICSI.
Publications and Preprints
Transformers and In-Context Learning
Task Vectors in In-Context Learning: Emergence, Formation, and Benefits
Liu Yang, Ziqian Lin, Kangwook Lee, Dimitris Papailiopoulos, Robert D. Nowak
COLM'25 | summary
How Well Can Transformers Emulate In-context Newton's Method?
Angeliki Giannou, Liu Yang, Tianhao Wang, Dimitris Papailiopoulos, Jason D. Lee
AISTATS'25 | code
Everything Everywhere All at Once: LLMs can In-Context Learn Multiple Tasks in Superposition
Zheyang Xiong, Ziyang Cai, John Cooper, Albert Ge, Vasilis Papageorgiou, Zack Sifakis, Angeliki Giannou, Ziqian Lin, Liu Yang, Saurabh Agarwal, Grigorios G Chrysos, Samet Oymak, Kangwook Lee, Dimitris Papailiopoulos
ICML'25 (Spotlight) | summary
Looped Transformers are Better at Learning Learning Algorithms
Liu Yang, Kangwook Lee, Robert D. Nowak, Dimitris Papailiopoulos
ICLR'24 | code | 45-Minute Talk | summary
An Empirical Study on the Power of Future Prediction in Partially Observable Environments
Jeongyeol Kwon*, Liu Yang*, Robert Nowak, Josiah Hanna
arXiv'24
Generative Retrieval
Unifying Generative and Dense Retrieval for Sequential Recommendation
Liu Yang, Fabian Paischer, Kaveh Hassani, Jiacheng Li, Shuai Shao, Zhang Gabriel Li, Yun He, Xue Feng, Nima Noorshams, Sem Park, Bo Long, Robert D Nowak, Xiaoli Gao, Hamid Eghbalzadeh
TMLR'25 | summary
Preference Discerning with LLM-Enhanced Generative Retrieval
Fabian Paischer, Liu Yang, Linfeng Liu, Shuai Shao, Kaveh Hassani, Jiacheng Li, Ricky Chen, Zhang Gabriel Li, Xiaoli Gao, Wei Shao, Xue Feng, Nima Noorshams, Sem Park, Bo Long, Hamid Eghbalzadeh
TMLR'25 | summary
Sparse Training
Rare Gems: Finding Lottery Tickets at Initialization
Kartik Sreenivasan*, Jy-yong Sohn*, Liu Yang, Matthew Grinde, Alliot Nagle, Hongyi Wang, Eric Xing, Kangwook Lee, Dimitris Papailiopoulos
NeurIPS'22 | code
A Better Way to Decay: Proximal Gradient Training Algorithms for Neural Nets
Liu Yang, Jifan Zhang, Joseph Shenouda, Dimitris Papailiopoulos, Kangwook Lee, Robert D. Nowak
NeurIPS'22 OPT Workshop | code
Manifold Learning
Flow-based Generative Models for Learning Manifold to Manifold Mappings
Xingjian Zhen, Rudrasis Chakraborty, Liu Yang, Vikas Singh
AAAI'21
An "augmentation-free" rotation invariant classification scheme on point-cloud and its application to neuroimaging
Liu Yang, Rudrasis Chakraborty
ISBI'20
A GMM based algorithm to generate point-cloud and its application to neuroimaging
Liu Yang, Rudrasis Chakraborty
ISBI'20 (Workshop)
Intrinsic Grassmann Averages for Online Linear, Robust and Nonlinear Subspace Learning
Rudrasis Chakraborty, Liu Yang, Søren Hauberg and Baba C. Vemuri
TPAMI
POIRot: A rotation invariant omni-directional pointnet
Liu Yang, Rudrasis Chakraborty, and Stella X. Yu
arXiv'19
Work Experiences
- Meta (Summer 2024), with Hamid Eghbalzadeh and Xiaoli Gao, on understanding the limitations of generative retrieval in sequential recommendation systems.
- Meta (Fall 2023), with Minhui Huang, on exploring multi-task architectural designs for foundation models in ads ranking systems.
Service
Conference Reviewer: ICML (2022, 2024, 2025), NeurIPS (2024), ICLR (2024, 2025)
Student Organizer of the SILO and MLOPT seminars