Chengxu Zhuang
Biography
I am an AI Research Scientist at Meta. Previously, I worked at OpenAI on ChatGPT Advanced Voice Mode and was an ICoN Postdoctoral Fellow at MIT, where I worked with Ev Fedorenko and Jacob Andreas. I received my Ph.D. from Stanford, advised by Daniel Yamins. I am interested in both understanding brains and developing more effective AI models.
Interests
- Natural Language Processing
- Language Acquisition
- Computer Vision
- Computational Neuroscience
- Artificial Intelligence
- Deep Learning
Education
- Postdoc, MIT, 2022 to 2024
- Ph.D. in Psychology, Stanford University, 2016 to 2022
- B.E. in Electronic Engineering, Tsinghua University, 2011 to 2016
- B.S. in Mathematics (second major), Tsinghua University, 2011 to 2016
Recent Publications
- Chengxu Zhuang, Evelina Fedorenko, Jacob Andreas (2024). Lexicon-Level Contrastive Visual-Grounding Improves Language Modeling. ACL 2024, Findings.
- Chengxu Zhuang, Evelina Fedorenko, Jacob Andreas (2024). Visual Grounding Helps Learn Word Meanings in Low-Data Regimes. NAACL 2024, Oral presentation, Best Paper Award.
- Bria Long, Sarah Goodin, George Kachergis, Virginia A Marchman, Samaher F Radwan, Robert Z Sparks, Violet Xiang, Chengxu Zhuang, Oliver Hsu, Brett Newman, Daniel LK Yamins, Michael C Frank (2023). The BabyView camera: Designing a new head-mounted camera to capture children's early social and visual environments. Behavior Research Methods.
- Chengxu Zhuang, Violet Xiang, Yoon Bai, Xiaoxuan Jia, Nicholas Turk-Browne, Kenneth Norman, James DiCarlo, Daniel Yamins (2022). How Well Do Unsupervised Learning Algorithms Model Human Real-time and Life-long Learning? NeurIPS 2022 Datasets and Benchmarks Track.
- Mike Wu, Milan Mosse, Chengxu Zhuang, Daniel Yamins, Noah Goodman (2021). Conditional Negative Sampling for Contrastive Learning of Visual Representations. ICLR 2021.
- Chengxu Zhuang, Siming Yan, Aran Nayebi, Martin Schrimpf, Michael C. Frank, James J. DiCarlo, Daniel Yamins (2021). Unsupervised neural network models of the ventral visual stream. PNAS.
- Chengxu Zhuang, Tianwei She, Alex Andonian, Max Sobol Mark, Daniel Yamins (2020). Unsupervised Learning from Video with Deep Neural Embeddings. CVPR 2020.
- Chengxu Zhuang, Alex Lin Zhai, Daniel Yamins (2019). Local Aggregation for Unsupervised Learning of Visual Embeddings. ICCV 2019, Oral presentation, Best Paper Award Nomination.
- Damian Mrowca, Chengxu Zhuang, Elias Wang, Nick Haber, Li Fei-Fei, Joshua B. Tenenbaum, Daniel Yamins (2018). Flexible Neural Representation for Physics Prediction. NeurIPS 2018.
- Chengxu Zhuang, Jonas Kubilius, Mitra Hartmann, Daniel Yamins (2017). Toward Goal-Driven Neural Network Models for the Rodent Whisker-Trigeminal System. NIPS 2017, Oral presentation.
Teaching
I was a teaching assistant for the following courses at Stanford:
- PSYCH252: Statistical Methods for Behavioral and Social Sciences, Winter 2021
- PSYCH251: Experimental Methods, Autumn 2019
- PSYCH249 / CS375: Large-Scale Neural Network Models for Neuroscience, Autumn 2018
- PSYCH253: High-Dimensional Methods for Behavioral and Neural Data, Spring 2018, Spring 2019, Spring 2021