Sinho Chewi's Website

I am an Assistant Professor of Statistics and Data Science at Yale University.
I received my B.S. in Engineering Mathematics and Statistics from the University of California, Berkeley in 2018, and my PhD in Mathematics and Statistics from the Massachusetts Institute of Technology in 2023, advised by Philippe Rigollet.
Past Visits
In Fall 2021, I participated in the Simons Institute program on Geometric Methods in Optimization and Sampling and co-organized (with Kevin Tian) a working group on the complexity of sampling. In Spring 2022, I visited Jonathan Niles-Weed at New York University. In Summer 2022, I was a research intern at Microsoft Research, supervised by Sébastien Bubeck and Adil Salim. In Fall 2023 and Spring 2024, I was a postdoctoral researcher at the Institute for Advanced Study.
Books
I am currently writing a book on the complexity of log-concave sampling. You can read the current draft here. (The book is currently under revision as I prepare edits this summer.)
Supplementary material can be found here.
Any feedback is appreciated!
Last Updated: August 29, 2025
Jonathan Niles-Weed, Philippe Rigollet, and I wrote a monograph on statistical optimal transport, which grew out of lectures Philippe gave as an invited lecturer at the École d'Été de Probabilités de Saint-Flour (Saint-Flour Probability Summer School) in 2019. You can find it on arXiv and on Springer Nature. [.bib]
Research
I am broadly interested in the mathematics of machine learning and statistics. My work focuses on applications of optimal transport to computational problems arising in these fields, such as log-concave sampling (see my book draft above) and variational inference (Lambert et al. (2023); Diao et al. (2023); Jiang, Chewi, and Pooladian (2024)).
Publications
- Theory and computation for structured variational inference. Shunan Sheng, Bohan Wu, Bennett Zhu, Sinho Chewi, and Aram-Alexandre Pooladian. November 2025. [arXiv] [.bib]
- Shifted composition II: shift Harnack inequalities and curvature upper bounds. Jason Altschuler and Sinho Chewi. November 2025. [arXiv] [IZS 2024] [IEEE Transactions on Information Theory] [Slides] [.bib]
- Sublinear iterations can suffice even for DDPMs. Matthew Zhang, Stephen Huan, Jerry Huang, Nicholas Boffi, Sitan Chen, and Sinho Chewi. November 2025. [arXiv] [Slides] [.bib]
- Stability of the Kim–Milman flow map. Sinho Chewi, Aram-Alexandre Pooladian, and Matthew Zhang. November 2025. [arXiv] [.bib]
- Algorithms for mean-field variational inference via polyhedral optimization in the Wasserstein space. Yiheng Jiang, Sinho Chewi, and Aram-Alexandre Pooladian. August 2025. [arXiv] [COLT 2024] [Foundations of Computational Mathematics] [Slides] [.bib]
- Gaussian mixture layers for neural networks. Sinho Chewi, Philippe Rigollet, and Yuling Yan. August 2025. [arXiv] [.bib]
- Shifted composition IV: toward ballistic acceleration for log-concave sampling. Jason Altschuler, Sinho Chewi, and Matthew Zhang. June 2025. [arXiv] [Slides] [.bib]
- DDPM score matching and distribution learning. Sinho Chewi, Alkis Kalavasis, Anay Mehrotra, and Omar Montasser. April 2025. [arXiv] [ICLR 2025 DeLTA Workshop (Best Short Paper Award)] [Deep Learning Theory Workshop @ RIKEN AIP (Video)] [Slides] [.bib]
- Shifted composition III: local error framework for KL divergence. Jason Altschuler and Sinho Chewi. December 2024. [arXiv] [Online Monte Carlo Seminar (Video)] [Slides] [.bib]
- The ballistic limit of the log-Sobolev constant equals the Polyak–Łojasiewicz constant. Sinho Chewi and Austin Stromme. November 2024. [arXiv] [.bib]
- Shifted composition I: Harnack and reverse transport inequalities. Jason Altschuler and Sinho Chewi. October 2024. [arXiv] [IZS 2024] [IEEE Transactions on Information Theory] [Slides] [.bib]
- Uniform-in-N log-Sobolev inequality for the mean-field Langevin dynamics with convex energy. Sinho Chewi, Atsushi Nitanda, and Matthew Zhang. September 2024. [arXiv] [Slides] [.bib]
- Query lower bounds for log-concave sampling. Sinho Chewi, Jaume de Dios Pont, Jerry Li, Chen Lu, and Shyam Narayanan. August 2024. [arXiv] [FOCS 2023] [Journal of the ACM] [Slides] [.bib]
- Analysis of Langevin Monte Carlo from Poincaré to log-Sobolev. Sinho Chewi, Murat Erdogdu, Mufan Li, Ruoqi Shen, and Matthew Zhang. July 2024. [arXiv] [COLT 2022 (Extended Abstract)] [Foundations of Computational Mathematics] [.bib]
- Fast parallel sampling under isoperimetry. Nima Anari, Sinho Chewi, and Thuy-Duong Vuong. July 2024. [arXiv] [COLT 2024] [.bib]
- Sampling from the mean-field stationary distribution. Yunbum Kook, Matthew Zhang, Sinho Chewi, Murat Erdogdu, and Mufan Li. July 2024. [arXiv] [COLT 2024] [.bib]
- Faster high-accuracy log-concave sampling via algorithmic warm starts. Jason Altschuler and Sinho Chewi. June 2024. [arXiv] [FOCS 2023] [Journal of the ACM] [Slides] [.bib]
- Learning threshold neurons via the "edge of stability". Kwangjun Ahn, Sébastien Bubeck, Sinho Chewi, Yin Tat Lee, Felipe Suárez-Colmenares, and Yi Zhang. December 2023. [arXiv] [NeurIPS 2023] [.bib]
- The probability flow ODE is provably fast. Sitan Chen, Sinho Chewi, Holden Lee, Yuanzhi Li, Jianfeng Lu, and Adil Salim. December 2023. [arXiv] [NeurIPS 2023] [Slides] [.bib]
- An entropic generalization of Caffarelli's contraction theorem via covariance inequalities. Sinho Chewi and Aram-Alexandre Pooladian. November 2023. [arXiv] [Comptes Rendus Mathématique] [.bib]
- The entropic barrier is n-self-concordant. Sinho Chewi. September 2023. [arXiv] [GAFA Seminar Notes] [.bib]
- Forward-backward Gaussian variational inference via JKO in the Bures–Wasserstein space. Michael Diao, Krishnakumar Balasubramanian, Sinho Chewi, and Adil Salim. July 2023. [arXiv] [ICML 2023] [Slides] [.bib]
- Improved discretization analysis for underdamped Langevin Monte Carlo. Matthew Zhang, Sinho Chewi, Mufan Li, Krishnakumar Balasubramanian, and Murat Erdogdu. July 2023. [arXiv] [COLT 2023] [.bib]
- Sampling is as easy as learning the score: theory for diffusion models with minimal data assumptions. Sitan Chen, Sinho Chewi, Jerry Li, Yuanzhi Li, Adil Salim, and Anru Zhang. May 2023. [arXiv] [NeurIPS 2022 Workshop on Score-Based Methods] [ICLR 2023 (Notable Top 5%)] [Georgia Tech ARC Colloquium (Video)] [Poster] [Slides] [.bib]
- Fisher information lower bounds for sampling. Sinho Chewi, Patrik Gerber, Holden Lee, and Chen Lu. February 2023. [arXiv] [ALT 2023] [Slides] [.bib]
- On the complexity of finding stationary points of smooth functions in one dimension. Sinho Chewi, Sébastien Bubeck, and Adil Salim. February 2023. [arXiv] [ALT 2023 (Best Student Paper)] [.bib]
- Gaussian discrepancy: a probabilistic relaxation of vector balancing. Sinho Chewi, Patrik Gerber, Philippe Rigollet, and Paxton Turner. December 2022. [arXiv] [Discrete Applied Mathematics] [.bib]
- Variational inference via Wasserstein gradient flows. Marc Lambert, Sinho Chewi, Francis Bach, Silvère Bonnabel, and Philippe Rigollet. December 2022. [arXiv] [NeurIPS 2022] [UMass Amherst Reading Seminar on Mathematics of Machine Learning (Video)] [Poster] [Slides] [.bib]
- Improved analysis for a proximal algorithm for sampling. Yongxin Chen, Sinho Chewi, Adil Salim, and Andre Wibisono. July 2022. [arXiv] [COLT 2022] [COLT 2022 (Video)] [Georgia Tech ARC Colloquium (Video)] [Slides] [.bib]
- The query complexity of sampling from strongly log-concave distributions in one dimension. Sinho Chewi, Patrik Gerber, Chen Lu, Thibaut Le Gouic, and Philippe Rigollet. July 2022. [arXiv] [COLT 2022] [.bib]
- Towards a theory of non-log-concave sampling: first-order stationarity guarantees for Langevin Monte Carlo. Krishnakumar Balasubramanian, Sinho Chewi, Murat Erdogdu, Adil Salim, and Matthew Zhang. July 2022. [arXiv] [COLT 2022] [Slides] [.bib]
- Rejection sampling from shape-constrained distributions in sublinear time. Sinho Chewi, Patrik Gerber, Chen Lu, Thibaut Le Gouic, and Philippe Rigollet. March 2022. [arXiv] [AISTATS 2022] [.bib]
- Averaging on the Bures–Wasserstein manifold: dimension-free convergence of gradient descent. Jason Altschuler, Sinho Chewi, Patrik Gerber, and Austin Stromme. December 2021. [arXiv] [NeurIPS 2021 (Spotlight)] [NeurIPS 2021 (Video)] [Slides] [.bib]
- Efficient constrained sampling via the mirror-Langevin algorithm. Kwangjun Ahn and Sinho Chewi. December 2021. [arXiv] [NeurIPS 2021] [Poster] [.bib]
- Dimension-free log-Sobolev inequalities for mixture distributions. Hong-Bin Chen, Sinho Chewi, and Jonathan Niles-Weed. December 2021. [arXiv] [Journal of Functional Analysis] [Slides] [.bib]
- Optimal dimension dependence of the Metropolis-adjusted Langevin algorithm. Sinho Chewi, Chen Lu, Kwangjun Ahn, Xiang Cheng, Thibaut Le Gouic, and Philippe Rigollet. August 2021. [arXiv] [COLT 2021] [Sampling Algorithms and Geometries on Probability Distributions (Video)] [Poster] [Slides] [.bib]
- Fast and smooth interpolation on Wasserstein space. Sinho Chewi, Julien Clancy, Thibaut Le Gouic, Philippe Rigollet, George Stepaniants, and Austin Stromme. March 2021. [arXiv] [AISTATS 2021] [.bib]
- Exponential ergodicity of mirror-Langevin diffusions. Sinho Chewi, Thibaut Le Gouic, Chen Lu, Tyler Maunu, Philippe Rigollet, and Austin Stromme. December 2020. [arXiv] [NeurIPS 2020] [Slides] [.bib]
- SVGD as a kernelized Wasserstein gradient flow of the chi-squared divergence. Sinho Chewi, Thibaut Le Gouic, Chen Lu, Tyler Maunu, and Philippe Rigollet. December 2020. [arXiv] [NeurIPS 2020] [Slides] [.bib]
- Gradient descent algorithms for Bures–Wasserstein barycenters. Sinho Chewi, Tyler Maunu, Philippe Rigollet, and Austin Stromme. July 2020. [arXiv] [COLT 2020] [COLT 2020 (Video)] [Optimal Transport: Regularization and Applications 2020 (Video)] [TGDA@OSU 2020 (Video)] [Slides] [.bib]
- Matching observations to distributions: efficient estimation via sparsified Hungarian algorithm. Sinho Chewi, Forest Yang, Avishek Ghosh, Abhay Parekh, and Kannan Ramchandran. September 2019. [arXiv] [Allerton 2019] [.bib]
- A combinatorial proof of a formula of Biane and Chapuy. Sinho Chewi and Venkat Anantharam. March 2018. [arXiv] [E-JC] [.bib]
PhD thesis: An optimization perspective on log-concave sampling and beyond. 2023.
Teaching
In Fall 2025, I am teaching S&DS 2410/5410: Probability Theory. In Spring 2026, I will teach S&DS 4320/6320: Advanced Optimization Techniques.
Past Courses
- S&DS 4320/6320: Advanced Optimization Techniques (Sp25; Lecture Notes)
- S&DS 605: Sampling and Optimal Transport (Fa24)
Other
Click here to find the notes I took and the courses I taught during my undergraduate and graduate studies.