Noah Golowich
I am a postdoctoral researcher at Microsoft Research, NYC. In 2026, I will join the computer science department at UT Austin as an Assistant Professor. I completed my PhD at MIT, where I was very fortunate to be advised by Constantinos Daskalakis and Ankur Moitra.
I will be recruiting PhD students and postdocs this coming application cycle (to start in Fall 2026). Please apply here.
My research focuses broadly on the theoretical foundations of modern AI. I am particularly interested in the role that computational constraints play in shaping our current and future toolkit of algorithms for machine learning and AI.
Contact information:
n$g at mit dot edu, replace the $ with z
Papers
Authors are in alphabetical order, unless indicated with ().
Preprints
-
Sequences of Logits Reveal the Low Rank Structure of Language Models.
Noah Golowich, Allen Liu, and Abhishek Shetty.
-
The Coverage Principle: How Pre-training Enables Post-Training.
Fan Chen, Audrey Huang, Noah Golowich, Sadhika Malladi, Adam Block, Jordan T. Ash, Akshay Krishnamurthy, and Dylan J. Foster.
-
The Hidden Game Problem.
Gon Buzaglo, Noah Golowich, and Elad Hazan.
-
High-Dimensional Calibration from Swap Regret.
Maxwell Fishelson, Noah Golowich, Mehryar Mohri, and Jon Schneider.
To appear in NeurIPS 2025 (Oral presentation).
-
A Lower Bound on Swap Regret in Extensive-Form Games.
Constantinos Daskalakis, Gabriele Farina, Noah Golowich, Tuomas Sandholm, and Brian Hu Zhang.
-
The Role of Sparsity for Length Generalization in Transformers.
Noah Golowich, Samy Jelassi, David Brandfonbrener, Sham M. Kakade, and Eran Malach.
In ICML 2025.
-
Breaking the T^(2/3) Barrier for Sequential Calibration.
Yuval Dagan, Constantinos Daskalakis, Maxwell Fishelson, Noah Golowich, Robert Kleinberg, and Princewill Okoroafor.
In STOC 2025.
-
Edit Distance Robust Watermarks for Language Models.
Noah Golowich and Ankur Moitra.
In NeurIPS 2024.
-
Online Control in Population Dynamics.
Noah Golowich, Elad Hazan, Zhou Lu, Dhruv Rohatgi, and Y. Jennifer Sun.
In NeurIPS 2024.
-
Exploration is Harder than Prediction: Cryptographically Separating Reinforcement Learning from Supervised Learning.
Noah Golowich, Ankur Moitra, and Dhruv Rohatgi.
In FOCS 2024.
-
From External to Swap Regret 2.0: An Efficient Reduction and Oblivious Adversary for Large Action Spaces.
Yuval Dagan, Constantinos Daskalakis, Maxwell Fishelson, and Noah Golowich.
In STOC 2024.
-
Smooth Nash Equilibria: Algorithms and Complexity.
Constantinos Daskalakis, Noah Golowich, Nika Haghtalab, and Abhishek Shetty.
In ITCS 2024.
-
Exploring and Learning in Sparse Linear MDPs without Computationally Intractable Oracles.
Noah Golowich, Ankur Moitra, and Dhruv Rohatgi.
In STOC 2024.
-
The Role of Inherent Bellman Error in Offline Reinforcement Learning with Linear Function Approximation.
Noah Golowich and Ankur Moitra.
In Reinforcement Learning Conference 2024.
-
Is Efficient PAC Learning Possible with an Oracle That Responds 'Yes' or 'No'?
Constantinos Daskalakis and Noah Golowich.
In COLT 2024.
-
Linear Bellman Completeness Suffices for Efficient Online Reinforcement Learning with Few Actions.
Noah Golowich and Ankur Moitra.
In COLT 2024.
-
Near-Optimal Learning and Planning in Separated Latent MDPs.
Fan Chen, Constantinos Daskalakis, Noah Golowich, and Alexander Rakhlin.
In COLT 2024.
-
Hardness of Independent Learning and Sparse Equilibrium Computation in Markov Games.
Dylan J. Foster, Noah Golowich, and Sham Kakade.
In ICML 2023.
-
On the Complexity of Multi-Agent Decision Making: From Learning in Games to Partial Monitoring.
Dylan J. Foster, Dean P. Foster, Noah Golowich, and Alexander Rakhlin.
In COLT 2023.
My talk at the CanaDAM workshop on Learning and Games. [slides] [workshop]
-
Tight guarantees for interactive decision making with the decision-estimation coefficient.
Dylan J. Foster, Noah Golowich, and Yanjun Han.
In COLT 2023.
My talk at the Simons Reunion on Learning and Games. [slides] [workshop]
-
The complexity of Markov equilibrium in stochastic games.
Constantinos Daskalakis, Noah Golowich, and Kaiqing Zhang.
In COLT 2023.
My talk at the Simons workshop on reinforcement learning and bandit learning. [talk video] [slides]
-
STay-ON-the-Ridge: Guaranteed Convergence to Local Minimax Equilibrium in Nonconvex-Nonconcave Games.
Constantinos Daskalakis, Noah Golowich, Stratis Skoulakis, and Manolis Zampetakis.
In COLT 2023.
-
Model-free reinforcement learning with the decision-estimation coefficient.
Dylan J. Foster, Noah Golowich, Jian Qian, Alexander Rakhlin, and Ayush Sekhari.
In NeurIPS 2023.
-
Planning and Learning in Partially Observable Systems via Filter Stability.
Noah Golowich, Ankur Moitra, and Dhruv Rohatgi.
In STOC 2023. [conf]
-
Learning in observable POMDPs, without computationally intractable oracles.
Noah Golowich, Ankur Moitra, and Dhruv Rohatgi.
In NeurIPS 2022. [conf]
-
Fast Rates for Nonparametric Online Learning: From Realizability to Learning in Games.
Constantinos Daskalakis and Noah Golowich.
In STOC 2022. [conf] [talk video]
-
Near-Optimal No-Regret Learning for Correlated Equilibria in Multi-Player General-Sum Games.
Ioannis Anagnostides, Constantinos Daskalakis, Gabriele Farina, Maxwell Fishelson, Noah Golowich, and Tuomas Sandholm.
In STOC 2022. [conf]
-
Can Q-learning be improved with advice?
Noah Golowich and Ankur Moitra.
In COLT 2022. [conf] [slides]
-
Smoothed online learning is as easy as statistical learning.
Adam Block, Yuval Dagan, Noah Golowich, and Alexander Rakhlin.
In COLT 2022. [conf] [slides]
-
Near-Optimal No-Regret Learning in General Games.
Constantinos Daskalakis, Maxwell Fishelson, and Noah Golowich.
In NeurIPS 2021 (Oral presentation). [conf]
My talk at the Simons workshop on Adversarial approaches in machine learning. [talk video]
-
Littlestone Classes are Privately Online Learnable.
Noah Golowich and Roi Livni.
In NeurIPS 2021 (Spotlight presentation). [conf]
-
On Deep Learning with Label Differential Privacy.
Badih Ghazi, Noah Golowich, Ravi Kumar, Pasin Manurangsi, and Chiyuan Zhang.
In NeurIPS 2021. [conf]
-
Differentially private nonparametric regression under a growth condition.
Noah Golowich.
In COLT 2021. [conf] [talk video] [slides]
-
Sample-efficient proper PAC learning with approximate differential privacy.
Badih Ghazi, Noah Golowich, Ravi Kumar, and Pasin Manurangsi.
Extended abstract at PriML/PPML 2020 workshop (Oral presentation). [workshop link] [talk video] [slides]
In STOC 2021. [conf] [talk video]
My talk at MIT Algorithms & Complexity Seminar, March 2021. [talk video] [slides]
My talk at Boston-area DP seminar (BU, Harvard & Northeastern), February 2021. [slides]
-
Near-tight closure bounds for Littlestone and threshold dimensions.
Badih Ghazi, Noah Golowich, Ravi Kumar, and Pasin Manurangsi.
In ALT 2021 (Best student paper award). [conf] [talk video] [slides]
-
On the power of multiple anonymous messages.
Badih Ghazi, Noah Golowich, Ravi Kumar, Rasmus Pagh, and Ameya Velingker.
Extended abstract at FORC 2020. [conf] [talk video] [slides]
In Eurocrypt 2021. [conf] [talk video]
My talk at MIT CIS seminar, December 2019. [slides]
-
Tight last-iterate convergence rates for no-regret learning in multi-player games.
() Noah Golowich, Sarath Pattathil, and Constantinos Daskalakis.
In NeurIPS 2020. [conf]
-
Independent Policy Gradient Methods for Competitive Reinforcement Learning.
Constantinos Daskalakis, Dylan J. Foster, and Noah Golowich.
In NeurIPS 2020. [conf] [slides]
-
Last iterate is slower than averaged iterate in smooth convex-concave saddle point problems.
() Noah Golowich, Sarath Pattathil, Constantinos Daskalakis, and Asuman Ozdaglar.
In COLT 2020. [conf] [talk video] [slides]
-
Pure differentially private summation from anonymous messages.
Badih Ghazi, Noah Golowich, Ravi Kumar, Pasin Manurangsi, Rasmus Pagh, and Ameya Velingker.
In ITC 2020. [conf]
-
Round complexity of common randomness generation: the amortized setting.
Noah Golowich and Madhu Sudan.
In SODA 2020. [conf] [slides]
-
A convergence analysis of gradient descent for deep linear neural networks.
Sanjeev Arora, Nadav Cohen, Noah Golowich, and Wei Hu.
In ICLR 2019. [conf]
-
Communication-rounds tradeoffs for common randomness and secret key generation.
Mitali Bafna, Badih Ghazi, Noah Golowich, and Madhu Sudan.
In SODA 2019. [conf] [slides]
-
Machine learning for optimal economic design.
Paul Dütting, Zhe Feng, Noah Golowich, Harikrishna Narasimhan, and David C. Parkes.
In The Future of Economic Design, 2019. [journal]
-
Deep learning for multi-facility location mechanism design.
Noah Golowich, Harikrishna Narasimhan, and David C. Parkes.
In IJCAI 2018. [conf]
-
Size-independent sample complexity of neural networks.
Noah Golowich, Alexander Rakhlin, and Ohad Shamir.
In COLT 2018. [conf]
In Information and Inference, 2019. [journal]
-
Coloring chains for compression with uncertain priors.
Noah Golowich.
In The Electronic Journal of Combinatorics, 2018. [journal]
-
The m-degenerate chromatic number of a digraph.
Noah Golowich.
In Discrete Mathematics, 2016. [journal]
-
Acyclic subgraphs of planar digraphs.
Noah Golowich and David Rolnick.
In The Electronic Journal of Combinatorics, 2016. [journal]
-
Resolving a conjecture on degree of regularity of linear homogeneous equations.
Noah Golowich.
In The Electronic Journal of Combinatorics, 2014. [journal]
-
Degree of regularity of linear homogeneous equations.
Kavish Gandhi, Noah Golowich, and László M. Lovász.
In Journal of Combinatorics, 2014. [journal]
-
On Learning Parities with Dependent Noise.
Noah Golowich, Ankur Moitra, and Dhruv Rohatgi.
Lecture Notes
Below are some lecture notes of mine for courses I have taken.
- 6.S977 - The Sum of Squares Method (taught by Sam Hopkins).
- 6.S891 - Approximate Counting and Sampling (taught by Kuikui Liu).
