Brian Bullins
I am an assistant professor in the Department of Computer Science at Purdue University.
My research interests lie at the intersection of optimization and machine learning, both in theory and practice. In particular, I have worked on improving matrix estimation techniques for faster higher-order methods for convex and non-convex optimization, in both sequential and distributed settings.
Previously, I was a research assistant professor at the Toyota Technological Institute at Chicago (TTIC). I received my PhD in computer science from Princeton University in 2019, where I had the great fortune of being advised by Elad Hazan. Before that, I studied computer science and math at Duke University as a Benjamin N. Duke Scholar.
Google Scholar • Publications • Teaching
Email: {first initial}+{last name}@purdue.edu
Recent News
- [9/2025] Our paper on balancing gradient and Hessian queries in non-convex optimization has been accepted to NeurIPS 2025
- [5/2025] Our paper on faster acceleration for steepest descent has been accepted to COLT 2025
- [5/2025] Two papers (including one Oral Presentation) have been accepted to ICML 2025
- [5/2025] Area Chair for NeurIPS 2025
- [1/2025] Our paper on tight lower bounds for asymmetric high-order smooth and uniformly convex optimization has been accepted to ICLR 2025 (Oral Presentation)
- [1/2025] Area Chair for ICML 2025
- [9/2024] We are grateful to the National Science Foundation for their support of our research — thanks NSF!
Manuscripts
-
Convex optimization with p-norm oracles
with Deeksha Adil, Arun Jambulapati, and Aaron Sidford
Manuscript, under submission
-
Beyond first-order methods for non-convex non-concave min-max optimization
with Abhijeet Vyas
Manuscript, under submission
Publications
-
Balancing Gradient and Hessian Queries in Non-Convex Optimization
with Deeksha Adil, Aaron Sidford, and Chenyi Zhang
To appear in Neural Information Processing Systems (NeurIPS), 2025
-
Stacey: Promoting Stochastic Steepest Descent via Accelerated lp-Smooth Nonconvex Optimization
with Xinyu Luo, Cedar Site Bai, Bolian Li, Petros Drineas, and Ruqi Zhang
International Conference on Machine Learning (ICML), 2025
-
Model Immunization from a Condition Number Perspective
with Amber Yijia Zheng, Cedar Site Bai, and Raymond A. Yeh
International Conference on Machine Learning (ICML), 2025
(Oral Presentation)
-
Faster Acceleration for Steepest Descent
with Cedar Site Bai
Conference on Learning Theory (COLT), 2025
-
Tight Lower Bounds under Asymmetric High-Order Hölder Smoothness and Uniform Convexity
with Cedar Site Bai
International Conference on Learning Representations (ICLR), 2025
(Oral Presentation)
-
Local Composite Saddle Point Optimization
with Cedar Site Bai
International Conference on Learning Representations (ICLR), 2024
-
Competitive Gradient Optimization
with Abhijeet Vyas and Kamyar Azizzadenesheli
International Conference on Machine Learning (ICML), 2023
-
Variance-Reduced Conservative Policy Iteration
with Naman Agarwal and Karan Singh
Conference on Algorithmic Learning Theory (ALT), 2023
-
Towards Optimal Communication Complexity in Distributed Non-Convex Optimization
with Kumar Kshitij Patel, Lingxiao Wang, Blake Woodworth, and Nati Srebro
Neural Information Processing Systems (NeurIPS), 2022
-
Higher-order methods for convex-concave min-max optimization and monotone variational inequalities
with Kevin A. Lai
SIAM Journal on Optimization, 32(3):2208–2229, 2022
-
A Stochastic Newton Algorithm for Distributed Convex Optimization
with Kumar Kshitij Patel, Ohad Shamir, Nathan Srebro, and Blake Woodworth
Neural Information Processing Systems (NeurIPS), 2021
-
Unifying Width-Reduced Methods for Quasi-Self-Concordant Optimization
with Deeksha Adil and Sushant Sachdeva
Neural Information Processing Systems (NeurIPS), 2021
-
The Min-Max Complexity of Distributed Stochastic Convex Optimization with Intermittent Communication
with Blake Woodworth, Ohad Shamir, and Nathan Srebro
Conference on Learning Theory (COLT), 2021
(Best Paper Award)
-
Almost-linear-time Weighted lp-norm Solvers in Slightly Dense Graphs via Sparsification
with Deeksha Adil, Rasmus Kyng, and Sushant Sachdeva
International Colloquium on Automata, Languages, and Programming (ICALP), 2021
-
Adaptive regularization with cubics on manifolds
with Naman Agarwal, Nicolas Boumal, and Coralia Cartis
Mathematical Programming, 188(1):85–134, 2021
-
Is Local SGD Better than Minibatch SGD?
with Blake Woodworth, Kumar Kshitij Patel, Sebastian U. Stich, Zhen Dai, H. Brendan McMahan, Ohad Shamir, and Nathan Srebro
International Conference on Machine Learning (ICML), 2020
-
Highly smooth minimization of non-smooth problems
Conference on Learning Theory (COLT), 2020
This work is partially a merging of https://arxiv.org/abs/1812.10349 and https://arxiv.org/abs/1906.01621
-
Efficient Higher-Order Optimization for Machine Learning
PhD Thesis, Princeton University. 2019
-
Online Control with Adversarial Disturbances
with Naman Agarwal, Elad Hazan, Sham Kakade, and Karan Singh
International Conference on Machine Learning (ICML), 2019
-
Efficient Full-Matrix Adaptive Regularization
with Naman Agarwal, Xinyi Chen, Elad Hazan, Karan Singh, Cyril Zhang, and Yi Zhang
International Conference on Machine Learning (ICML), 2019
-
Generalize Across Tasks: Efficient Algorithms for Linear Representation Learning
with Elad Hazan, Adam Kalai, and Roi Livni
Conference on Algorithmic Learning Theory (ALT), 2019
-
Not-So-Random Features
with Cyril Zhang and Yi Zhang
International Conference on Learning Representations (ICLR), 2018
-
Finding Approximate Local Minima Faster than Gradient Descent
with Naman Agarwal, Zeyuan Allen-Zhu, Elad Hazan, and Tengyu Ma
Symposium on Theory of Computing (STOC), 2017
-
Second-Order Stochastic Optimization for Machine Learning in Linear Time
with Naman Agarwal and Elad Hazan
Journal of Machine Learning Research (JMLR), 18(116):1–40, 2017
2018 INFORMS Optimization Society Student Paper Prize, Honorable Mention
-
The Limits of Learning with Missing Data
with Elad Hazan and Tomer Koren
Neural Information Processing Systems (NeurIPS), 2016
-
Spectral properties of modularity matrices
with Marianna Bolla, Sorathan Chaturapruek, Shiwen Chen, and Katalin Friedl
Linear Algebra and its Applications, 473:359–376, 2015
Other publications
-
Optimal Methods for Higher-Order Smooth Monotone Variational Inequalities
with Deeksha Adil, Arun Jambulapati, and Sushant Sachdeva
Teaching
- Fall 2025: CS57100 — Artificial Intelligence
- Fall 2024: CS57100 — Artificial Intelligence
- Spring 2024: CS47100 — Introduction to Artificial Intelligence
- Fall 2023: CS57100 — Artificial Intelligence
- Spring 2023: CS50023 — Data Engineering I, CS50024 — Data Engineering II
- Fall 2022: CS59200 — Distributed Optimization for Machine Learning