Daniel Soudry
Welcome to my home page!
I am an associate professor and Schmidt Career Advancement Chair in AI, working in the Department of Electrical & Computer Engineering at the Technion, in the area of machine learning. I am especially interested in all aspects of neural networks and deep learning: see Research for my current research and Background for a more general background.
I am looking for highly motivated and excellent MSc/PhD students and Post-Docs with similar research interests to join our team! (More Info)
Short Bio
I did my post-doc (as a Gruss Lipper fellow) working with Prof. Liam Paninski in the Department of Statistics and the Center for Theoretical Neuroscience at Columbia University.
I did my Ph.D. (2008-2013, direct track) in the Network Biology Research Laboratory in the Department of Electrical & Computer Engineering at the Technion, Israel Institute of Technology, under the guidance of Prof. Ron Meir.
In 2008 I graduated summa cum laude with a B.Sc. in Electrical Engineering and a B.Sc. in Physics, after studying at the Technion since 2004.
Recorded Talks
Talks for the general audience:
- What do I do? A short (1:29) Promotional Video
- AI for Everyday Usage in Academia (Hebrew), “Mistaglim 2.0” zoom meeting, 2024.
- On the age of deep learning and the revolution in artificial intelligence (Hebrew), Rambam Staff meeting, 2019.
Talks that require a machine learning background:
- A YouTuber I didn’t even know made a very nice, professional video about our paper FP4 All the Way: Fully Quantized Training of LLMs.
- On catastrophic forgetting in linear regression and the implicit bias of minima stability (English), SlowDNN, Abu-Dhabi, 2022.
- Resource Efficiency and Algorithmic Bias Control in Deep Learning (Hebrew), MLIS 2022.
- Algorithmic Bias Control in Deep Learning (English), Hebrew University, CS colloquium, 2020.
- Theory and Practice of Deep Neural Networks (English), Deep Learning and the Brain, 2019.
- Unsupervised Podcast (Hebrew), 2018.
- Improving Training Efficiency in Deep Learning (Hebrew), SIPL Annual Event, 2018.
- How “bad” local minima in neural network loss vanish with mild Overparameterization (English), Gatsby Tri-center meeting, 2017.
In the Press
- I was selected to The Marker’s “40 under 40” list.
- A short public relations video on the collaboration with Intel.

News
--- September 18th 2025 ---
Two papers accepted to ALT 2026!
--- September 18th 2025 ---
Six papers accepted to NeurIPS 2025, two as spotlights!
--- May 1st 2025 ---
The paper “When Diffusion Models Memorize: Inductive Biases in Probability Flow of Minimum-Norm Shallow Neural Nets” has been accepted at ICML 2025.
--- January 11th 2025 ---
The paper “Scaling FP8 training to trillion-token LLMs” has been accepted at ICLR 2025, as a Spotlight presentation!
--- September 25th 2024 ---
Four papers accepted to NeurIPS, including one paper as a Spotlight presentation! (see publication list for details)
--- May 1st 2024 ---
The paper “How Uniform Random Weights Induce Non-uniform Bias - Typical Interpolating Neural Networks Generalize with Narrow Teachers” has been accepted at ICML 2024, as a Spotlight presentation!
--- January 24th 2024 ---
The paper “Towards Cheaper Inference in Deep Networks with Lower Bit-Width Accumulators” has been accepted at ICLR 2024.
--- January 24th 2024 ---
The paper “The Joint Effect of Task Similarity and Overparameterization on Catastrophic Forgetting - An Analytical Model” has been accepted at ICLR 2024.
--- September 21st 2023 ---
The paper “How do Minimum-Norm Shallow Denoisers Look in Function Space?” has been accepted at NeurIPS 2023.
... see all News
Funding
- ERC (Project: A-B-C-Deep, 101039436)
- Intel (5 grants)
- Israel Science Foundation (2018-2022)
- Israel Innovation Authority (2019-2022)
- AIGrant.org (2017)