Description
An efficient and high-intensity bootcamp designed to teach you the fundamentals of deep learning as quickly as possible!
MIT's introductory program on deep learning methods with applications to natural language processing, computer vision, biology, and more! Students will gain foundational knowledge of deep learning algorithms, practical experience in building neural networks, and an understanding of cutting-edge topics including large language models and generative AI. The program concludes with a project proposal competition with feedback from staff and a panel of industry sponsors. Prerequisites assume calculus (i.e., taking derivatives) and linear algebra (i.e., matrix multiplication); we'll try to explain everything else along the way! Experience in Python is helpful but not necessary. Listeners are welcome!
Time and Location
Mon Jan 5 - Fri Jan 9, 2026
Every day 1-4pm ET
MIT Room 32-123
New lectures, competitions, & prizes!
Schedule
2026 edition coming soon!
Taught in-person at MIT — open-sourced to the world.
Intro to Deep Learning
Lecture 1
Jan. 5, 2026 [Slides] [Video] coming soon!
Deep Sequence Modeling
Lecture 2
Jan. 5, 2026 [Slides] [Video] coming soon!
Deep Learning in Python; Music Generation
Software Lab 1
[Code] coming soon!
Deep Computer Vision
Lecture 3
Jan. 6, 2026 [Slides] [Video] coming soon!
Deep Generative Modeling
Lecture 4
Jan. 6, 2026 [Slides] [Video] coming soon!
Facial Detection Systems
Software Lab 2
[Paper] [Code] coming soon!
Deep Reinforcement Learning
Lecture 5
Jan. 7, 2026 [Slides] [Video] coming soon!
New Frontiers
Lecture 6
Jan. 7, 2026 [Slides] [Video] coming soon!
Fine-Tune an LLM, You Must!
Software Lab 3
[Code] coming soon!
Guest Lecture
Lecture 7
Jan. 8, 2026 [Slides] [Video] coming soon!
Guest Lecture
Lecture 8
Jan. 8, 2026 [Slides] [Video] coming soon!
Final Project
Work on final projects
Guest Lecture
Lecture 9
Jan. 9, 2026 [Slides] [Video] coming soon!
Guest Lecture
Lecture 10
Jan. 9, 2026 [Slides] [Video] coming soon!
Project Presentations
Pitch your ideas, awards, and celebration!
Frequently Asked Questions
For any other questions, please reach out to the staff at introtodeeplearning-staff@mit.edu.
In addition, everyone interested in taking the course (MIT or not, in person or not) should also sign up via the internal registration form to receive updates.
After the MIT program, the content will be open-sourced to the world. Again, please sign up via the internal registration form to receive updates when this occurs.
We expect only elementary knowledge of linear algebra and calculus: how to multiply matrices, take derivatives, and apply the chain rule. Familiarity with Python is a big plus as well. The program will be beginner-friendly, since many of our registered students come from outside computer science.
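For a concrete sense of that level, here is a tiny sketch (using NumPy purely as an illustration; it is not part of the course materials):

```python
import numpy as np

# Linear algebra at the expected level: a matrix-vector product.
W = np.array([[1.0, 2.0],
              [3.0, 4.0]])
x = np.array([0.5, -1.0])
y = W @ x  # -> array([-1.5, -2.5])

# Calculus at the expected level: the chain rule.
# For f(x) = (3x + 1)^2, the chain rule gives df/dx = 2(3x + 1) * 3.
def df_dx(x):
    return 2 * (3 * x + 1) * 3  # outer derivative times inner derivative

print(y, df_dx(2.0))  # [-1.5 -2.5] 42.0
```

If both computations above feel familiar, you have the math background the program assumes.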
If you would like to receive related updates and lecture materials, please subscribe to our YouTube channel and sign up for our mailing list.
All materials are open-sourced to the world for free and licensed under the MIT license. If you are an instructor and would like to use any materials from this program (slides, labs, code), you must add the following reference to each slide:
© Alexander Amini and Ava Amini
MIT 6.S191: Introduction to Deep Learning
IntroToDeepLearning.com
If you are an MIT student, postdoc, faculty, or affiliate and would like to become involved with this program, please email introtodeeplearning-staff@mit.edu. We are always accepting new applications to join the program staff.
This class would not be possible without our amazing sponsors; it has been sponsored by Google, IBM, NVIDIA, Microsoft, Amazon, LambdaLabs, Tencent AI, Ernst & Young, and Onepanel. If you are interested in becoming involved in this program as a sponsor, please contact us at introtodeeplearning-staff@mit.edu.
Team
TAs and Staff
Victory Yinka-Banjo
Lead TA
Sadhana Lolla
Lead TA
Maxi Attiogbe
Teaching Assistant
David Chaudhari
Teaching Assistant
Shorna Alam
Teaching Assistant
Anirudh Valiveru
Teaching Assistant
Divya Nori
Teaching Assistant
Alex Lavaee
Teaching Assistant
Shreya Ravikumar
Teaching Assistant
Franklin Wang
Teaching Assistant
John Werner
Community & Strategy
Eva Xie
Teaching Assistant
We are always accepting new applications to join the program staff. If you are interested in becoming a Teaching Assistant (TA), please contact introtodeeplearning-staff@mit.edu.
Sponsors
This program would not be possible without our amazing sponsors! If you are interested in becoming involved as a sponsor, please contact us at introtodeeplearning-staff@mit.edu.
Copyright © MIT 6.S191.
Introduction to Language Modeling
Peter Grabowski, Lead of Gemini Applied Research, Google
Talk Abstract
Want to get started with LLMs? This lecture will cover an introduction to language modeling and prompt engineering, example use cases and applications, and a discussion of common considerations for LLM usage (cost, efficiency, accuracy, bias).
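To give a flavor of what language modeling means in practice (this sketch is illustrative, not taken from the talk), a pretrained model can be asked for its next-token probabilities; the example assumes the Hugging Face transformers library and the public gpt2 checkpoint:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# A language model assigns a probability to each possible next token.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

inputs = tokenizer("Deep learning is", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # shape: (batch, seq_len, vocab_size)

# Distribution over the token that would follow the prompt.
probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(probs, k=5)
for p, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(token_id)!r}: {p.item():.3f}")
```

At its core, text generation is repeatedly sampling from this distribution one token at a time; prompt engineering is the craft of shaping the prefix so the distribution favors useful continuations.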
Speaker Bio
Peter leads the Gemini Applied Research group, focused on developing fast, efficient, and scalable models in partnership with DeepMind, Search, Ads, Cloud, and other teams across Google. Prior to that, he led a group focused on Google's Enterprise AI, worked on making the Google Assistant better for kids, and led the data integration and machine learning team at Nest. Peter loves to teach and is a member of the faculty at UC Berkeley's School of Information, where he teaches courses on Deep Learning and Natural Language Processing.
Introduction to LLM Post-Training
Maxime Labonne, Head of Post-Training, Liquid AI
Talk Abstract
In this talk, we will cover the fundamentals of modern LLM post-training at various scales, with concrete examples. High-quality data generation is at the core of this process, with a focus on the accuracy, diversity, and complexity of the training samples. We will explore key training techniques, including supervised fine-tuning, preference alignment, and model merging. The lecture will delve into evaluation frameworks, weighing their pros and cons for measuring model performance. We will conclude with an overview of emerging trends in post-training methodologies and their implications for the future of LLM development.
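As a minimal illustration of the first of those techniques (a sketch under simplifying assumptions, not Liquid AI's pipeline): supervised fine-tuning is ordinary gradient descent on (prompt, response) pairs, with the loss masked so it is computed only on response tokens. The model name and the two training pairs below are hypothetical placeholders; the example assumes the Hugging Face transformers library:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # placeholder: in practice, a modern base model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)

# Hypothetical supervised fine-tuning data: (prompt, desired response).
pairs = [
    ("Q: What is 2 + 2?\nA:", " 4"),
    ("Q: What is the capital of France?\nA:", " Paris"),
]

model.train()
for prompt, response in pairs:
    ids = tokenizer(prompt + response, return_tensors="pt").input_ids
    prompt_len = tokenizer(prompt, return_tensors="pt").input_ids.size(1)

    labels = ids.clone()
    labels[:, :prompt_len] = -100  # -100 is ignored by the cross-entropy loss

    loss = model(input_ids=ids, labels=labels).loss  # causal LM loss on response only
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```

Preference alignment then goes a step further, optimizing the model against comparisons between responses rather than single gold answers.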
Speaker Bio
Maxime Labonne is Head of Post-Training at Liquid AI. He holds a Ph.D. in Machine Learning from the Polytechnic Institute of Paris and is a Google Developer Expert in AI/ML. He has made significant contributions to the open-source community, including the LLM Course, tutorials on fine-tuning, tools such as LLM AutoEval, and several state-of-the-art models like NeuralDaredevil. He is the author of the best-selling books “LLM Engineer’s Handbook” and “Hands-On Graph Neural Networks Using Python”.
AI to Optimize Biology
Ava Amini, Senior Research Scientist, Microsoft
Talk Abstract
The potential of AI in biology is immense, yet its success is contingent on interfacing effectively with wet-lab experimentation and remaining grounded in the system, structure, and physics of biology. I will share how, at Microsoft Research, we are developing new AI systems that help us better understand and design biology via generative design and interactive discovery. I will focus on Generative AI models for the design of novel and useful biomolecules, expanding our ability to engineer new proteins for therapeutic, biological, and industrial applications and beyond.
Speaker Bio
Ava Amini is a Senior Researcher at Microsoft, where she develops new AI technologies for precision biology and medicine. She completed her PhD in Biophysics at Harvard University and her BS in Computer Science and Molecular Biology at MIT, and has been recognized by the National Academy of Engineering, the National Science Foundation, TEDx, VentureBeat, and the Association of MIT Alumnae, among others, for her research. Ava is passionate about AI education and outreach: she is a lead organizer and instructor for MIT Introduction to Deep Learning, where she has taught AI to thousands of students in person and to more than 100,000 registered students online, garnering more than 11 million online lecture views, and she served as a co-founder and director of MomentumAI, which ran all-expenses-paid education programs for high schoolers to learn AI.
A Hippocratic Oath, for your AI
Douglas Blank, Head of Research, Comet ML
Talk Abstract
While Deep Learning has achieved remarkable advancements, I believe its deployment requires a shift in perspective. Just as the Hippocratic Oath guides medical practice, a fundamental ethical framework is crucial for responsible AI deployment. This talk delves into the critical question: can your AI system adhere to the principle of 'Do No Harm'? We will explore the risks of releasing your AI project into the wild, considering the ethical implications alongside the technical advancements.
Speaker Bio
Douglas Blank is the Head of Research at Comet ML, where he works with many teams, including Engineering, Customer Success, and Product Design. Prior to Comet, Douglas completed his PhD in Computer Science and Cognitive Science at Indiana University Bloomington. His thesis explored training neural networks to make analogies. He taught courses in Robotics, Cognitive Science, and Computer Science at Bryn Mawr College, where he created a research agenda called "Developmental Robotics," focused on using Deep Learning as the foundation for a mentally developing robot.