Class is in session
Mastering LLMs For Developers & Data Scientists
3 Weeks · Cohort-based Course
An online course for everything LLMs.
Course overview
👉 SEE OUR NEW COURSE - AI Evals For Engineers & PMs: https://evals.info 👈
This course is $20 only because Maven doesn't allow free courses.
Build skills to be effective with LLMs
---
This started as an LLM fine-tuning course. It organically grew into a learning event with world-class speakers on a broad range of LLM topics. The original fine-tuning course is still here as a series of workshops. But there are now many self-contained talks and office hours from experts on many Generative AI topics.
All materials + recordings are available instantly, on demand. There are 11 talks and 4 workshops (and growing) in addition to office hours.
This is a past course that we have made nearly free ($20). All talks are recorded and accessible through Maven for life. We are gifting this course to the community. See: https://hamel.dev/blog/posts/course/ There is no active support, no office hours, and no ability to ask questions of instructors.
Conference Talks
------------------------------
Jeremy Howard: Co-Founder Answer.AI & Fast.AI
- Build Applications For LLMs in Python
Sophia Yang: Head of Developer Relations, Mistral AI
- Best Practices For Fine Tuning Mistral
Simon Willison: Creator of Datasette, co-creator of Django, PSF Board Member
- Language models on the command-line
JJ Allaire: CEO, Posit (formerly RStudio) & Researcher for the UK AI Safety Institute
- Inspect, An OSS framework for LLM evals
Wing Lian: Creator of Axolotl library for LLM fine-tuning
- Fine-Tuning w/Axolotl
Mark Saroufim and Jane Xu: PyTorch developers @ Meta
- Slaying OOMs with PyTorch FSDP and torchao
Jason Liu: Creator of Instructor
- Systematically improving RAG applications
Paige Bailey: DevRel Lead, GenAI, Google
- When to Fine-Tune?
Emmanuel Ameisen: Research Engineer, Anthropic
- Why Fine-Tuning is Dead
Hailey Schoelkopf: Research Scientist at EleutherAI, maintainer of LM Evaluation Harness
- A Deep Dive on LLM Evaluation
Johno Whitaker: R&D at AnswerAI
- Fine-Tuning Napkin Math
John Berryman: Author of O'Reilly Book Prompt Engineering for LLMs
- Prompt Eng Best Practices
Ben Clavié: R&D at AnswerAI
- Beyond the Basics of RAG
Abhishek Thakur: Leads AutoTrain at Hugging Face
- Train (almost) any LLM using 🤗 AutoTrain
Kyle Corbitt: Currently building OpenPipe
- From prompt to model: fine-tuning when you've already deployed LLMs in prod
Ankur Goyal: CEO and Founder at Braintrust
- LLM Eval For Text2SQL
Freddy Boulton: Software Engineer at 🤗
- Let's Go, Gradio!
Jo Bergum: Distinguished Engineer at Vespa
- Back to basics for RAG
Fine-Tuning Course
---------------------------
Run an end-to-end LLM fine-tuning project with modern tools and best practices. Four workshops guide you through productionizing LLMs, including evals, fine-tuning and serving.
Workshop 1: Determine when (and when not) to fine-tune an LLM
Workshop 2: Train your first fine-tuned LLM with Axolotl
Workshop 3: Set up instrumentation and evaluation to incrementally improve your model
Workshop 4: Deploy Your Model
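To give a taste of the evaluation theme in Workshop 3: at its simplest, an eval scores model outputs against reference answers. A minimal, dependency-free sketch (illustrative only; the function name and data here are hypothetical, not course code):

```python
# Minimal exact-match eval harness: score model outputs against references.
# Illustrative only -- real evals add LLM-as-judge scoring, traces, and more.

def exact_match_accuracy(predictions, references):
    """Fraction of predictions matching the reference after normalization."""
    assert len(predictions) == len(references)

    def norm(s):
        # Lowercase and collapse whitespace so trivial differences don't count.
        return " ".join(s.lower().split())

    hits = sum(norm(p) == norm(r) for p, r in zip(predictions, references))
    return hits / len(predictions)

preds = ["SELECT * FROM users", "select name from  users"]
refs = ["SELECT * FROM users", "SELECT name FROM users"]
print(exact_match_accuracy(preds, refs))  # → 1.0
```

The course's evaluation material goes well beyond exact match, but the scoring loop it instruments has this basic shape.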
This is accompanied by 5+ hours of office hours. Lectures explain the why and demonstrate the how for all the key pieces of LLM fine-tuning. Your hands-on experience in the course project will ensure you're ready to apply your new skills in real business scenarios.
The Fine-Tuning course has these guest speakers:
- Shreya Shankar: LLMOps and LLM Evaluations researcher
- Zach Mueller: Lead maintainer of HuggingFace accelerate
- Bryan Bischof: Director of AI Engineering at Hex
- Charles Frye: AI Engineer at Modal Labs
- Eugene Yan: Senior Applied Scientist @ Amazon
- Harrison Chase: CEO of LangChain
- Travis Addair: Co-Founder & CTO of Predibase
- Joe Hoover: Lead ML Engineer at Replicate
FAQ:
-------
Q: It says this course already started. Should I still enroll?
A: Yes. Everything is recorded, so you can watch videos for any events that have happened so far, join for live events moving forward, and even learn from talks long after the conference is over.
Q: Will there be a future cohort?
A: No. We were fortunate to have so many world-class speakers. We don't think this can be replicated, so it is now a one-time-only event with all recordings available.
Q: Are you still giving out free compute credits?
A: No. Students who enrolled after 5/29/2024 are not eligible for compute credits.
Who Is It For?
01
Data scientists looking to repurpose skills from conventional ML into LLMs and generative AI
02
Software engineers with Python experience looking to add the newest and most important tools in tech
03
Programmers who have called LLM APIs and now want to take their skills to the next level by building and deploying fine-tuned LLMs
What you’ll get out of this conference
Connect With A Large Community Of AI Practitioners
Discord with 1000+ members attending the conference.
Learn more about LLMs
Topics such as RAG, evals, inference, and fine-tuning are covered.
Learn about the best tools
We have curated the tools that we like the most. Credits for many of these tools are provided.
Learn about fine-tuning in-depth
This conference started as an LLM fine-tuning course. That course is still here and runs across 4 workshops.
What’s included
Live sessions
Learn directly from Dan Becker & Hamel Husain in a real-time, interactive format.
Lifetime access
Go back to course content and recordings whenever you need to.
Community of peers
Stay accountable and share insights with like-minded professionals.
Certificate of completion
Share your new skills with your employer or on LinkedIn.
Maven Guarantee
This course is backed by the Maven Guarantee. Students are eligible for a full refund up until the halfway point of the course.
Course syllabus
34 live sessions • 13 lessons
Week 1
Jan 1 - Fine-Tuning Workshop 1: When and Why to Fine-Tune an LLM

Week 2
Jan 8 - Fine-Tuning Workshop 2: Fine-Tuning with Axolotl (guest speakers Wing Lian, Zach Mueller)
Jan 11 - Conference Talk: From prompt to model: fine-tuning when you've already deployed LLMs in prod (with Kyle Corbitt)
Jan 11 - Office Hours: Axolotl w/ Wing Lian
Jan 11 - Office Hours: FSDP, DeepSpeed and Accelerate w/ Zach Mueller

Week 3
Jan 15 - Office Hours: Gradio w/ Freddy Boulton
Jan 15 - Fine-Tuning Workshop 3: Instrumenting & Evaluating LLMs (guest speakers Harrison Chase, Bryan Bischof, Shreya Shankar, Eugene Yan)
Jan 16 - Conference Talk: LLM Eval For Text2SQL w/ Ankur Goyal
Jan 16 - Conference Talk: Prompt Engineering Workshop w/ John Berryman
Jan 16 - Conference Talk: Inspect, An OSS framework for LLM evals w/ JJ Allaire
Jan 17 - Office Hours: Modal w/ Charles Frye
Jan 17 - Office Hours: LangChain/LangSmith
Jan 18 - Conference Talk: Napkin Math For Fine-Tuning w/ Johno Whitaker
Jan 18 - Conference Talk: Train (almost) any LLM using 🤗 AutoTrain
Jan 18 - Optional: Johno Whitaker round 2

Week 4
Jan 22 - Fine-Tuning Workshop 4: Deploying Fine-Tuned Models (guest speakers Travis Addair, Charles Frye, Joe Hoover)
Jan 23 - Conference Talk: Best Practices For Fine-Tuning Mistral w/ Sophia Yang
Jan 23 - Conference Talk: Creating, curating, and cleaning data for LLMs w/ Daniel van Strien
Jan 23 - Conference Talk: Why Fine-Tuning is Dead w/ Emmanuel Ameisen
Jan 24 - Conference Talk: Systematically improving RAG applications w/ Jason Liu
Jan 24 - Conference Talk: Build Applications For LLMs in Python w/ Jeremy Howard & Johno Whitaker
Jan 25 - Optional: Getting the most out of your LLM experiments w/ Thomas Capelle

Week 5
Jan 28 - Conference Talk: Slaying OOMs with PyTorch FSDP and torchao (with Mark Saroufim and Jane Xu)
Jan 29 - Conference Talk: When to Fine-Tune? (with Paige Bailey)
Jan 29 - Conference Talk: Beyond the basics of Retrieval for Augmenting Generation (w/ Ben Clavié)
Jan 29 - Conference Talk: Modal: Simple Scalable Serverless Services w/ Charles Frye
Jan 29 - Optional: Replicate Office Hours
Jan 29 - Conference Talk: A Deep Dive on LLM Evaluation (w/ Hailey Schoelkopf)
Jan 30 - Conference Talk: Language models on the command-line w/ Simon Willison
Jan 30 - Office Hours: Predibase w/ Travis Addair
Jan 30 - Conference Talk: Fine-Tuning OpenAI Models - Best Practices w/ Steven Heidel
Jan 30 - Optional: Fine-Tuning LLMs for Function Calling

Week 6
Feb 5 - Back to Basics for RAG w/ Jo Bergum
Feb 8 - Optional: LiveStream - Lessons From A Year of Building w/ LLMs
What students are saying
Meet your instructors / conference organizers
Dan Becker
Chief Generative AI Architect @ Straive
Dan has worked in AI since 2011, when he finished 2nd (out of 1350+ teams) in a Kaggle competition with a $500k prize. He contributed code to TensorFlow as a data scientist at Google and he has taught online deep learning courses to over 250k people. Dan has advised AI projects for 6 companies in the Fortune 100.
Hamel Husain
Founder @ Parlance Labs
Hamel is an ML engineer who loves building machine learning infrastructure and tools 👷🏼♂️. He leads or contributes to many popular open-source machine learning projects. His extensive experience (20+ years) as a machine learning engineer spans various industries, including large tech companies like Airbnb and GitHub.
Hamel is an independent consultant helping companies operationalize LLMs. At GitHub, Hamel led CodeSearchNet, a large language model for semantic search that was a precursor to Copilot, a large language model used by millions of developers.
Be the first to know about upcoming cohorts
Mastering LLMs For Developers & Data Scientists
Course schedule
4-6 hours per week
Tuesdays
1:00pm - 3:00pm EST
Interactive weekly workshops where you will learn the tools you will apply in your course project.
Weekly projects
2 hours per week
You will build and deploy an LLM as part of the course project. The course project is divided into four weekly projects.
By the end, you will not only know about fine-tuning, but you will have hands-on experience doing it.