About
I am an Assistant Professor in the Computer Science Department at Carnegie Mellon University, and a part-time Research Scientist at Google DeepMind on the Magenta team.
My research goal is to develop and responsibly deploy generative AI for music and creativity, thereby unlocking and augmenting human creative potential. To this end, my work involves (1) improving machine learning methods for controllable generative modeling of music, audio, and other sequential data, and (2) deploying real-world interactive systems that allow a broader audience, inclusive of non-musicians, to harness generative music AI through intuitive controls.
I am particularly drawn to research ideas with direct real-world applications, and my work often involves building systems for real users and evaluating them in the wild. For example, my work on Piano Genie was used in a live performance by The Flaming Lips, and my work on Dance Dance Convolution powers Beat Sage, a live service used by thousands of users a day to create multimodal music game content.
Previously, I was a postdoc at Stanford CS advised by Percy Liang. Before that, I completed a PhD at UCSD co-advised by Miller Puckette and Julian McAuley.
News
- (Dec 2025) Our project on multimodal music AI, co-led by Dr. Annie Hsieh, was awarded a grant through the Schmidt HAVI program.
- (Dec 2025) Two invited talks at NeurIPS 2025 workshops: GenProCC (recording) and AI4Music (recording).
- (Nov 2025) Reappointed as Assistant Professor at CMU.
- (Oct 2025) Invited talk "What music can teach language models" at the CMU LTI Colloquium (recording).
- (Sep 2025) Two papers (Music Arena, Live Music Models) accepted to the NeurIPS 2025 Creative AI Track, to be presented at the main conference.
- (Sep 2025) Paper accepted to the LLM4Music workshop @ ISMIR 2025.
- (Aug 2025) My Google research project SingSong featured in the Pixel Recorder app.
- (Jul 2025) Music Arena released (paper)!
- (Jul 2025) Paper on sound morphing accepted at WASPAA 2025.
- (Jul 2025) Two papers accepted to ISMIR 2025 on music evaluation and real-time adaptation (preprints forthcoming).
- (Jun 2025) Two papers accepted at ICML 2025 workshops (R2-FM, DataWorld; preprints forthcoming).
- (Jun 2025) Led the release of Magenta RealTime, a new open-weights real-time music generation model, along with my team at Google DeepMind.
- (May 2025) Our paper on Copilot Arena accepted to ICML 2025.
- (May 2025) My PhD student Wayne Chi (co-advised w/ Ameet Talwalkar) quoted in a WSJ article.
- (Apr 2025) CMU SCS news article featuring Copilot Arena.
- (Apr 2025) Our paper on co-design for audio codec LMs received the Best Paper Award (top 1) at the NAACL Student Research Workshop 2025.
- (Apr 2025) My PhD student Wayne Chi (co-advised w/ Ameet Talwalkar) received the NDSEG Fellowship.
- (Mar 2025) Paper on Copilot Arena accepted to the HEAL@CHI workshop.
- (Mar 2025) Project proposal w/ Annie Hsieh (CMU CFA) funded by the AIxArts incubator fund at CMU.
- (Mar 2025) Workshop proposal on ML for Audio accepted at ICML 2025.
- (Mar 2025) Our paper on AMUSE recognized with a Best Paper Award (top 1% of submissions) at CHI 2025.
- (Mar 2025) Paper accepted at the NAACL Student Research Workshop 2025.
- (Mar 2025) Shoutout from Darkside for helping them train RAVE for their album Nothing.
- (Feb 2025) Our work on VERSA accepted to the NAACL 2025 Demo Track.
- (Feb 2025) New preprint on Copilot Arena.
- (Jan 2025) Our work on AMUSE accepted to CHI 2025.
- (Nov 2024) Blog post on Copilot Arena.
- (Nov 2024) One paper accepted at the NeurIPS 2024 Open World Agents Workshop.
- (Oct 2024) Invited talk at SANE 2024 (video, slides).
- (Oct 2024) Launch of Copilot Arena, a VSCode extension for evaluating LLMs for coding assistance.
- (Oct 2024) Three extended abstracts to appear at the ISMIR 2024 Late Breaking Demos.
- (Oct 2024) One paper accepted at the NeurIPS 2024 Audio Imagination Workshop.
- (Aug 2024) Official launch of Hookpad Aria, a Copilot for songwriters.
- (Jun 2024) Our work on Music-aware Virtual Assistants accepted at UIST 2024.
- (Jun 2024) Two papers accepted to ISMIR 2024.
- (Apr 2024) Named the Dannenberg Assistant Professor of Computer Science.
- (Apr 2024) Serving as Senior Program Committee Co-chair for ISMIR 2024, which received a record number of submissions (415).
- (Mar 2024) A Copilot-like tool for musicians featuring the Anticipatory Music Transformer launched in beta.
- (Mar 2024) Music ControlNet to appear in TASLP (IEEE/ACM Transactions on Audio, Speech, and Language Processing).
- (Mar 2024) Anticipatory Music Transformer to appear in TMLR (Transactions on Machine Learning Research).
- (Mar 2024) Launch of MusicFX DJ Mode, a real-time music audio generation tool developed by my team at Google.
- (Nov 2023) SingSong incorporated into Google DeepMind's Music AI Tools.
- (Nov 2023) Work presented at the HCMIR Workshop by Michael Feffer.
- (Nov 2023) New preprint on controllable music generation led by Shih-Lun Wu (applying to PhD positions!).
- (Oct 2023) Interviewed for a Pitchfork article about MusicLM.
- (Oct 2023) Invited talk at the Stanford HAI Conference (recording, slides).
- (Oct 2023) Guest lecture for the CMU LLM Course (slides).
- (Oct 2023) New PhD students: Irmak Bukey and Wayne Chi.
- (Sep 2023) Started as Assistant Professor at CMU.
G-CLef

I lead the Generative Creativity Lab (G-CLef) at CMU. Our mission is to empower and enrich human creativity and productivity with generative AI. We focus primarily on the intersection of music and AI, though we also work on other applications such as programming, gaming, and natural language. Please visit this page to learn more about our research interests and to apply.
Mentees
- CSD PhD student
- CSD PhD student (co-advised w/ Ameet Talwalkar)
- CSD PhD student
- CSD PhD student
Alumni
- Music Tech MS student; now PhD @ CMU HCII (incoming)
- Visiting researcher
- CS MS student; now Quant @ Minhong
- LTI MS student; now PhD @ MIT EECS