NeurIPS 2025 Workshop on AI for Music:
Where Creativity Meets Computation
December 7 @ Room 27, San Diego Convention Center
Contact: aiformusicworkshop@gmail.com
Register here! (A “Workshops & Competitions” pass is required to attend the workshop.)
Instructions for the presenters
📸 Gallery

ℹ️ Description
This workshop explores the dynamic intersection of AI and music, a rapidly evolving field where creativity meets computation. Music is one of the most universal and emotionally resonant forms of human expression. Producing, understanding, and processing music presents unique challenges for machine learning due to its creative, expressive, subjective, and interactive nature. According to the IFPI Global Music Report 2025 published by the International Federation of the Phonographic Industry (IFPI), “AI will be one of the defining issues of our time and record companies have embraced its potential to enhance artist creativity and develop new and exciting fan experiences.” AI has a tremendous impact on all aspects of music, spanning composition, production, performance, distribution, and education. Recent years have also seen rapidly growing interest in AI music research among the machine learning community. In this first NeurIPS workshop dedicated to music since 2011, we want to bring together the music and AI communities to facilitate a timely, interdisciplinary conversation on the status and future of AI for music.
The goal of this workshop is twofold: First, we aim to explore the latest advances in AI’s applications for music, from analysis, creation, performance, production, and retrieval to music education and therapy. Second, we aim to discuss the impacts and implications of AI in music, including AI’s impacts on the music industry, the musician community, and music education, as well as the ethical, legal, and societal implications of AI music and AI’s implications for future musicians. We will emphasize networking and community building in this workshop to build sustainable research momentum.
The workshop will feature invited talks, contributed spotlight presentations, a poster and demo session, a panel discussion, and round-table discussions. We have invited six speakers from diverse backgrounds who will bring interdisciplinary perspectives to the audience. We will solicit original 4-page papers from the community. We will also call for demos accompanied by 2-page extended abstracts to accommodate the various formats AI music innovations may take.
📅 Schedule
| 8:00 - 8:10 | Opening Remarks |
| 8:10 - 8:40 | Invited Talk by Chris Donahue (CMU & Google DeepMind): Quantifying creativity? Rigorous evaluation for music AI |
| 8:40 - 9:10 | Invited Talk by Shlomo Dubnov (UC San Diego): Improvisation Agents and Multi-track Music Information Dynamics |
| 9:10 - 10:10 | ☕Coffee Break + Posters & Demos I |
| 10:10 - 10:40 | Lightning Talks |
| 10:40 - 11:10 | Oral Presentations: StylePitcher: Generating Style-Following, Expressive Pitch Curves for Versatile Singing Tasks (Jingyue Huang, Qihui Yang, Fei-Yueh Chen, Randal Leistikow, Yongyi Zang); E-Motion Baton: Human-in-the-Loop Music Generation via Expression and Gesture (Mingchen Ma, Stephen Ni-Hahn, Simon Mak, Yue Jiang, Cynthia Rudin) |
| 11:10 - 11:40 | Invited Talk by Ilaria Manco (Google DeepMind): Real-time Music Generation: Lowering Latency and Increasing Control |
| 11:40 - 12:10 | Invited Talk by Akira Maezawa (Yamaha): Design of AI-based Interactive Music Performance Systems in the Wild |
| 12:10 - 1:10 | 🍴Lunch Break |
| 1:10 - 2:00 | Posters & Demos II |
| 2:00 - 2:30 | Invited Talk by Julian McAuley (UC San Diego): Opportunities and Challenges in Music Recommendation |
| 2:30 - 3:00 | Invited Talk by Anna Huang (MIT): In Search of Human-AI Resonance |
| 3:00 - 4:00 | ☕Coffee Break + Posters & Demos III |
| 4:00 - 4:50 | Panel Discussion (with invited speakers) |
| 4:50 - 5:00 | Closing Remarks |
💡 Invited Speakers
Chris Donahue is the Dannenberg Assistant Professor in the Computer Science Department (CSD) at Carnegie Mellon University, and a part-time Research Scientist at Google DeepMind. At CMU, Chris leads the Generative Creativity Lab (G-CLef), with a mission to empower and enrich human creativity and productivity with generative AI. This primarily involves work at the intersection of music and AI, though it also includes work on other applications such as code, gaming, and language. Chris’s research has also translated to real-world impact in music, from integration into performances by professional bands (The Flaming Lips) to consumer music AI tools (Beat Sage, Hookpad Aria) that have been used hundreds of thousands of times. Before CMU, Chris was a postdoctoral scholar in the CS department at Stanford, advised by Percy Liang. Chris holds a PhD from UC San Diego, where he was jointly advised by Miller Puckette (music) and Julian McAuley (CS).
Shlomo Dubnov is a Professor in the Music and CSE departments at the University of California San Diego. He is a founding faculty member of the Halıcıoğlu Data Science Institute and Director of the Center for Research in Entertainment and Learning (CREL) at UC San Diego’s Qualcomm Institute.
Ilaria Manco is a Research Scientist in the Magenta team at Google DeepMind. Her research spans music generation and understanding, with a current focus on new forms of musical interaction via controllable, real-time generative models. Ilaria received her PhD from Queen Mary University of London, where she developed multimodal representation learning approaches to connect music and language for a variety of music understanding tasks.
Akira Maezawa leads the music informatics research group MINA Lab at Yamaha, where he and his team have developed music technologies integrated into mobile applications, digital instruments, and interactive systems featured in concert events and installations around the world. The projects he has led have earned numerous awards and distinctions, including the Entertainment Lions for Music at the Cannes Lions International Festival of Creativity and the Research and Engineering Award from the Information Processing Society of Japan.
Julian McAuley is a Professor in the Department of Computer Science and Engineering at the University of California San Diego. He works on applications of machine learning to problems involving personalization, and teaches classes on personalized recommendation. He likes bicycling and baroque keyboard.
Anna Huang is an Associate Professor at MIT, with a shared position in Music and Theater Arts (MTA) and Electrical Engineering and Computer Science (EECS). She joined MIT last Fall to help start a new graduate program in Music Technology and Computation. This year, she is the Rieman and Baketel Fellow for Music at the Harvard Radcliffe Institute. For the past decade, she has been part of the Magenta team in Google Brain and then Google DeepMind, spearheading efforts in generative modeling, reinforcement learning, and human-computer interaction. In 2017, she created Music Transformer, the first successful adaptation of the transformer architecture to music. She is also the creator of the machine learning model Coconet that powered Google Bach Doodle, which in two days harmonized 55 million melodies from users around the world. From 2020-2022, she was a judge and organizer for the AI Song Contest. Now at MIT, she directs the Human-AI Resonance Lab (HAI-Res, pronounced Hi-Res) at CSAIL (Computer Science and Artificial Intelligence Laboratory).
🤩 Organizers
Hao-Wen (Herman) Dong is an Assistant Professor in the Department of Performing Arts Technology at the University of Michigan. Herman’s research aims to augment human creativity with machine learning. He develops human-centered generative AI technology that can be integrated into the professional creative workflow, with a focus on music, audio and video content creation. His long-term goal is to lower the barrier of entry for content creation and democratize professional content creation for everyone. Herman received his PhD degree in Computer Science from University of California San Diego, where he worked with Julian McAuley and Taylor Berg-Kirkpatrick. His research has been recognized by the UCSD CSE Doctoral Award for Excellence in Research, KAUST Rising Stars in AI, UChicago and UCSD Rising Stars in Data Science, ICASSP Rising Stars in Signal Processing, and UCSD GPSA Interdisciplinary Research Award.
Zachary Novack is a PhD student in the Computer Science and Engineering department at the University of California San Diego, advised by Dr. Julian McAuley and Dr. Taylor Berg-Kirkpatrick. His research focuses on controllable and efficient music/audio generation, as well as audio reasoning in LLMs. His long-term goal is to design bespoke creative tools for musicians and everyday users alike with adaptive control and real-time interaction, collaborating with top industry labs such as Adobe Research, Stability AI, and Sony AI. Zachary’s work has been recognized at numerous top-tier AI conferences, including DITTO (ICML 2024 Oral), Presto! (ICLR 2025 Spotlight), and CoLLAP (ICASSP 2025 Oral). Outside of academia, Zachary is active within the Southern California marching arts community, working as an educator for the 11-time World Class finalist percussion ensemble POW Percussion.
Yung-Hsiang Lu is a Professor in the Elmore Family School of Electrical and Computer Engineering at Purdue University. He is a Fellow of the IEEE and a Distinguished Scientist of the ACM. Yung-Hsiang has published papers on computer vision and machine learning in venues such as AI Magazine, Nature Machine Intelligence, and Computer. He is one of the editors of the book “Low-Power Computer Vision: Improve the Efficiency of Artificial Intelligence” (ISBN 9780367744700, Chapman & Hall, 2022).
Kristen Yeon-Ji Yun is a Clinical Associate Professor in the Department of Music at the Patti and Rusty Rueff School of Design, Art, and Performance at Purdue University. She is the Principal Investigator of the research project “Artificial Intelligence Technology for Future Music Performers” (US National Science Foundation, IIS 2326198). Kristen is an active soloist, chamber musician, musical scholar, and clinician. She has toured many countries, including Malaysia, Thailand, Germany, Mexico, Japan, China, Hong Kong, Spain, France, Italy, Taiwan, and South Korea, giving a series of successful concerts and master classes.
Benjamin Shiue-Hal Chou is a PhD student in Electrical and Computer Engineering at Purdue University, advised by Dr. Yung-Hsiang Lu. His research focuses on music performance error detection and the design of multimodal architectures. He is the lead author of Detecting Music Performance Errors with Transformers (AAAI 2025) and a co-author of Token Turing Machines are Efficient Vision Models (WACV 2025). Benjamin is the graduate mentor for the Purdue AIM (AI for Musicians) group and has helped organize the Artificial Intelligence for Music workshop at both AAAI 2025 and ICME 2025. He is currently interning at Reality Defender, where he works on audio deepfake detection.
🔍 Program Committee
(In alphabetical order)
Erfun Ackley, Julia Barnett, Luca Bindini, Kaj Bostrom, Brandon James Carone, Yunkee Chae, Ke Chen, Xi Chen, Manuel Cherep, Joann Ching, Eunjin Choi, Woosung Choi, Benjamin Shiue-Hal Chou, Annie Chu, Charis Cochran, Frank Cwitkowitz, SeungHeon Doh, Chris Donahue, Hao-Wen Dong, Shlomo Dubnov, Victoria Ebert, Hugo Flores García, Ross Greer, Timothy Greer, Jiarui Hai, Wen-Yi Hsiao, Jiawen Huang, Jingyue Huang, Manh Pham Hung, Yun-Ning Hung, Tatsuro Inaba, Purvish Jajal, Dasaem Jeong, Molly Jones, Kexin Phyllis Ju, Tornike Karchkhadze, Haven Kim, Jinju Kim, Shinae Kim, Soo Yong Kim, Yewon Kim, Yonghyun Kim, Junyoung Koh, Zhifeng Kong, Junghyun Koo, Luca A Lanzendörfer, Jongpil Lee, Junwon Lee, Wei-Jaw Lee, Wo Jae Lee, Bochen Li, Xueyan Li, Duoduo Liao, Jia-Wei Liao, Brian Lindgren, Jeng-Yue Liu, Phillip Long, Yung-Hsiang Lu, Yin-Jyun Luo, Mingchen Ma, Akira Maezawa, Ilaria Manco, Jiawen Mao, Atharva Mehta, Giovana Morais, Oriol Nieto, Stephen Ni-Hahn, Zachary Novack, Patrick O’Reilly, Seungryeol Paik, Ting-Yu Pan, Eleonora Ristori, Iran R Roman, Sherry Ruan, Koichi Saito, Rebecca Salganik, Sridharan Sankaran, Jiatong Shi, Nikhil Singh, Christian J. Steinmetz, Li Su, Chih-Pin Tan, Jiaye Tan, John Thickstun, James Townsend, Fang-Duo Tsai, Teng Tu, Ziyu Wang, Christopher W. White, Shih-Lun Wu, Yusong Wu, Weihan Xu, Xin Xu, Yujia Yan, Chao Peter Yang, Guang Yang, Qihui Yang, Yen-Tung Yeh, Jayeon Yi, Chin-Yun Yu, Kristen Yeon-Ji Yun, Yongyi Zang, Fan Zhang, Wenxin Zhang, Yixiao Zhang, You Zhang, Jingwei Zhao, Ge Zhu
🤝 Sponsors
Gold Sponsors
Silver Sponsors
📜 Accepted Papers & Demos
Acceptance rate: 68% (73/108) | Paper: 69% (59/85) | Demo: 61% (14/23)
Accepted Papers
- MGE-LDM: Joint Latent Diffusion for Simultaneous Music Generation and Source Extraction
  Yunkee Chae, Kyogu Lee
- Soundtrack Retrieval for Film Production
  Bill Wang, Haven Kim, Leduo Chen, Minje Kim, Julian McAuley
- MIDI-LLM: Adapting Large Language Models for Text-to-MIDI Music Generation
  Shih-Lun Wu, Yoon Kim, Cheng-Zhi Anna Huang
- Advancing Multi-Instrument Music Transcription: Results from the 2025 AMT Challenge
  Ojas Chaturvedi, Kayshav Bhardwaj, Tanay Gondil, Benjamin Shiue-Hal Chou, Yujia Yan, Kristen Yeon-Ji Yun, Yung-Hsiang Lu, Sungkyun Chang
- Semitone-Aware Fourier Encoding: A Music-Structured Approach to Audio-Text Alignment
  Chengze Du, JinYang Zhang, Wenxin Zhang
  ▶️Demo
- Zero-shot Geometry-Aware Diffusion Guidance for Music Restoration
  Jia-Wei Liao, Pin-Chi Pan, Li-Xuan Peng, Sheng-Ping Yang, Yen-Tung Yeh, Cheng-Fu Chou, Yi-Hsuan Yang
- AIBA: Attention-based Instrument Band Alignment for Text-to-Audio Diffusion
  Junyoung Koh, Soo Yong Kim, Gyu Hyeong Choi, Yongwon Choi
- My Music My Choice: Adversarial Protection Against Vocal Cloning in Songs
  Ilke Demir, Gerald Pena Vargas, Alicia Unterreiner, David Ponce, Umur A. Ciftci
- PANDORA: Diffusion Policy Learning for Dexterous Robotics Piano Playing with a Train-only LLM Expressiveness Reward
  Yanjia Huang, Renjie Li, Zhengzhong Tu
  ▶️Demo
- Rhythmic Stability and Synchronization in Multi-Track Music Generation
  Anonymous
  ▶️Demo
- Audio-to-Audio Schrodinger Bridges
  Kevin J. Shih, Zhifeng Kong, Weili Nie, Arash Vahdat, Sang-gil Lee, Joao Felipe Santos, Ante Jukić, Rafael Valle, Bryan Catanzaro
  ▶️Demo
- Multimodal Music Tokenization with Residual Quantization for Generative Retrieval
  Wo Jae Lee, Emanuele Coviello, Rifat Joyee, Sudev Mukherjee
- AMBISONIC-DML: Higher-Order Ambisonic Music Dataset for Spatial AI Generation
  Seungryeol Paik, Kyogu Lee
- Video-to-Music Generation for Film Production: A Dataset and Framework
  Haven Kim, Leduo Chen, Bill Wang, Hao-Wen Dong, Julian McAuley
- Segment-Factorized Full-Song Generation on Symbolic Piano Music
  Ping-Yi Chen, Chih-Pin Tan, Yi-Hsuan Yang
- When Creative Machines Learn from Each Other
  Haven Kim, Yusong Wu, Taylor Berg-Kirkpatrick, Julian McAuley
- FlashFoley: Fast Interactive Sketch2Audio Generation
  Zachary Novack, Koichi Saito, Zhi Zhong, Takashi Shibuya, Shuyang Cui, Julian McAuley, Taylor Berg-Kirkpatrick, Christian Simon, Shusuke Takahashi, Yuki Mitsufuji
- Generative Multi-modal Feedback for Singing Voice Synthesis Evaluation
  Xueyan Li, Yuxin Wang, Mengjie Jiang, Qingzi Zhu, Jing Zhang, Zoey Kim, Yazhe Niu
- Bias beyond Borders: Global Inequalities in AI-Generated Music
  Ahmet Solak, Florian Grötschla, Luca A Lanzendörfer, Roger Wattenhofer
- Enhancing Text-to-Music Generation through Retrieval-Augmented Prompt Rewrite
  Meiying Ding, Brian McFee, Chenkai Hu, Sunny Yang, Juhua Huang
- The Ghost in the Keys: A Disklavier Demo for Human-AI Musical Co-Creativity
  Louis Bradshaw, Alexander Spangher, Stella Biderman, Simon Colton
  ▶️Demo
- Persian Musical Instruments Classification Using Polyphonic Data Augmentation
  Diba Hadi Esfangereh, Mohammad Hossein Sameti, Sepehr Harfi Moridani, Leili Javidpour, Mahdieh Soleymani Baghshah
  ▶️Demo
- Discovering and Steering Interpretable Concepts in Large Generative Music Models
  Nikhil Singh, Manuel Cherep, Patricia Maes
- Linear RNNs for autoregressive generation of long music samples
  Konrad Szewczyk, Daniel Gallo Fernández, James Townsend
- Music to Video Matching Based on Beats and Tempo
  Aleksandr Mikheev, Ilya Makarov
- Multi-bit Audio Watermarking for Music
  Luca A Lanzendörfer, Kyle Fearne, Florian Grötschla, Roger Wattenhofer
- LEGATO: Large-scale End-to-end Generalizable Approach to Typeset OMR
  Guang Yang, Victoria Ebert, Nazif Can Tamer, Luiza Amador Pozzobon, Noah A. Smith
- MuCPT: Music-related Natural Language Model Continued Pretraining
  Kai Tian, Yirong Mao, Wendong Bi, Hanjie Wang, Que Wenhui
- Composer Vector: Style-steering Symbolic Music Generation in a Latent Space
  Xunyi Jiang, Xin Xu
- Perceptually Aligning Representations of Music via Noise-Augmented Autoencoders
  Mathias Rose Bjare, Giorgia Cantisani, Marco Pasini, Stefan Lattner, Gerhard Widmer
- SepACap: Source Separation for A Cappella Music
  Luca A Lanzendörfer, Constantin Pinkl, Florian Grötschla, Roger Wattenhofer
- Ethics Statements in AI Music Papers: The Effective and the Ineffective
  Julia Barnett, Patrick O’Reilly, Jason Smith, Annie Chu, Bryan Pardo
- Beyond Collaborative Filtering: Using Decoders for Personalized Music Recommendation
  Timothy Greer, Nicholas Capel, Emanuele Coviello, Amina Shabbeer
  ▶️Demo
- Do Joint Language-Audio Embeddings Encode Perceptual Timbre Semantics?
  Qixin Deng, Bryan Pardo, Thrasyvoulos N Pappas
- CLAM: Safeguarding Authenticity and Addressing Implications for the Music Industry
  Arnesh Batra, Krish Thukral, Naman Batra, Dev Sharma, Ruhani Bhatia, Aditya Gautam
  ▶️Demo
- MusicSem: A Semantically Rich Language-Audio Dataset of Organic Musical Discourse
  Rebecca Salganik, Teng Tu, Fei-Yueh Chen, Xiaohao Liu, Kaifeng Lu, Ethan Luvisia, Zhiyao Duan, Guillaume Salha-Galvan, Anson Kahng, Yunshan Ma, Jian Kang
- Chord-conditioned Melody and Bass Generation
  Alexandra C Salem, Mohammad Shokri, Johanna Devaney
- Why Do Music Models Plagiarize? A Motif-Centric Perspective
  Tatsuro Inaba, Kentaro Inui
- StylePitcher: Generating Style-Following and Expressive Pitch Curves for Versatile Singing Tasks
  Jingyue Huang, Qihui Yang, Fei-Yueh Chen, Julian McAuley, Randal Leistikow, Yongyi Zang
- Embedding Alignment in Code Generation for Audio
  Sam Kouteili, Hiren Madhu, George Typaldos, Mark Paul Santolucito
- Using a Joint-Embedding Predictive Architecture for Symbolic Music Understanding
  Rafik Hachana, Bader Rasheed
- Robust Neural Audio Fingerprinting using Music Foundation Models
  Shubhr Singh, Kiran Bhat, Benjamin Resnick, Xavier Riley, John Thickstun, Walter De Brouwer
- DAWZY: A New Addition to AI-powered “Human in the Loop” Music Co-creation
  Aaron C Elkins, Sanchit Singh, Adrian Kieback, Sawyer Blankenship, Uyiosa Philip Amadasun, Aman Chadha
  ▶️Demo
- A Loopy Framework and Tool for Real-time Human-AI Music Collaboration
  Sageev Oore, Finlay Miller, Chandramouli Shama Sastry, Sri Harsha Dumpala, Marvin F. da Silva, Daniel Oore, Scott C. Lowe
- MusPyExpress: Extending MusPy with Enhanced Expression Text Support
  Phillip Long, Hao-Wen Dong, Julian McAuley, Zachary Novack
- Adapting Speech Language Model to Singing Voice Synthesis
  Yiwen Zhao, Jiatong Shi, Jinchuan Tian, Yuxun Tang, Jiarui Hai, Jionghao Han, Shinji Watanabe
- The Name-Free Gap: Policy-Aware Stylistic Control in Music Generation
  Ashwin Nagarajan, Hao-Wen Dong
- BOSSA: Learning Music Style Through Cross-Modal Bootstrapping
  Jingwei Zhao, Ziyu Wang, Gus Xia, Ye Wang
  ▶️Demo
- Who Gets Heard? Rethinking Fairness in AI for Music Systems
  Atharva Mehta, Shivam Chauhan, Megha Sharma, Gus Xia, Kaustuv Kanti Ganguli, Nishanth Chandran, Zeerak Talat, Monojit Choudhury
- Membership and Dataset Inference Attacks on Large Audio Generative Models
  Jakub Proboszcz, Paweł Kochański, Karol Korszun, Katarzyna Stankiewicz, Donato Crisostomi, Giorgio Strano, Emanuele Rodolà, Kamil Deja, Jan Dubiński
- From Generation to Attribution: Music AI Agent Architectures for the Post-Streaming Era
  Wonil Kim, Hyeongseok Wi, Seungsoon Park, Taejun Kim, Sangeun Keum, Keunhyoung Kim, Taewan Kim, Jongmin Jung, Taehyoung Kim, Gaetan Guerrero, Mael Le Goff, Julie Po, Dongjoo Moon, Juhan Nam, Jongpil Lee
  ▶️Demo
- Asura’s Harp: Direct Latent Control of Neural Sound
  Kaj Bostrom
- TalkPlay-Tools: Conversational Music Recommendation with LLM Tool Calling
  SeungHeon Doh, Keunwoo Choi, Juhan Nam
- No Encore: Unlearning as Opt-Out in Music Generation
  Jinju Kim, Taehan Kim, Abdul Waheed, Jong Hwan Ko, Rita Singh
- Generating Piano Music with Transformers: A Comparative Study of Scale, Data, and Metrics
  Jonathan Lehmkuhl, Ábel Ilyés-Kun, Nico Bremes, Cemhan Kaan Özaltan, Frederik Muthers, Jiayi Yuan
- BNMusic: Blending Environmental Noises into Personalized Music
  Chi Zuo, Martin B. Møller, Pablo Martínez-Nuevo, Huayang Huang, Yu Wu, Ye Zhu
  ▶️Demo
- Evaluating Multimodal Large Language Models on Core Music Perception Tasks
  Brandon James Carone, Iran R Roman, Pablo Ripollés
  ▶️Demo
- ACappellaSet: A Multilingual A Cappella Dataset for Source Separation and AI-assisted Rehearsal Tools
  Ting-Yu Pan, Kexin Phyllis Ju, Hao-Wen Dong
- Leveraging Diffusion Models For Predominant Instrument Recognition
  Charis Cochran, Yeongheon Lee, Youngmoo Kim
Accepted Demos
- Slimmable NAM: Neural Amp Models with adjustable runtime computational cost
  Steven Atkinson
  ▶️Demo
- Towards AI Rapper: Creating an Interactive Rap Battle Experience with Generative AI
  Nikita Kozodoi, Elizaveta Zinovyeva, Zainab Afolabi, Egor Krashenninikov
  ▶️Demo
- Enhancing Text-to-Music Generation through Retrieval-Augmented Prompt Rewrite Demo
  Meiying Ding, Brian McFee, Sunny Yang, Chenkai Hu, Juhua Huang
  ▶️Demo
- Prompt-Based Music Discovery: A Prototype Using Source Separation And LLMs
  Vansh Chugh
  ▶️Demo
- LyricLens: An Interactive System for Multi-Label Music Content Rating
  Kai-Yu Lu, Malhar Sham Ghogare, Zihan Su, Shanu Sushmita
  ▶️Demo
- AI Harmonica: A Smart Electronic Harmonica for Music Learning and Co-Creativity
  Sherry Ruan
  ▶️Demo
- HARP 3.0: Generalizing I/O and API Support for Machine Learning in Digital Audio Workstations
  Frank Cwitkowitz, Christodoulos Benetatos, Qixin Deng, Huiran Yu, Nathan Pruyne, Patrick O’Reilly, Hugo Flores García, Zhiyao Duan, Bryan Pardo
  ▶️Demo
- E-Motion Baton: Human-in-the-Loop Music Generation via Expression and Gesture
  Mingchen Ma, Stephen Ni-Hahn, Simon Mak, Yue Jiang, Cynthia Rudin
  ▶️Demo
- DAWZY: Human-in-the-Loop Natural-Language Control of REAPER
  Aaron C Elkins, Sanchit Singh, Sawyer Blankenship, Adrian Kieback, Uyiosa Philip Amadasun, Aman Chadha
  ▶️Demo
- Effortless: AI-Augmented Music Composition and Live Performance in Virtual and Mixed Reality
  Strong Bear
  ▶️Demo
- EVxRAVE: Incorporating Neural Synthesis in an Augmented String Instrument Platform
  Brian Lindgren
  ▶️Demo
- Robust Personalized Human-AI Collaboration with SmartLooper
  Sageev Oore, Finlay Miller, Chandramouli Shama Sastry, Sri Harsha Dumpala, Marvin F. da Silva, Daniel Oore, Scott C. Lowe
  ▶️Demo
- Demonstrating Singing accompaniment capabilities for MuseControlLite
  Fang-Duo Tsai, Yi-Hsuan Yang
- Mozart AI: Browser-Based AI Music Co-Production
  Pascual Merita, Petr Ivan, Immanuel Rajadurai, Sundar Arvind, Arjun Khanna
  ▶️Demo
📢 Call for Papers & Demos
We invite contributions from the community on relevant topics of AI and music, broadly defined. This workshop is non-archival. Accepted papers will be posted on the workshop website but will not be published or archived. We welcome work that is under review or to be submitted to other venues.
Submission portal: openreview.net/group?id=NeurIPS.cc/2025/Workshop/AI4Music
Call for Papers
We call for original 4-page papers (excluding references) on relevant topics of AI and music, broadly defined. We encourage and welcome papers discussing initial concepts, early results, and promising directions. All accepted papers will be presented in the poster and demo session. A small number of papers will be selected for 10-min spotlight presentations. The review process will be double-blind.
Call for Demos
We call for demos of novel AI music tools and artistic work. Each demo submission must be accompanied by a 2-page extended abstract (excluding references). We encourage authors to submit an optional short video recording (no longer than 10 min) as supplementary material. The selected demos will each be assigned a poster board in the poster and demo session. The goal of the demo session is to accommodate the various forms that novel AI music innovations may take, and we will therefore adopt a single-blind review process.
Important Dates
The following due dates apply to both paper and demo submissions:
- Submission Deadline: August 29, AoE (extended from August 22)
- Author Notification Date: September 22, AoE
- Camera-ready Due: November 7, AoE
Topics of Interest
Topics of interest include, but are not limited to:
- Applications of AI in music
- Music theory & musicology
- Optical music recognition
- Music transcription
- Music generation
- Sound design & soundtrack generation
- Singing voice synthesis
- Lyric generation and translation
- Musical instrument design
- Robotic musicianship
- Human-AI music co-creativity
- Music production
- Music performance modeling
- Music information retrieval
- Music recommender systems
- Music education
- Music therapy
- Impacts & implications of AI in music
- Impacts on music industry
- Impacts on the musician community
- Impacts on music education
- Implications for future musicians
- Ethical, legal & societal implications of AI music
- Challenges in commercializing AI music tools
- Emerging opportunities of AI music
Formatting Guide
Please format your paper using the NeurIPS 2025 LaTeX template.
- Set the workshop title on line 40: \workshoptitle{AI for Music}
- For submission, set the options as follows:
  - Papers: \usepackage[dblblindworkshop]{neurips_2025}
  - Demos: \usepackage[sglblindworkshop]{neurips_2025}
- For camera-ready, set the options as follows:
  - Papers: \usepackage[dblblindworkshop,final]{neurips_2025}
  - Demos: \usepackage[sglblindworkshop,final]{neurips_2025}
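For reference, a minimal paper-submission preamble might look like the sketch below. This assumes the standard neurips_2025.sty shipped with the NeurIPS 2025 template; beyond the two commands listed above, the document skeleton, placeholder title, and author block are illustrative only.

    \documentclass{article}

    % Double-blind paper submission; use sglblindworkshop for demos,
    % and add the final option for the camera-ready version.
    \usepackage[dblblindworkshop]{neurips_2025}

    % Workshop title (line 40 of the template).
    \workshoptitle{AI for Music}

    \title{Your Paper Title}
    \author{Anonymous Author(s)}  % placeholder author block

    \begin{document}
    \maketitle
    % Up to 4 pages of content, excluding references.
    \end{document}

For camera-ready, the same file would simply switch the package options to [dblblindworkshop,final] (papers) or [sglblindworkshop,final] (demos).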
Note that you do NOT need to include the NeurIPS paper checklist. You may include technical appendices (after the references in the main PDF file) and supplementary material (as a ZIP file up to 100 MB). However, it is up to the reviewer to determine if they want to read them.
For demo submissions, you may provide a short video recording (10 minutes maximum) to support your work. We recommend providing a YouTube link (or equivalent) to the video recording. Please make sure the video's visibility is set to public or unlisted on YouTube. Less preferably, you may upload the video recording as an MP4 file (100 MB maximum).