Hi there! 👋 I am a Ph.D. candidate at the Human-Computer Interaction Institute in the School of Computer Science at
Carnegie Mellon University, advised by Professor Nik Martelaro. My research is supported by the
Toyota Research Institute. Previously, I interned at
Adobe and
Runway, and received my B.Eng. in Computer Science from
HKUST, advised by Professors Xiaojuan Ma and Kwang-Ting Cheng. Outside of research, I enjoy filming travel videos.
Research Interests
My research vision is to make AI a co-creative partner for designers. I build interactive tools that let designers steer AI models through intuitive, controllable representations: sketching with AI-generated scaffolds, assembling AI models like puzzle pieces, and exploring maps of latent space. One thread of my research focuses on AI-assisted video creation, including adding sound effects, identifying highlight moments, and organizing large-scale footage. I bring together my background in interaction design and computer vision to blend AI into designers' workflows.
Publications
Inkspire: Supporting Design Exploration with Generative AI through Analogical Sketching
David Chuan-En Lin, Hyeonsu Kang, Nikolas Martelaro, Aniket Kittur, Yan-Ying Chen, Matthew Hong
ACM Conference on Human Factors in Computing Systems (CHI), 2025
We developed a sketching tool that allows designers to sketch product designs with analogical inspirations and AI-generated shadows beneath the canvas.
BioSpark: Beyond Analogical Inspiration to LLM-augmented Transfer
Hyeonsu Kang, David Chuan-En Lin, Yan-Ying Chen, Matthew Hong, Nikolas Martelaro, Aniket Kittur
ACM Conference on Human Factors in Computing Systems (CHI), 2025
We developed an interactive system that helps designers discover analogical inspirations from biology and transfer them to target design domains.
NoTeeline: Supporting Real-Time, Personalized Notetaking with LLM-Enhanced Micronotes
Faria Huq, Abdus Samee, David Chuan-En Lin, Xiaodi Alice Tang, Jeffrey Bigham
ACM Conference on Intelligent User Interfaces (IUI), 2025
We built an interactive notetaking tool that lets users jot down quick key points while watching educational videos, then automatically expands them into full notes.
Gen4Gen: Generative Data Pipeline for Generative Multi-Concept Composition
Chun-Hsiao Yeh, Ta-Ying Cheng, He-Yen Hsieh, Chuan-En Lin, Yi Ma, Andrew Markham, Niki Trigoni, H.T. Kung, Yubei Chen
British Machine Vision Conference (BMVC), 2025
We developed a pipeline and dataset for benchmarking multi-concept personalized text-to-image diffusion models.
Jigsaw: Supporting Designers to Prototype Multimodal Applications by Chaining AI Foundation Models
David Chuan-En Lin, Nikolas Martelaro
ACM Conference on Human Factors in Computing Systems (CHI), 2024
We developed a tool for chaining AI models across different capabilities and modalities by assembling them like puzzle pieces.
VideoMap: Supporting Video Editing Exploration, Brainstorming, and Prototyping in the Latent Space
David Chuan-En Lin, Fabian Caba Heilbron, Joon-Young Lee, Oliver Wang, Nikolas Martelaro
ACM Creativity and Cognition (C&C), 2024
NeurIPS Machine Learning for Creativity and Design, 2022
We developed a proof-of-concept video editing interface that operates on video frames projected onto a latent space.
Videogenic: Identifying Highlight Moments in Videos with Professional Photographs as a Prior
David Chuan-En Lin, Fabian Caba Heilbron, Joon-Young Lee, Oliver Wang, Nikolas Martelaro
ACM Creativity and Cognition (C&C), 2024
NeurIPS Machine Learning for Creativity and Design, 2022
We developed a system for detecting highlight moments in videos by using professional photographs as a prior.
Soundify: Matching Sound Effects to Video
David Chuan-En Lin, Anastasis Germanidis, Cristóbal Valenzuela, Yining Shi, Nikolas Martelaro
ACM Symposium on User Interface Software and Technology (UIST), 2023
NeurIPS Machine Learning for Creativity and Design, 2021
We developed a system to assist video editors in adding content-aware spatial sound effects to video.
ARchitect: Building Interactive Virtual Experiences from Physical Affordances by Bringing Human-in-the-Loop
Chuan-En Lin*, Ta-Ying Cheng*, Xiaojuan Ma (* = equal contribution)
ACM Conference on Human Factors in Computing Systems (CHI), 2020
We explored an asymmetric workflow of an AR builder and a VR player for creating VR experiences that incorporate real-world interaction affordances.
SeqDynamics: Visual Analytics for Evaluating Online Problem-solving Dynamics
Meng Xia, Min Xu, Chuan-En Lin, Ta-Ying Cheng, Huamin Qu, Xiaojuan Ma
Eurographics Conference on Visualization (EuroVis), 2020
We developed an interactive visual analytics system for instructors to evaluate problem-solving dynamics of student learners.
Learning to Film from Professional Human Motion Videos
Chong Huang, Chuan-En Lin, Zhenyu Yang, Yan Kong, Peng Chen, Xin Yang, Kwang-Ting Cheng
IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2019
We developed an automatic drone cinematography system by learning from cinematic drone videos captured by professionals.