Research

I am interested in the connection between Deep Learning and Probabilistic Modelling. My current research focuses on building more expressive generative models, improving their robustness, and developing better inference methods. My work has been applied to several fields, such as Neuroscience, Physics, and Psychiatry.
📣 The Andaluz.IA Call for Papers is now open! Share your work with the Andalusian AI community in beautiful Sevilla. Submission deadline: October 20, 2025.
Jul 15, 2025
Hi Vancouver! 🇨🇦 We are presenting our paper at ICML 2025! Come see our poster and chat with us at East Exhibition Hall, #3112, on Wednesday 16 Jul, 16:30–19:00! We will show you how to generate complex functions using efficient Transformer-based Hypernetworks and Latent Diffusion! [paper][poster]
Jun 04, 2025
I gave a talk about our ICML 2025 paper “Hyper-Transforming Latent Diffusion Models” at the Section for Cognitive Systems at DTU, Copenhagen, Denmark. Find the slides here.
Jun 01, 2025
Great news! Our paper “Hyper-Transforming Latent Diffusion Models” has been accepted at ICML 2025! We’ll see you in Vancouver, Canada! 🇨🇦 More info soon. [paper]
May 16, 2025
I am co-organizing the next Andaluz.IA forum, which will be held on-site on Friday, December 19, 2025, at the Higher Technical School of Engineering (ETSI), Universidad de Sevilla. This meeting gathers multidisciplinary researchers in and from Andalusia working on AI, Machine Learning, and Deep Learning, and showcases the community’s potential to become an AI hub in Southern Europe.
We introduce a novel generative framework for functions by integrating Implicit Neural Representations (INRs) and Transformer-based hypernetworks into latent variable models. Unlike prior approaches that rely on MLP-based hypernetworks with scalability limitations, our method employs a Transformer-based decoder to generate INR parameters from latent variables, addressing both representation capacity and computational efficiency. Our framework extends latent diffusion models (LDMs) to INR generation by replacing standard decoders with a Transformer-based hypernetwork, which can be trained either from scratch or via hyper-transforming—a strategy that fine-tunes only the decoder while freezing the pre-trained latent space. This enables efficient adaptation of existing generative models to INR-based representations without requiring full retraining. We validate our approach across multiple modalities, demonstrating improved scalability, expressiveness, and generalization over existing INR-based generative models. Our findings establish a unified and flexible framework for learning structured function representations.
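The core idea is easiest to see in code. Below is a minimal sketch, in PyTorch, of a Transformer-based hypernetwork decoder: a Transformer decoder attends over latent tokens and emits the weights of a small coordinate MLP (the INR), which is then evaluated at continuous coordinates. All names, layer sizes, and the SIREN-style activation are my own illustrative assumptions, not the exact implementation from the paper.

```python
import torch
import torch.nn as nn

class HyperTransformerDecoder(nn.Module):
    """Illustrative Transformer hypernetwork: maps latent tokens to the flattened
    weights of a small coordinate MLP (INR). Names/sizes are assumptions."""

    def __init__(self, latent_dim=64, d_model=256, n_layers=4,
                 inr_sizes=((2, 128), (128, 128), (128, 3))):
        super().__init__()
        self.inr_sizes = inr_sizes
        # One learned query token per INR weight matrix (plus its bias).
        self.queries = nn.Parameter(torch.randn(len(inr_sizes), d_model))
        self.latent_proj = nn.Linear(latent_dim, d_model)
        layer = nn.TransformerDecoderLayer(d_model, nhead=8, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers=n_layers)
        # One head per INR layer, producing (out * in + out) parameters.
        self.heads = nn.ModuleList(
            nn.Linear(d_model, o * i + o) for i, o in inr_sizes
        )

    def forward(self, z_tokens):
        # z_tokens: (batch, n_latent_tokens, latent_dim) from the LDM latent space.
        memory = self.latent_proj(z_tokens)
        q = self.queries.unsqueeze(0).expand(z_tokens.size(0), -1, -1)
        h = self.decoder(q, memory)                      # (batch, n_inr_layers, d_model)
        return [head(h[:, k]) for k, head in enumerate(self.heads)]

def render_inr(params, coords, inr_sizes=((2, 128), (128, 128), (128, 3))):
    # Evaluate the generated INR at continuous coordinates (batch, n_pts, 2).
    h = coords
    for k, (i, o) in enumerate(inr_sizes):
        w, b = params[k][:, : o * i], params[k][:, o * i :]
        h = torch.einsum('boi,bpi->bpo', w.view(-1, o, i), h) + b.unsqueeze(1)
        if k < len(inr_sizes) - 1:
            h = torch.sin(30.0 * h)                      # SIREN-style activation
    return h
```

Under the hyper-transforming strategy described above, only a decoder of this kind would be trained, while the pre-trained LDM latent space stays frozen.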
@inproceedings{peis2025hyper,
  title       = {Hyper-Transforming Latent Diffusion Models},
  author      = {Peis, Ignacio and Koyuncu, Batuhan and Valera, Isabel and Frellsen, Jes},
  booktitle   = {Proceedings of the 42nd International Conference on Machine Learning},
  pages       = {48714--48733},
  year        = {2025},
  editor      = {Singh, Aarti and Fazel, Maryam and Hsu, Daniel and Lacoste-Julien, Simon and Berkenkamp, Felix and Maharaj, Tegan and Wagstaff, Kiri and Zhu, Jerry},
  volume      = {267},
  series      = {Proceedings of Machine Learning Research},
  month       = {13--19 Jul},
  publisher   = {PMLR},
  url         = {https://proceedings.mlr.press/v267/peis25a.html},
  bibtex_show = {true},
  selected    = {true}
}
Recent approaches build on implicit neural representations (INRs) to propose generative models over function spaces. However, they are computationally intensive when dealing with inference tasks, such as missing data imputation, or cannot tackle them at all. In this work, we propose a novel deep generative model, named VAMoH. VAMoH combines the capability of modeling continuous functions using INRs with the inference capabilities of Variational Autoencoders (VAEs). In addition, VAMoH relies on a normalizing flow to define the prior, and on a mixture of hypernetworks to parametrize the data log-likelihood. This gives VAMoH high expressiveness and interpretability. Through experiments on a diverse range of data types, such as images, voxels, and climate data, we show that VAMoH can effectively learn rich distributions over continuous functions. Furthermore, it can perform inference-related tasks, such as conditional super-resolution generation and in-painting, as well as or better than previous approaches, while being less computationally demanding.
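To make the mixture-of-hypernetworks decoder concrete, here is a minimal PyTorch sketch in the spirit of VAMoH: K hypernetworks each map the latent code to the weights of a small one-hidden-layer INR, and a gating network mixes the per-point outputs. All names and sizes are my own assumptions, and the sketch averages the component means rather than defining the full per-point mixture likelihood used in the paper; the normalizing-flow prior is also omitted.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MixtureOfHyperGeneratorsSketch(nn.Module):
    """Illustrative VAMoH-style decoder: K hypernetworks generate INR weights
    from z, and a gate mixes their predictions at each coordinate."""

    def __init__(self, latent_dim=32, n_components=4, coord_dim=2,
                 hidden=64, out_dim=3):
        super().__init__()
        self.coord_dim, self.hidden, self.out_dim = coord_dim, hidden, out_dim
        n_params = coord_dim * hidden + hidden + hidden * out_dim + out_dim
        self.hypernets = nn.ModuleList(
            nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                          nn.Linear(256, n_params))
            for _ in range(n_components)
        )
        self.gate = nn.Linear(latent_dim, n_components)

    def _run_inr(self, theta, coords):
        # theta: (batch, n_params) flat INR weights; coords: (batch, n_pts, coord_dim)
        c, h, o = self.coord_dim, self.hidden, self.out_dim
        w1, b1, w2, b2 = torch.split(theta, [c * h, h, h * o, o], dim=-1)
        x = torch.einsum('bhc,bpc->bph', w1.view(-1, h, c), coords) + b1.unsqueeze(1)
        x = torch.relu(x)
        return torch.einsum('boh,bph->bpo', w2.view(-1, o, h), x) + b2.unsqueeze(1)

    def forward(self, z, coords):
        weights = F.softmax(self.gate(z), dim=-1)                 # (batch, K)
        outs = torch.stack([self._run_inr(net(z), coords)
                            for net in self.hypernets], dim=1)    # (batch, K, n_pts, out)
        # Simplification: mix the component means (the model uses a mixture likelihood).
        return torch.einsum('bk,bkpo->bpo', weights, outs)
```

Because the decoder consumes arbitrary coordinates, tasks like super-resolution or in-painting amount to evaluating the generated INR at new or missing locations.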
@inproceedings{koyuncu2023variational,
  title       = {Variational Mixture of HyperGenerators for Learning Distributions Over Functions},
  author      = {Koyuncu, Batuhan and Sanchez-Martin, Pablo and Peis, Ignacio and Olmos, Pablo M. and Valera, Isabel},
  booktitle   = {Proceedings of the 40th International Conference on Machine Learning},
  year        = {2023},
  bibtex_show = {true},
  selected    = {true}
}
Variational Autoencoders (VAEs) have recently been highly successful at imputing and acquiring heterogeneous missing data. However, within this specific application domain, existing VAE methods are restricted by using only one layer of latent variables and strictly Gaussian posterior approximations. To address these limitations, we present HH-VAEM, a Hierarchical VAE model for mixed-type incomplete data that uses Hamiltonian Monte Carlo with automatic hyper-parameter tuning for improved approximate inference. Our experiments show that HH-VAEM outperforms existing baselines in the tasks of missing data imputation and supervised learning with missing features. Finally, we also present a sampling-based approach for efficiently computing the information gain when missing features are to be acquired with HH-VAEM. Our experiments show that this sampling-based approach is superior to alternatives based on Gaussian approximations.
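The inference idea is easy to illustrate: samples from the Gaussian encoder initialise a short Hamiltonian Monte Carlo chain targeting the true posterior, which relaxes the restriction to strictly Gaussian approximations. The sketch below is a generic HMC refinement step in PyTorch under my own naming assumptions (z0, log_joint, fixed step sizes); HH-VAEM's automatic hyper-parameter tuning and hierarchical latent layers are not shown.

```python
import torch

def hmc_refine(z0, log_joint, n_iters=5, leapfrog_steps=10, step_size=0.05):
    """Generic HMC refinement of encoder samples toward the posterior p(z | x).

    z0:        samples from the Gaussian encoder, shape (batch, dim)
    log_joint: callable mapping z of shape (batch, dim) to log p(x, z), shape (batch,)
    """
    def grad_log_joint(v):
        v = v.detach().requires_grad_(True)
        return torch.autograd.grad(log_joint(v).sum(), v)[0]

    z = z0.detach().clone()
    for _ in range(n_iters):
        p0 = torch.randn_like(z)                       # resample momentum
        z_new, p = z.clone(), p0.clone()
        # Leapfrog integration of the Hamiltonian dynamics.
        p = p + 0.5 * step_size * grad_log_joint(z_new)
        for _ in range(leapfrog_steps):
            z_new = z_new + step_size * p
            p = p + step_size * grad_log_joint(z_new)
        p = p - 0.5 * step_size * grad_log_joint(z_new)
        # Metropolis-Hastings correction keeps the chain targeting the posterior.
        with torch.no_grad():
            log_accept = (log_joint(z_new) - 0.5 * (p ** 2).sum(-1)) \
                       - (log_joint(z) - 0.5 * (p0 ** 2).sum(-1))
            accept = torch.rand_like(log_accept).log() < log_accept
            z = torch.where(accept.unsqueeze(-1), z_new, z)
    return z
```

In practice this would be called with the encoder's samples and a function computing log p(x, z) under the decoder and prior; the refined samples are then used for imputation or for the sampling-based information-gain estimates mentioned above.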
@inproceedings{peis2022missing,
  title       = {Missing Data Imputation and Acquisition with Deep Hierarchical Models and Hamiltonian Monte Carlo},
  author      = {Peis, Ignacio and Ma, Chao and Hern{\'a}ndez-Lobato, Jos{\'e} Miguel},
  booktitle   = {Advances in Neural Information Processing Systems 35},
  year        = {2022},
  bibtex_show = {true},
  selected    = {true}
}
# Do not hesitate to write me an email if you want to know more about my research!