I am also an organiser of FridayTalks@Tübingen, a bi-weekly AI research seminar that provides a venue for the Tübingen AI research community to exchange ideas.
Previously, I did an MRes in Computational Statistics and Machine Learning at UCL with David Barber and an MSc in Computer Science at the University of Oxford with Yarin Gal. Prior to that, I studied computer science at the University of Manchester for my undergraduate degree. During that time, I also completed a one-year industrial placement at Morgan Stanley, where I worked with a global team to build tools for managing and monitoring a cloud platform with servers located around the world.
Just like many computer scientists, I have faith in computing. Although computation will not solve all problems for you, I believe it will certainly take you closer to the answers.
I am honoured to have been invited to give the scholar scientific talk at this year's IMPRS-IS Interview Symposium! The topic is Verbalized Machine Learning; a similar talk can be found here.
Large Language Models Are Zero-Shot Problem Solvers—Just Like Modern Computers
Tim Z. Xiao, Weiyang Liu, and Robert Bamler
In Harvard Data Science Review, 2025
@article{xiao2025large,
  title   = {Large Language Models Are Zero-Shot Problem Solvers—Just Like Modern Computers},
  author  = {Xiao, Tim Z. and Liu, Weiyang and Bamler, Robert},
  journal = {Harvard Data Science Review},
  volume  = {7},
  number  = {3},
  year    = {2025},
  video   = {https://youtu.be/ySHuahiazD4},
}
Flipping Against All Odds: Reducing LLM Coin Flip Bias via Verbalized Rejection Sampling
Tim Z. Xiao, Johannes Zenn, Zhen Liu, Weiyang Liu, Robert Bamler, and Bernhard Schölkopf
@article{xiao2025flipping,
  title   = {Flipping Against All Odds: Reducing LLM Coin Flip Bias via Verbalized Rejection Sampling},
  author  = {Xiao, Tim Z. and Zenn, Johannes and Liu, Zhen and Liu, Weiyang and Bamler, Robert and Schölkopf, Bernhard},
  journal = {arXiv preprint arXiv:2506.09998},
  year    = {2025},
}
Verbalized Machine Learning: Revisiting Machine Learning with Language Models
Tim Z. Xiao, Robert Bamler, Bernhard Schölkopf, and Weiyang Liu
In Transactions on Machine Learning Research (TMLR), 2025
ICML 2024 Workshop on In-context Learning
ICML 2024 Workshop on LLMs and Cognition
@inproceedings{xiao2025verbalized,
  title     = {Verbalized Machine Learning: Revisiting Machine Learning with Language Models},
  author    = {Xiao, Tim Z. and Bamler, Robert and Schölkopf, Bernhard and Liu, Weiyang},
  booktitle = {Transactions on Machine Learning Research (TMLR)},
  video     = {https://www.youtube.com/watch?v=LCl_np5oPWA},
  year      = {2025},
}
A Note on Generalization in Variational Autoencoders: How Effective Is Synthetic Data and Overparameterization?
Tim Z. Xiao*, Johannes Zenn*, and Robert Bamler
In Transactions on Machine Learning Research (TMLR), 2024
@inproceedings{xiao2024a,
  title     = {A Note on Generalization in Variational Autoencoders: How Effective Is Synthetic Data and Overparameterization?},
  author    = {Xiao, Tim Z. and Zenn, Johannes and Bamler, Robert},
  booktitle = {Transactions on Machine Learning Research (TMLR)},
  year      = {2024},
}
A Compact Representation for Bayesian Neural Networks By Removing Permutation Symmetry
Tim Z. Xiao, Weiyang Liu, and Robert Bamler
arXiv preprint arXiv:2401.00611, 2023
NeurIPS 2023 Workshop on Unifying Representations in Neural Models
@article{xiao2023compact,
  title   = {A Compact Representation for Bayesian Neural Networks By Removing Permutation Symmetry},
  author  = {Xiao, Tim Z. and Liu, Weiyang and Bamler, Robert},
  journal = {arXiv preprint arXiv:2401.00611},
  year    = {2023},
}
The SVHN Dataset Is Deceptive for Probabilistic Generative Models Due to a Distribution Mismatch
Tim Z. Xiao, Johannes Zenn, and Robert Bamler
arXiv preprint arXiv:2312.02168, 2023
@article{xiao2023the,
  title   = {The SVHN Dataset Is Deceptive for Probabilistic Generative Models Due to a Distribution Mismatch},
  author  = {Xiao, Tim Z. and Zenn, Johannes and Bamler, Robert},
  journal = {arXiv preprint arXiv:2312.02168},
  year    = {2023},
}
Trading Information between Latents in Hierarchical Variational Autoencoders
Tim Z. Xiao and Robert Bamler
In International Conference on Learning Representations (ICLR), 2023
@inproceedings{xiao2023trading,
  title     = {Trading Information between Latents in Hierarchical Variational Autoencoders},
  author    = {Xiao, Tim Z. and Bamler, Robert},
  booktitle = {International Conference on Learning Representations (ICLR)},
  year      = {2023},
}
Wat zei je? Detecting Out-of-Distribution Translations with Variational Transformers
Tim Z. Xiao, Aidan N. Gomez, and Yarin Gal
arXiv preprint arXiv:2006.08344, 2020
Spotlight talk, NeurIPS 2019 Workshop on Bayesian Deep Learning
My master's project at Oxford. In this project, we detect out-of-training-distribution sentences in Neural Machine Translation using the Bayesian Deep Learning equivalent of Transformer models. For this, we develop a new measure of uncertainty designed specifically for long sequences of discrete random variables, i.e., the words in the output sentence. Our new measure solves a major intractability in the naive application of existing approaches to long sentences. We use our measure on a Transformer model trained with dropout approximate inference. On German-English translation using WMT13 and Europarl, we show that, with dropout uncertainty, our measure is able to identify when Dutch source sentences (which use the same word types as German) are given to the model instead of German. An illustrative sketch of the general MC-dropout idea follows the BibTeX entry below.
@article{xiao2020wat,
  title   = {Wat zei je? Detecting Out-of-Distribution Translations with Variational Transformers},
  author  = {Xiao, Tim Z. and Gomez, Aidan N. and Gal, Yarin},
  journal = {arXiv preprint arXiv:2006.08344},
  year    = {2020},
}
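For illustration only, here is a minimal PyTorch-style sketch of the MC-dropout idea this line of work builds on: keep dropout active at test time, run several stochastic forward passes, and average a length-normalised token log-likelihood into a single sequence score. The function name and the model(src, tgt) interface are placeholders, and this score is not the exact uncertainty measure developed in the paper.

import torch

def mc_dropout_sequence_score(model, src_tokens, tgt_tokens, n_samples=8):
    # Length-normalised target log-likelihood averaged over MC-dropout samples;
    # lower values suggest the input may be out-of-distribution.
    # Assumes (placeholder) that model(src, tgt) returns per-token logits of shape (T, V).
    model.train()  # keep dropout layers stochastic at test time
    scores = []
    with torch.no_grad():
        for _ in range(n_samples):
            logits = model(src_tokens, tgt_tokens)                              # (T, V)
            log_probs = torch.log_softmax(logits, dim=-1)
            token_ll = log_probs.gather(-1, tgt_tokens.unsqueeze(-1)).squeeze(-1)  # (T,)
            scores.append(token_ll.mean())                                      # normalise by length
    return torch.stack(scores).mean().item()

Averaging over stochastic passes and normalising by sentence length keeps scores comparable across translations of different lengths; the paper develops a more principled uncertainty measure tailored to long sequences of discrete variables.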
If you want to know more about me, you can always buy me a coffee ;)