ICBINB: Crack open the research process
News
[Dec/2025] The next ICBINB workshop takes place at ICLR 2026! This year's theme: Where large language models need to improve.
[Dec/2024] We are organizing the next ICBINB workshop at ICLR 2025! We will take a deep dive into the pitfalls and challenges of applied deep learning.
[Jun/2023] The ICBINB workshop will be back at NeurIPS 2023! This time with a focus on failure modes of foundation models.
[Jan/2023] The talks of our NeurIPS 2022 workshop are now online. Watch them here!
What is ICBINB?
The ICBINB initiative is a movement within the ML community for well-executed, meaningful
research beyond bold numbers. The goals of the initiative are to crack open the research process, to re-value
unexpected negative results, to question well-established default practices, and to advance the understanding,
elegance, and diversity of the field, as opposed to focusing solely on outcomes and rewarding only approaches
that beat previous works on a given benchmark.
Objectives
The three pillars of the initiative are:
- Shed light on the research process: share stories about how research is really done behind the scenes, encouraging transparency and reproducibility.
- Provide a platform for high-quality but under-valued ML research: showcase, disseminate, and support valuable work that is currently under-represented given publication incentives. This includes negative results/failed attempts, simple approaches that work well in practice, and applied work.
- Develop an inclusive and welcoming community of researchers and practitioners who share the same values, supporting and helping each other to conduct deep, high-impact research. Encourage meta-dialog on how we should be conducting top-tier ML research.
Who we are
Here is our wonderful team of volunteers! None of this would be possible without their help.
Team members
Aaron Schein
Columbia University
Arno Blaas
Apple
Andreas Kriegler
Technical University of Vienna
David Rohde
Criteo AI Lab
Fan Feng
City University of Hong Kong
Francisco J.R. Ruiz
DeepMind
Ian Mason
Fujitsu Research
Javier Antorán
University of Cambridge
Jessica Forde
Brown University
Luca Zappella
Apple
Kelly Buchanan
Columbia University
Manuel Haussmann
University of Southern Denmark
Melanie F. Pradier
Microsoft Research
Nicola Branchini
University of Edinburgh
Nikolai Rozanov
Imperial College London
Priya D'Costa
SAP
Rui Yang
Cornell University
Sahra Ghalebikesabi
University of Oxford
Sonali Parbhoo
Imperial College London
Stephanie Hyland
Microsoft Research
Tobias Uelwer
Microsoft
Vincent Fortuin
University of Cambridge
Wenbin Zhang
Carnegie Mellon University
Yubin Xie
Cornell University/MSKCC
Zhaoying Pan
Purdue University
Advisors
David Blei
Columbia University
Max Welling
University of Amsterdam & MSR
Robert Williamson
University of Tübingen
Tamara Broderick
MIT
Former advisors
Hanna Wallach
Microsoft Research
Isabel Valera
Saarland University
What we believe
- Process over outcome
- Deep understanding of experimental results
- Intellectual and methodological transparency
- Depth over breadth
- Collaboration and peer support