DEEM brings together researchers and practitioners at the intersection of applied machine learning, data management, and systems research, with the goal of discussing the data management issues that arise in ML application scenarios. The DEEM workshop will be held in person on Friday, June 27th, in conjunction with SIGMOD/PODS 2025, in Berlin (InterContinental Hotel, Charlottenburg I/II).
The workshop solicits regular research papers (8 pages plus unlimited references) describing preliminary or completed research results, as well as short papers (up to 4 pages), such as reports on applications and tools, preliminary results, interesting use cases, problems, datasets, benchmarks, visionary ideas, and descriptions of system components and tools related to end-to-end ML pipelines. Submissions should follow the SIGMOD submission guidelines, i.e., use the sigconf template for the ACM proceedings format.
Follow us on Twitter (@deem_workshop) or Bluesky (@deem-workshop.bsky.social), or contact the organizers via email. We also provide archived websites of previous editions of the workshop: DEEM 2017, DEEM 2018, DEEM 2019, DEEM 2020, DEEM 2021, DEEM 2022, DEEM 2023, and DEEM 2024.
DEEM 2025 Proceedings: https://dl.acm.org/doi/proceedings/10.1145/3735654
Following last year's program structure, this year we will again hold poster sessions to spark more discussion and networking with the DEEM audience.
June 27th (all times are in Berlin Time / CEST)
Keynote Abstract: For machine learning on relational data, the current practice relies heavily on wrangling, preparing, cleaning, massaging, torturing data. I claim that this painful situation arises from a mismatch between machine learning models and the data, which comprises a mixture of types (strings, dates, numbers) and is spread across multiple tables that must be merged and aggregated before prediction. I will show how we have been rethinking the data preparation pipeline, with the skrub (https://skrub-data.org) software implementing new twists ranging from encoding data types and applying heuristics across columns to cross-validating and tuning any sequence of operations that assembles dataframes. I will also discuss how we have been improving tabular learning, creating more flexible models that apply readily to complex tables. For this, we bake in rich priors and knowledge to create table foundation models. Keynote Speaker Bio: Gaël Varoquaux is a research director working on data science at Inria (the French national research institute for computer science), where he leads the Soda team. He is also co-founder and scientific advisor of Probabl. Varoquaux's research covers fundamentals of artificial intelligence, statistical learning, natural language processing, and causal inference, as well as applications to health, with a current focus on public health and epidemiology. He also creates technology: he co-founded scikit-learn, one of the reference machine-learning toolboxes, and helped build various central tools for data analysis in Python. Varoquaux has worked at UC Berkeley, McGill, and the University of Florence. He did a PhD in quantum physics supervised by Alain Aspect and is a graduate of École Normale Supérieure, Paris.
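To make the data-preparation idea above concrete, here is a minimal sketch (not from the talk) of fitting a predictor directly on a table that mixes strings, dates, and numbers, using skrub's TableVectorizer inside a scikit-learn pipeline; the toy dataframe, its column names, and the labels are invented for illustration.

```python
# Minimal sketch, assuming skrub's TableVectorizer API; the dataframe,
# column names, and labels below are invented for illustration.
import pandas as pd
from sklearn.ensemble import HistGradientBoostingClassifier
from sklearn.pipeline import make_pipeline
from skrub import TableVectorizer

# A toy table mixing the data types mentioned in the abstract
df = pd.DataFrame({
    "employee": ["Alice", "Bob", "Carol", "Dan"],                   # strings
    "hired": pd.to_datetime(
        ["2020-01-15", "2018-06-01", "2021-09-30", "2019-03-12"]),  # dates
    "salary": [55000.0, 72000.0, 61000.0, 49000.0],                 # numbers
})
y = [0, 1, 0, 1]

# TableVectorizer chooses an encoder per column type, so the whole
# pipeline can be cross-validated and tuned end to end
model = make_pipeline(TableVectorizer(), HistGradientBoostingClassifier())
model.fit(df, y)
```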
Emmanouil Dilmperis (University of Piraeus), Yannis Poulakis (University of Piraeus), Dimitris Petratos (University of Piraeus), Christos Doulkeridis (University of Piraeus)
Melanie Sigl (Universität Erlangen-Nürnberg), Klaus Meyer-Wegener (Universität Erlangen-Nürnberg)
Hao Chen (BIFOLD & TU Berlin), Sebastian Schelter (BIFOLD & TU Berlin)
Mark Gerarts (Hasselt University), Juno Steegmans (Hasselt University), Jan Van den Bussche (Hasselt University)
Kevin Gutjahr (University of Bamberg), Clemens Ruck (University of Bamberg), Maximilian Schüle (University of Bamberg)
Keynote Speaker: Pınar Tözün (ITU)
Keynote Abstract: Deep learning tasks are computationally expensive, requiring powerful and costly hardware accelerators such as GPUs and TPUs. Both the efficiency of deep learning tasks and the effective utilization of the accelerators depend on how fast the relevant data is moved to the accelerator, which still heavily depends on the CPUs. In this talk, we will look into different aspects of reducing the CPU and data needs of deep learning to improve the end-to-end resource-efficiency of model training. First, we will explore today's landscape for the I/O path to GPUs. Then, we will investigate the impact of work sharing and data selection on the performance of deep learning model training. Keynote Speaker Bio: Pınar Tözün is an Associate Professor and the Head of the Data, Systems, and Robotics Section at the IT University of Copenhagen. Before ITU, she was a research staff member at IBM Almaden Research Center. Prior to joining IBM, she received her PhD from EPFL. Her thesis received an ACM SIGMOD Jim Gray Doctoral Dissertation Award Honorable Mention in 2016. Her research focuses on resource-aware machine learning, performance characterization of data-intensive systems, and the scalability and efficiency of data-intensive systems on modern hardware.
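As a hedged illustration of the CPU-bound input path the abstract refers to, the sketch below sets up a standard PyTorch DataLoader in which CPU worker processes prepare batches for the accelerator; the dataset, shapes, and parameter choices are hypothetical and not taken from the talk.

```python
# Illustrative sketch of the CPU-side input path to a GPU; the dataset,
# shapes, and parameters are hypothetical, not from the talk.
import torch
from torch.utils.data import DataLoader, Dataset

class RandomImages(Dataset):
    """Stand-in for a dataset whose decode/augment work runs on the CPU."""
    def __len__(self):
        return 1024

    def __getitem__(self, idx):
        return torch.randn(3, 224, 224), idx % 10

if __name__ == "__main__":
    loader = DataLoader(
        RandomImages(),
        batch_size=64,
        num_workers=4,     # CPU worker processes feeding the accelerator
        pin_memory=True,   # page-locked buffers speed up host-to-device copies
    )
    device = "cuda" if torch.cuda.is_available() else "cpu"
    for images, labels in loader:
        # non-blocking copies can overlap with GPU compute
        images = images.to(device, non_blocking=True)
```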
Jonas Schulze (University of Potsdam), Nils Straßenburg (University of Potsdam), Tilmann Rabl (HPI, University of Potsdam)
Sourish Chatterjee (Intel Labs), Rohit Verma (Intel Labs), Abhinav Kumar (IIT Hyderabad), Arun Raghunath (Intel Labs)
Jiashen Cao (Georgia Tech), Joy Arulraj (Georgia Tech), Hyesoon Kim (Georgia Tech)
Sayed Hoseini (Hochschule Niederrhein), Vincent Hermann (Hochschule Niederrhein), Christoph Quix (Fraunhofer FIT)
Francesco Pugnaloni (HPI), Tassilo Klein (SAP SE), Felix Naumann (HPI)
Lampros Flokas (Celonis), Jeffery Cao (Celonis), Yujian Xu (Celonis), Eugene Wu (Columbia University), Xu Chu (Celonis), Cong Yu (Celonis)
Submission website: https://cmt3.research.microsoft.com/DEEM2025
Notification of acceptance: April 25, 2025
Final papers due: May 16, 2025
Workshop: Friday, June 27, 2025
Applying Machine Learning (ML) in real-world scenarios is a challenging task. In recent years, the main focus of the data management community has been on creating systems and abstractions for the efficient training of ML models on large datasets. However, model training is only one of many steps in an end-to-end ML application, and a number of orthogonal data management problems arise from the large-scale use of ML and the increased adoption of large language models (LLMs).
For example, data preprocessing and feature extraction workloads may be complex and require the simultaneous execution of relational and linear-algebraic operations. Next, model selection may involve searching many combinations of model architectures, features, and hyper-parameters to find the best-performing model. After model training, the resulting model may have to be deployed, integrated into business workflows, and managed over its lifecycle using metadata and lineage. As a further complication, the resulting system may have to accommodate a heterogeneous audience, ranging from domain experts without programming skills to data engineers and statisticians who develop custom algorithms. Many such challenges are human- or engineer-centered (e.g., monitoring ML pipelines, leveraging LLMs for domain-specific tasks at scale), and DEEM uniquely encourages submissions on such topics.
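As a hedged illustration of these steps, the sketch below combines a relational aggregation, a linear-algebraic normalization of the resulting feature matrix, and a small hyper-parameter search; all table and column names are hypothetical.

```python
# Hedged sketch of mixed relational/linear-algebraic preprocessing followed
# by hyper-parameter search; all table and column names are hypothetical.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV

rng = np.random.default_rng(0)
events = pd.DataFrame({
    "user": rng.integers(0, 50, 500),
    "amount": rng.random(500) * 100,
    "label": rng.integers(0, 2, 500),
})

# Relational step: aggregate raw events into per-user features
feats = events.groupby("user").agg(
    total=("amount", "sum"), n=("amount", "count"), label=("label", "first"))

# Linear-algebraic step: standardize the resulting feature matrix
X = feats[["total", "n"]].to_numpy(dtype=float)
X = (X - X.mean(axis=0)) / X.std(axis=0)
y = feats["label"].to_numpy()

# Model selection: search hyper-parameter combinations for the best model
search = GridSearchCV(LogisticRegression(), {"C": [0.1, 1.0, 10.0]}, cv=3)
search.fit(X, y)
print(search.best_params_)
```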
Additionally, the importance of incorporating ethics and legal compliance into machine-assisted decision-making is being broadly recognized. Critical opportunities for improving data quality and representativeness, controlling for bias, and allowing humans to oversee and impact computational processes are missed if we do not consider the lifecycle stages upstream from model training and deployment. DEEM welcomes research on providing system-level support to data scientists who wish to develop and deploy responsible machine learning methods.
DEEM aims to bring together researchers and practitioners at the intersection of applied machine learning, data management, and systems research, with the goal of discussing the data management issues that arise in ML application scenarios.
- Data Management in Machine Learning Applications
- Definition, Execution and Optimization of Complex Machine Learning Pipelines
- Systems for ML, e.g. for Managing the Lifecycle of ML Models, Efficient Hyper-parameter Search, or Feature Selection
- Machine Learning Services in the Cloud
- Modeling, Storage and Provenance of Machine Learning Artifacts
- Integration of Machine Learning and Dataflow Systems
- Integration of Machine Learning and ETL Processing
- Definition and Execution of Complex Ensemble Predictors
- Sourcing, Labeling, Integrating, and Cleaning Data for Machine Learning
- MLOps, Data Validation, and Model Debugging Techniques
- Privacy-preserving Machine Learning
- Benchmarking of Machine Learning Applications
- Responsible Data Management
- Transparency and Accountability of Machine-Assisted Decision Making
- Impact of Data Quality and Data Preprocessing on the Fairness of ML Predictions
- Horror Stories, Anecdotes, and Lessons Learned on Data Management for ML
- Data Management for Multimodal ML
- Vector Databases for Retrieval and Systems for Retrieval Augmented Generation
- ML for Data Management for ML
- Data Management Challenges for LLMs, e.g., Responsible Data Management
We invite submissions in the following two tracks:
- Regular Papers (research and industrial papers; up to 8 pages, plus unlimited references)
- Short Papers (preliminary results, interesting use cases, problems, datasets, benchmarks, visionary ideas, system designs, and descriptions of system components and tools; up to 4 pages)
Authors are requested to prepare submissions following the ACM proceedings format, consistent with the SIGMOD submission guidelines. Please use the latest ACM paper format with the sigconf template. DEEM is a single-anonymous workshop; authors must include their names and affiliations on the manuscript cover page. Submission website: https://cmt3.research.microsoft.com/DEEM2025
Inclusion and Diversity in Writing: https://2025.sigmod.org/calls_papers_inclusion_and_diversity.shtml
Workshop Organizers:
- Madelon Hulsebos (CWI, Netherlands)
- Matteo Interlandi (Microsoft GSL, USA)
- Shreya Shankar (UC Berkeley, USA)
- Stefan Grafberger (BIFOLD & TU Berlin, Germany)
Steering Committee:
- Juliana Freire (New York University)
- Bill Howe (University of Washington)
- H.V. Jagadish (University of Michigan)
- Volker Markl (TU Berlin)
- Stefan Seufert (Amazon Research)
- Markus Weimer (Microsoft AI)
- Sebastian Schelter (BIFOLD & TU Berlin)
Program Committee:
- Anna Pavlenko, Microsoft Gray Systems Lab
- Bojan Karlaš, Harvard University
- Gerardo Vitagliano, MIT CSAIL
- Haralampos Gavriilidis, TU Berlin
- Jacopo Tagliabue, Bauplan
- Joy Arulraj, Georgia Tech
- Konstantinos Kanellis, University of Wisconsin-Madison
- Manisha Luthra, TU Darmstadt and DFKI
- Matthias Boehm, Technische Universität Berlin
- Maximilian Böther, ETH Zurich
- Maximilian Schüle, University of Bamberg
- Pinar Tozun, IT University of Copenhagen
- Rainer Gemulla, Universität Mannheim
- Sebastian Schelter, BIFOLD & TU Berlin
- Shreya Shankar, University of California Berkeley
- Sivaprasad Sudhir, MIT
- Ties Robroek, IT University of Copenhagen
- Till Döhmen, MotherDuck
- Xue Li, CWI
- Yiming Lin, University of California, Berkeley
- Zezhou Huang, Columbia University
