FORC 2023 Program
Berg Hall, 2nd floor, Li Ka Shing Building.
Wednesday, June 7, 2023
8:00-9:15: Breakfast, Coffee, Nametags
9:15-10:15: Keynote: Sebastien Bubeck
Title: First Contact
Abstract: The new wave of AI systems, ChatGPT and its more powerful successors, exhibit extraordinary capabilities across a broad swath of domains. In light of this, we will discuss whether artificial INTELLIGENCE has arrived.
Bio: Sebastien Bubeck is a Senior Principal Research Manager in the Machine Learning Foundations group at Microsoft Research (MSR). He joined the Theory Group at MSR in 2014, after three years as an assistant professor at Princeton University. His work on convex optimization, online algorithms, and adversarial robustness in machine learning has received several best paper awards (STOC 2023, NeurIPS 2018 and 2021 best paper, ALT 2018 and 2023 best student paper in joint work with MSR interns, COLT 2016 best paper, and COLT 2009 best student paper). Recently he has focused on exploring a physics-like theory of neural network learning.
10:15-10:45: Coffee Break
10:45-12:15: Session 1
Session Chair: Kunal Talwar
Concurrent Composition Theorems for Differential Privacy. Arxiv-1, Arxiv-2.
Xin Lyu, Salil Vadhan and Wanrong Zhang
The Price of Differential Privacy under Continual Observation. Arxiv.
Palak Jain, Sofya Raskhodnikova, Satchit Sivakumar and Adam Smith.
Differentially Private Aggregation via Imperfect Shuffling. pdf.
Badih Ghazi, Ravi Kumar, Pasin Manurangsi, Jelani Nelson and Samson Zhou.
Node-Differentially Private Estimation of the Number of Connected Components. Arxiv.
Iden Kalemaj, Sofya Raskhodnikova, Adam Smith and Charalampos Tsourakakis.
12:15-2:00: Lunch (provided)
2:00-3:10: Session 2
Session Chair: Jessica Sorrell
On-Demand Sampling: Learning Optimally from Multiple Distributions. Arxiv.
Nika Haghtalab, Michael Jordan and Eric Zhao.
Distributionally Robust Data Join. Arxiv. Proceedings.
Pranjal Awasthi, Christopher Jung and Jamie Morgenstern.
Diagnosing Model Performance Under Distribution Shift. Arxiv.
Tiffany Cai, Hongseok Namkoong and Steve Yadlowsky.
3:10-3:50: Coffee Break
3:50-5:00: Session 3
Session Chair: Parikshit Gopalan
Multiplicative Metric Fairness Under Composition. Proceedings.
Milan Mossé.
Group fairness in dynamic refugee assignment. Arxiv.
Daniel Freund, Thodoris Lykouris, Elisabeth Paulson, Bradley Sturt and Wentao Weng.
Multicalibration as Boosting for Regression. Arxiv.
Ira Globus-Harris, Declan Harrison, Michael Kearns, Aaron Roth and Jessica Sorrell.
Thursday, June 8, 2023
8:00-9:15: Breakfast, Coffee, Nametags
9:15-10:15: Keynote: Kristian Lum
Title: Defining, Measuring, and Addressing Algorithmic Amplification
Abstract: As people consume more content delivered by recommender systems, it has become increasingly important to understand how content is amplified by these recommendations. Much of the recent work to study algorithmic amplification implicitly assumes that the algorithm is a single machine learning model acting on an immutable corpus of content to be recommended. Additionally, there is an inherent assumption of a neutral “non-algorithmic” baseline against which to compare. In actuality, there are several other components of the system that are not traditionally considered part of the algorithm that influence what ends up on a user’s content feed and potentially corrupt the neutrality of any baseline measurement: upstream editorial policies or decisions that determine what content is eligible to be ranked by the algorithmic recommender system, including NSFW and toxicity filtering; peripheral models that shape the evolution of the social graph, such as account recommendation models; and explicit user preferences and behaviors. All of these components affect what ultimately gets amplified and can ultimately confound how we measure amplification.
Our proposed paper has three aims. First, we will enumerate some of these components that influence algorithmic amplification. Second, we will explore how the assumption of a “neutral” baseline that was not shaped by prior behavior of these components, particularly the “reverse chronological” content feed, can lead to poor measurement of amplification. Third, we will suggest some paths forward for measurement and mitigation that address the same concerns that underlie the recent discourse around algorithmic amplification but do not rely on the existence of a neutral baseline.
Bio: Kristian Lum is an Associate Research Professor at the University of Chicago Data Science Institute specializing in statistics, machine learning, and computational social science. She holds a Ph.D. in Statistics from Duke University and earned her undergraduate degree from Rice University. Kristian’s expertise lies in advancing fairness and transparency in predictive algorithms, particularly within criminal justice applications. She was formerly a Research Lead for Twitter’s ML Ethics, Transparency, and Accountability team, where she led efforts to operationalize and develop metrics for the Responsible ML initiative. She was also Lead Statistician at the Human Rights Data Analysis Group, where she developed new statistical techniques whose analyses supported advocacy efforts around the use of algorithms in criminal justice. Her research has been widely published, and she is a sought-after speaker known for effectively communicating complex concepts to diverse audiences. Through her commitment to social justice and responsible data science, Kristian Lum is making significant contributions to shaping the field and promoting equitable decision-making systems.
10:15-10:45: Coffee Break
10:45-12:15: Session 4
Session Chair: Kunal Talwar
From the Real Towards the Ideal: Risk Prediction in a Better World. Proceedings.
Cynthia Dwork, Omer Reingold and Guy Rothblum.
New Algorithms and Applications for Risk-Limiting Audits. Proceedings.
Bar Karov and Moni Naor.
Fair Grading Algorithms for Randomized Exams. Arxiv. Proceedings.
Jiale Chen, Jason Hartline and Onno Zoeter.
Resistance to Timing Attacks for Sampling and Privacy Preserving Schemes. Proceedings.
Yoav Ben Dov, Liron David, Moni Naor, and Elad Tzalik.
12:15-2:00: Lunch (provided)
2:00-3:10: Session 5
Session Chair: Chara Podimata
Fair Correlation Clustering in Forests. Arxiv. Proceedings.
Katrin Casel, Tobias Friedrich, Martin Schirneck and Simon Wietheger.
Recommending to Strategic Users. Arxiv.
Andreas Haupt, Dylan Hadfield-Menell and Chara Podimata.
Accounting for Stakes in Democratic Decisions. pdf.
Bailey Flanigan, Ariel Procaccia and Sven Wang.
3:10-3:50: Coffee Break
3:50-5:00: Poster Session for all papers (optional)
Friday, June 9, 2023
8:00-9:15: Breakfast, Coffee, Nametags
9:15-10:15: Keynote: Kobbi Nissim. Slides PDF
Title: Can we Reconcile the Computer Science and Legal Views of Privacy?
Abstract: Law and computer science interact in critical ways within sociotechnical systems, and recognition is growing among computer scientists, legal scholars, and practitioners of significant gaps between these disciplines that create potential risks for privacy and data protection. These gaps need to be bridged to ensure that computer systems are designed and implemented to correctly address applicable legal requirements and that interpretations of legal concepts accurately reflect the capabilities and limitations of technical systems. We will explore some of the gaps between the legal and technical views of privacy and suggest directions by which these gaps may be reconciled.
Bio: Kobbi Nissim is the McDevitt Chair in Computer Science at Georgetown University and an affiliate professor at Georgetown Law. His work from 2003 and 2004 with Dinur and Dwork initiated rigorous foundational research on privacy, and in 2006 he introduced Differential Privacy with Dwork, McSherry and Smith. Nissim was awarded the Paris Kanellakis Theory and Practice Award in 2021, the Gödel Prize in 2017, and the PODS and TCC Test of Time Awards in 2013, 2016, and 2018. He studied at the Weizmann Institute with Prof. Moni Naor.
10:15-10:45: Coffee Break
10:45-12:15: Session 6
Session Chair: Michael P. Kim
Screening with Disadvantaged Agents. Arxiv.
Hedyeh Beyhaghi, Modibo Camara, Jason Hartline, Aleck Johnsen and Sheng Long.
Setting Fair Incentives to Maximize Improvement. Proceedings.
Saba Ahmadi, Hedyeh Beyhaghi, Avrim Blum and Keziah Naggita.
Bidding Strategies for Proportional Representation in Advertisement Campaigns. Arxiv. Proceedings.
Inbal Livni Navon, Charlotte Peale, Omer Reingold and Judy Hanwen Shen.
An Algorithmic Approach to Address Course Enrollment Challenges. Arxiv. Proceedings.
Arpita Biswas, Yiduo Ke, Samir Khuller and Quanquan C. Liu.
12:15-2:00: Lunch (provided)
2:00-3:10: Session 7
Session Chair: Badih Ghazi
Stability is Stable: Connections between Replicability, Privacy, and Adaptive Generalization. Arxiv.
Mark Bun, Marco Gaboardi, Max Hopkins, Russell Impagliazzo, Rex Lei, Toniann Pitassi, Satchit Sivakumar and Jessica Sorrell.
From Robustness to Privacy and Back. Arxiv.
Hilal Asi, Jonathan Ullman and Lydia Zakynthinou.
Fast, Sample-Efficient, Affine-Invariant Private Mean and Covariance Estimation for Subgaussian Distributions. Arxiv.
Gavin Brown, Samuel Hopkins and Adam Smith.
3:10-3:50: Coffee Break
3:50-5:00: Session 8
Session Chair: Ilya Mironov
Control, Confidentiality, and the Right to be Forgotten. Arxiv.
Aloni Cohen, Adam Smith, Marika Swanberg and Prashan Nalini Vasudevan.
Forget Unlearning: Towards True Data-Deletion in Machine Learning. Arxiv.
Rishav Chourasia and Neil Shah.
Ticketed Learning–Unlearning Schemes. PDF.
Badih Ghazi, Pritish Kamath, Ravi Kumar, Pasin Manurangsi, Ayush Sekhari and Chiyuan Zhang.
