about me
I am a principal researcher in the Fairness, Accountability, Transparency, and Ethics in AI (FATE) group at Microsoft Research Montréal. I’m broadly interested in examining the social and ethical implications of natural language processing technologies; I develop approaches for anticipating, measuring, and mitigating harms arising from language technologies, focusing on the complexities of language and language technologies in their social contexts, and on supporting NLP practitioners in their ethical work. I’ve also worked on using NLP approaches to examine language variation and change (computational sociolinguistics), for example developing models to identify language variation on social media. I was named one of the 2022 100 Brilliant Women in AI Ethics.
I was previously a postdoctoral researcher at MSR Montréal. I completed my Ph.D. in computer science at the University of Massachusetts Amherst working in the Statistical Social Language Analysis Lab under the guidance of Brendan O’Connor, where I was also supported by the NSF Graduate Research Fellowship. I received my B.A. in mathematics from Wellesley College. I interned at Microsoft Research New York in summer 2019, where I worked with Solon Barocas, Hal Daumé III, and Hanna Wallach.
recent news
Sept. 2025: Our position paper arguing that AI research and practice need a broader conception of rigor has been accepted to NeurIPS 2025.
Sept. 2025: I am now a principal researcher!
June 2025: I’m a General Chair for FAccT 2026, to be held in Montréal. Submit your work!
May 2025: Our paper on interventions for mitigating anthropomorphic AI system behaviors, led by Myra Cheng, has been accepted to ACL 2025 (with an SAC Highlights Award!), and another paper, on gaps between research and practice when measuring representational harms, led by Emma Harvey, has been accepted to Findings of ACL.
May 2025: Our position paper arguing that evaluating generative AI systems is a social science measurement challenge has been accepted to ICML 2025.
Apr. 2025: I gave keynotes at the Human-centered Evaluation and Auditing of Language Models Workshop (CHI 2025) and Workshop on Noisy and User-generated Text (NAACL 2025).
Apr. 2025: Our paper examining people’s experiences of machine learning errors reflecting stereotypes has been accepted to FAccT 2025.
Jan. 2025: Our paper on linguistic expressions that contribute to anthropomorphism, led by Alicia DeVrio, has been accepted to CHI 2025, and a blog post has been accepted at ICLR 2025.
Dec. 2024: I gave a keynote and participated in two panels at the International Web Information Systems Engineering Conference (WISE) 2024.
Nov. 2024: The fourth edition of our workshop bridging HCI and NLP will take place at EMNLP 2025 (co-organized with Amanda Cercas Curry, Sunipa Dev, Siyan Li, Michael Madaio, Jack Wang, Sherry Wu, Ziang Xiao, and Diyi Yang)!
Oct. 2024: Two tiny papers accepted at the EvalEval Workshop: one arguing that evaluating generative AI systems is a social science measurement challenge, and another, led by Emma Harvey, on gaps between research and practice when measuring representational harms. I was also on a panel reflecting on the AI evaluation landscape.
Sept. 2024: Our paper on writers’ and readers’ conceptions of and experiences with authenticity in AI-assisted writing, led by Angel Hsing-Chi Hwang, has been accepted to CSCW 2025.
May 2024: One paper accepted to ACL 2024 contributing a framework formalizing the benchmark design process, led by Yu Lu Liu, and another to Findings of ACL on impacts of language technologies’ disparities on African American Language speakers, led by Jay Cunningham.
Mar. 2024: Two papers accepted to NAACL 2024: one examining expectations around what constitutes fair or good NLG system behaviors, led by Lucy Li, and the other examining the shifting landscape of practices and assumptions around disagreement in data labeling.
Nov. 2023: I was a guest speaker at the Gender & Tech event, hosted by the University of Cambridge Centre for Gender Studies to celebrate The Good Robot Podcast and the launch of a volume on feminist AI!
Oct. 2023: Our paper on responsible AI practices in text summarization research, led by Yu Lu Liu, has been accepted to Findings of EMNLP 2023.
Oct. 2023: Jackie C.K. Cheung, Vera Liao, Ziang Xiao, and I will be co-organizing a tutorial on human-centered evaluation of language technologies at EMNLP 2024!
Oct. 2023: The third edition of our workshop bridging HCI and NLP will take place at NAACL 2024 (co-organized with Amanda Cercas Curry, Sunipa Dev, Michael Madaio, Ani Nenkova, Ziang Xiao, and Diyi Yang)!
July 2023: I gave a keynote at the Workshop on Online Abuse and Harms (WOAH) at ACL 2023.
June 2023: I gave a keynote at the Workshop on Algorithmic Injustice at the University of Amsterdam, and participated in a panel on algorithmic injustice at SPUI25.
May 2023: One paper accepted to ACL 2023 contributing a dataset for evaluating fairness-related harms in text generation, led by Eve Fleisig, and two more accepted to Findings of ACL: a paper on conceptualizations of NLP tasks and benchmarks led by Arjun Subramonian, and a paper on the landscape of prompt-based measurements of bias.
Nov. 2022: Our paper on representational harms in image tagging has been accepted to AAAI 2023.
June 2022: I gave keynotes at the Second Workshop on Language Technology for Equality, Diversity, and Inclusion and the 1st Workshop on Perspectivist Approaches to NLP.
May 2022: Delighted to be continuing at MSR Montréal as a senior researcher!
May 2022: Honored to have served as ethics co-chair for ACL 2022.
May 2022: Our paper exploring NLG practitioners’ evaluation assumptions and practices, led by Kaitlyn Zhou, has been accepted to NAACL 2022.
May 2022: Vera Liao, Alexandra Olteanu, and I co-organized a CHI panel: “Responsible Language Technologies: Foreseeing and Mitigating Harms”.
Dec. 2021: Honored to have been named one of the 100 Brilliant Women in AI Ethics for 2022.
