Hi, I’m Kowe (co-weh)! I am currently seeking full-time industry opportunities as a Research Scientist, UX Researcher, or Data Scientist in Responsible AI! As a final-year PhD candidate at Cornell Tech in the Social Technologies Lab, I develop evaluation frameworks to characterize new harms in AI systems and design interventions that make AI safe, trustworthy, and inclusive. My work combines mixed-methods approaches from human-computer interaction (HCI) with evaluation methods from natural language processing (NLP), and falls broadly under responsible AI (RAI). My research has been supported by The National GEM Consortium, Cornell’s Digital Life Initiative, and LinkedIn. I am also a member of Cornell’s AI Policy and Practice group and the CTRL-ALT group.
Previously, I received a B.S., summa cum laude, in Computer Engineering from Florida A&M University. As an undergraduate, I pursued a range of research experiences, from biomedical engineering to ethics. I have also worked on several patent matters at law firms and in-house.
News
Jul 18, 2025
New paper accepted at AIES
Jun 23, 2025
Attended FAccT Doctoral Consortium
Apr 1, 2025
Two CHI acceptances and one Honorable Mention (top 5% of submissions)
Selected Papers
Generative AI and Perceptual Harms: Who’s Suspected of Using LLMs?
Kowe Kadoma, Danaé Metaxa, and Mor Naaman
In Proceedings of the 2025 CHI Conference on Human Factors in Computing Systems
Large language models (LLMs) are increasingly integrated into a variety of writing tasks. While these tools can help people generate ideas or produce higher-quality work, like many other AI tools they risk causing a variety of harms, potentially disproportionately burdening historically marginalized groups. In this work, we introduce and evaluate perceptual harms, a term for the harms caused to users when others perceive or suspect them of using AI. We examined perceptual harms in three online experiments, in each of which participants evaluated write-ups from mock freelance writers. We asked participants to state whether they suspected the freelancers of using AI, to rank the quality of their writing, and to evaluate whether they should be hired. We found some support for perceptual harms against certain demographic groups. At the same time, perceptions of AI use negatively impacted writing evaluations and hiring outcomes across the board.
The Role of Inclusion, Control, and Ownership in Workplace AI-Mediated Communication
Kowe Kadoma, Marianne Aubin Le Quéré, Xiyu Jenny Fu, and 3 more authors
In Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems
Given large language models’ (LLMs) increasing integration into workplace software, it is important to examine how biases in these models may impact workers. For example, stylistic biases in the language suggested by LLMs may cause feelings of alienation and result in increased labor for individuals or groups whose writing style does not match the model’s. We examine how such writer-style bias impacts inclusion, control, and ownership over the work when co-writing with LLMs. In an online experiment, participants wrote hypothetical job promotion requests using either hesitant or self-assured auto-complete suggestions from an LLM and reported their subsequent perceptions. We found that the style of the AI model did not impact perceived inclusion. However, individuals with higher perceived inclusion perceived greater agency and ownership, an effect that was stronger for participants of minoritized genders. Feelings of inclusion also mitigated the loss of control and agency that came with accepting more AI suggestions.