Embedding Ethics in Computer Science
This site provides access to curricular materials created by the Embedded Ethics team at Stanford for undergraduate computer science courses. The materials are designed to expose students to ethical issues that are relevant to the technical content of CS courses, to provide students with structured opportunities to engage in ethical reflection through lectures, problem sets, and assignments, and to build the ethical character and habits of young computer scientists.
Artificial Intelligence: Principles and Techniques
The ethics materials cover principles of justice and equality, responsibilities to present and future generations, dual use technologies, and the NeurIPS Code of Ethics. Each assignment in the course includes opportunities for ethical reflection and choice. Our hope is that students will build skills in ethical decision-making at the same time that they are learning artificial intelligence and machine learning principles and techniques, seeing these two competencies as linked responsibilities of engineers.
Design and Analysis of Algorithms
The ethics materials focus on difficulties that computer scientists and engineers might face when trying to apply algorithms to complex, real-world problems: incommensurable values, imperfect proxy measures, threats faced by people whose personal information might be included in or excluded from the data algorithms operate on, problems with idealization and abstraction, and wicked problems like equitable hiring. The materials also explore how philosophical theories of morality or justice (like utilitarianism) can and cannot help us resolve the difficult questions about value that arise when applying algorithms in the real world.
From Language to Information
There are three ethics units in the class: one on sentiment classification, one on information retrieval, and one on large language models. The ethics content is embedded within three labs that students work on in groups during the scheduled class period. As a result, the ethics material is a mix of technical problems students work through to practice the material and broader questions they discuss in small groups and with the rest of the class. The topics covered by these three units include: biases introduced by sentiment classifiers, issues of data privacy and data sovereignty related to personalized search engines, and the societal implications of large language models.
Human-centered Product Management
This lecture helps students understand how ethics and values are built into products through the theoretical frameworks of the politics of technological artifacts and the Social Construction of Technology (SCOT).
Introduction to Computer Organization & Systems
Due to the subject matter of the course, the ethics materials focus naturally on security topics such as privacy, trust, partiality, and responsible disclosure. These topics dovetail neatly with the content of the course, such as integer overflows, race conditions, penetration testing, and low-level security.
Introduction to Game Design
The ethics materials cover toxicity in gaming: what it is, what causes and exacerbates it, and what designers can do about it. In answering these questions, we draw on important and familiar examples from contemporary gaming, from Gamergate to League of Legends.
Introduction to Human-Computer Interaction
These materials highlight ethical questions that arise at different stages of the design process, focusing in particular on design discovery, exploration, and evaluation. They use the framework of _values in design_ to enable students to appreciate how values are encoded in design decisions, how value conflicts may emerge in design, and how they may be navigated.
Machine Learning
This ethics lecture introduces students to machine learning (ML) as a socio-technical phenomenon, surveys key ethical issues particularly relevant to ML practitioners, explores the complexity of ethical reasoning, and emphasizes the importance of reflective, ethically informed programming practices.
Operating Systems Principles
The ethics materials cover principles of trust in operating systems. Lectures provide a framing of trust as an unquestioning attitude that extends our agency, different ways trust manifests (assumption, inference), technical and socio-technical approaches that partly substitute for the need to trust, and contextual factors related to trusting operating systems. Assignments explore case studies related to race conditions, long-term OS support, and file system permissions.
Probability for Computer Scientists
The ethics component introduces students to philosophical and statistical conceptions of fairness and applies these ideas to AI technologies used to predict recidivism in the criminal justice system. Other ethics topics include election forecasting and climate change.
Programming Methodology
The ethics component introduces students to the distinction between descriptive and normative language, the concepts of bias and fairness, the ethical implications of problem formulation, and issues of representation in data. The materials also consider the ethics of image manipulation and generative AI technologies.
Reinforcement Learning
The ethics component addresses the problem of value alignment and gets students to consider the implications of different targets of alignment (e.g., user intentions, user preferences, users' best interests, moral rightness) in the context of LLM chatbots. Students are also introduced to top-down and bottom-up approaches to value alignment.
AI and Ethics
Course: Probability
In this module, students are introduced to philosophical and statistical conceptions of fairness including parity and calibration, and apply these ideas to algorithms that have disparate impacts on different protected groups, including AI algorithms that predict criminal recidivism. Along the way, students consider the ethical upshot of AI and probability in other domains, including climate change and election forecasting.
Algorithms in an Imperfect World
• 80 min
Course: Algorithms
In this module, students consider how social problems like homelessness and inequality differ from the sorts of problems they have learned to solve with algorithms. Students are introduced to the concept of a wicked problem, as well as various interpretations of inequality/inequity, and explore how candidate recommendation systems could be built to promote equity in hiring.
Algorithms in the Real World
• 80 min
Course: Algorithms
In this module, students explore barriers to applying comparison-based algorithms in the real world: incommensurable values, imperfect proxy measures, and problems with idealization and abstraction. Students also consider threats faced by people whose personal information might be included in or excluded from the data their algorithms operate on. One version of the lecture uses shortest path algorithms as a running example; an alternative version uses sorting as the example. The associated homework problems are compiled from several different problem sets across multiple versions of the course. Each of these problems includes at least a subpart that asks students to consider what other values might be at stake in the problem, how these values should be measured or approximated, how the collection of the relevant data might impact the people involved, or how idealization and abstraction in problem formulation might lead to real-world harms.
Banking on Security
Course: Intro to Systems
This assignment is about assembly, reverse engineering, security, privacy and trust. An earlier version of the assignment by Randal Bryant & David O'Hallaron (CMU), [accessible here](https://csapp.cs.cmu.edu/public/labs.html), used the framing story that students were defusing a ‘bomb’.
Bits, Bytes, and Overflows
Course: Intro to Systems
The assignment is the first in an introduction to systems course. It covers bits, bytes, and overflow, continuing students’ introduction to bitwise and arithmetic operations. Following the saturating arithmetic problem, we added a case study analysis of the Ariane 5 rocket launch failure. This provided students with a vivid illustration of the potential consequences of overflows, as well as an opportunity to reflect on their responsibilities as engineers. The starter code is the full project provided to students.
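The overflow behavior at issue can be sketched in a few lines (an illustrative example, not the course's starter code): a two's-complement 32-bit add that wraps on overflow, versus a saturating add that clamps at the representable limits.

```python
INT32_MIN, INT32_MAX = -2**31, 2**31 - 1

def wrapping_add(a, b):
    """Two's-complement 32-bit addition: on overflow the sum wraps around."""
    s = (a + b) & 0xFFFFFFFF
    return s - 2**32 if s > INT32_MAX else s

def saturating_add(a, b):
    """Clamp the mathematical sum to the representable 32-bit signed range."""
    return max(INT32_MIN, min(INT32_MAX, a + b))

# Wrapping silently turns a large positive sum negative,
# while saturation pins it at the maximum representable value.
print(wrapping_add(INT32_MAX, 1))    # -2147483648
print(saturating_add(INT32_MAX, 1))  # 2147483647
```

The silent sign flip in the first call is the kind of failure the Ariane 5 case study dramatizes.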
Climate Change & Calculating Risk
Course: Probability
This assignment uses the tools of probability theory to introduce students to _risk weighted expected utility_ models of decision making. The risk weighted expected utility framework is then used to understand decision-making under uncertainty in the context of climate change. Which of the IPCC’s forecasts should we use? Do we owe it to future people to adopt a conservative risk profile when making decisions on their behalf? The assignment also introduces normative principles for allocating responsibility for addressing climate change. Students apply these formal tools and frameworks to understanding the ethical dimensions of climate change.
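One way the framework can be made concrete (a minimal sketch of a Buchak-style formulation; the gamble and risk functions are illustrative, not the assignment's):

```python
def risk_weighted_eu(outcomes, probs, r):
    """Risk-weighted expected utility (sketch).

    outcomes: utilities of each outcome; probs: their probabilities;
    r: a weighting function from [0, 1] to [0, 1] applied to the
    probability of doing at least as well as each outcome.
    """
    pairs = sorted(zip(outcomes, probs))           # worst outcome first
    utils = [u for u, _ in pairs]
    ps = [p for _, p in pairs]
    reu = utils[0]                                 # guaranteed at minimum
    for i in range(1, len(utils)):
        tail = sum(ps[i:])                         # P(at least utils[i])
        reu += r(tail) * (utils[i] - utils[i - 1])
    return reu

gamble = dict(outcomes=[0, 10], probs=[0.5, 0.5])
print(risk_weighted_eu(**gamble, r=lambda p: p))     # 5.0 (plain EU)
print(risk_weighted_eu(**gamble, r=lambda p: p**2))  # 2.5 (risk-averse)
```

With the identity risk function this reduces to ordinary expected utility; a convex function such as r(p) = p² overweights worst-case outcomes, the kind of conservative risk profile the assignment asks students to consider adopting on behalf of future people.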
Concept Video
Course: Intro to HCI
This assignment asks students to consider what values are encoded in their product and the decisions they make in the design process; whether there are conflicting values; and how they address existing value conflicts.
Data Ethics: Choices and Values
Course: Programming Methodology
In this lecture students learn what values are and how they show up in the scientific and engineering process.
Design Discovery and Needfinding
• 10 min
Course: Intro to HCI
The lecture covers topics associated with power relations, the use of language, standpoint and inclusion as they arise in the context of design discovery.
Disparate Impact and Equality of Opportunity
Course: Algorithms
In this module, students are introduced to the distinction between disparate treatment and disparate impact in the context of hiring and protected categories. Then, they apply this distinction in an algorithms context. They also consider the relationship between disparate impact and equality of opportunity.
Ethics in Advanced Technology
Course: AI Principles
After successfully creating a component of a self-driving car – a (virtual) sensor system that tracks other surrounding cars based on noisy sensor readings – students are prompted to reflect on ethical issues related to the creation, deployment, and policy governance of advanced technologies like self-driving cars. Students encounter classic concerns in the ethics of technology such as surveillance, ethics dumping, and dual-use technologies, and apply these concepts to the case of self-driving cars.
Ethics in Computing
Course: Programming Methodology
This lecture introduces students to the distinction between descriptive and normative language, the concepts of bias and fairness, the ethical implications of problem formulation, issues of representation in data, and the ethics of image manipulation.
Ethics of Machine Learning
Course: Machine Learning
This lecture introduces students to machine learning (ML) as a socio-technical phenomenon, surveys key ethical issues particularly relevant to ML practitioners, explores the complexity of ethical reasoning, and emphasizes the importance of reflective, ethically informed programming practices.
Ethics of Products
Course: Human-centered Product Management
This lecture helps students understand how ethics and values are built into products through the theoretical frameworks of the politics of technological artifacts and the Social Construction of Technology (SCOT).
Fairness, Representation, and Machine Learning
Course: Probability
This assignment builds on introductory knowledge of machine learning techniques, namely the naïve Bayes algorithm and logistic regression, to introduce concepts and definitions of algorithmic fairness. Students analyze sources of bias in algorithmic systems, then learn formal definitions of algorithmic fairness such as independence, separation, and fairness through awareness or unawareness. They are also introduced to notions of fairness that complicate the formal paradigms, including intersectionality and subgroup analysis, representation, and justice beyond distribution.
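Two of the formal criteria can be stated compactly (a hypothetical sketch with toy data, not the assignment's code): independence compares positive-prediction rates across groups, while separation compares true-positive rates.

```python
def parity_gap(preds, groups):
    """Independence (demographic parity): largest gap in the
    positive-prediction rate between any two groups."""
    rates = []
    for g in set(groups):
        members = [p for p, gg in zip(preds, groups) if gg == g]
        rates.append(sum(members) / len(members))
    return max(rates) - min(rates)

def separation_gap(preds, labels, groups):
    """Separation: largest gap in the true-positive rate
    (predictions on truly positive cases) between any two groups."""
    tprs = []
    for g in set(groups):
        pos = [p for p, y, gg in zip(preds, labels, groups)
               if gg == g and y == 1]
        tprs.append(sum(pos) / len(pos))
    return max(tprs) - min(tprs)

preds  = [1, 0, 1, 1]
labels = [1, 1, 1, 0]
groups = ["a", "a", "b", "b"]
print(parity_gap(preds, groups))              # 0.5
print(separation_gap(preds, labels, groups))  # 0.5
```

The intersectionality and subgroup-analysis material complicates exactly this picture: a classifier can close these gaps for each coarse group while still failing at their intersections.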
Foundations: Code of Ethics
Course: AI Principles
In Problem 3 of this assignment, “Ethical Issue Spotting,” students explore the ethics of four different real-world scenarios using the ethics guidelines produced by a machine learning research venue, the NeurIPS conference. Students write a potential negative social impacts statement for each scenario, determining if the algorithm violates one of the sixteen guidelines listed in the NeurIPS Ethical Guidelines. In doing so, they practice spotting potential ethical concerns in real-world applications of AI and begin taking on the role of a responsible AI practitioner.
Heuristic Evaluation
Course: Intro to HCI
This assignment asks students to evaluate their peers’ projects through a series of heuristics and to respond to others’ evaluations of their projects. By incorporating ethics questions into this evaluation, we prompt students to consider ethical aspects as part of a product’s design features, to be evaluated alongside other design aspects.
Infinite Story
Course: Programming Methodology
In this assignment, students write a choose-your-own-adventure game that harnesses the power of generative AI. In the end, they are asked to think about the ethical implications of using generative AI in storytelling applications.
Information Retrieval
Course: From Language to Information
Students solve a technical problem to understand how information retrieval systems rank relevant content. They practice computing precision and recall. Then, they are prompted to think about how these systems and metrics can interact with values like privacy.
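The two metrics students compute can be expressed in a few lines (an illustrative sketch with made-up document IDs):

```python
def precision_recall(retrieved, relevant):
    """Precision: fraction of retrieved documents that are relevant.
    Recall: fraction of relevant documents that were retrieved."""
    retrieved, relevant = set(retrieved), set(relevant)
    hits = len(retrieved & relevant)
    return hits / len(retrieved), hits / len(relevant)

# 2 of the 4 retrieved documents are relevant,
# and 2 of the 3 relevant documents were retrieved.
p, r = precision_recall(retrieved={1, 2, 3, 4}, relevant={2, 4, 6})
print(p, r)
```

The ethical discussion starts where the metrics stop: a personalized engine can raise both numbers precisely by learning more about the user than the user may want it to know.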
Lab: Therac-25 Case Study
Course: Intro to Systems
This lab, the last of the course, asks students to discuss the case of Therac-25, a medical device that delivered lethal radiation due to a race condition.
Large Language Models
Course: From Language to Information
Students solve a technical problem to understand how simple neural networks work. Then, more broadly, they are prompted to think about vulnerabilities in large language model (LLM) chatbots.
Making Value Judgments
• 80 min
Course: Algorithms
In this module, students return to the problem of incommensurable values and learn about moral theories that purport to tell us how to make value judgments in the face of seeming incommensurability. Students compare and contrast these theories and explore the limits to what they can tell us.
Medium-Fi Prototype
Course: Intro to HCI
This assignment asks students to consider what values are encoded in their product and the decisions they make in the design process; whether there are conflicting values; and how they address existing value conflicts.
Modeling Sea Level Rise
Course: AI Principles
This assignment is about Markov Decision Processes (MDPs). In Problem 5, we use the MDP the students have created to model how a coastal city government’s mitigation choices will affect its ability to adapt to rising sea levels over the course of multiple decades. At each timestep, the government may choose to invest in infrastructure or save its surplus budget. But the amount that the sea will rise is uncertain: each choice is a risk. Students model the city’s decision-making under two different time horizons, 40 or 100 years, and with different discount factors for the well-being of future people. In both cases, they see that choosing a longer time horizon or a smaller discount factor will lead to more investment now. Students are then introduced to five ethical positions on the comparative value of current and future generations’ well-being. They evaluate their modeling choices in light of their choice of ethical position.
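The effect of these modeling choices can be seen in a small sketch (illustrative numbers, not the assignment's model): the present value a planner assigns to a constant stream of future benefits under different horizons and discount factors.

```python
def present_value(annual_benefit, horizon, discount):
    """Sum of annual_benefit * discount**t for t = 0..horizon-1:
    the total weight a planner gives a constant stream of benefits."""
    return sum(annual_benefit * discount**t for t in range(horizon))

# A longer horizon counts more future years; a discount factor closer
# to 1 gives each of those years more weight.
for horizon in (40, 100):
    for discount in (0.97, 0.999):
        print(horizon, discount,
              round(present_value(1.0, horizon, discount), 1))
```

Changing either parameter changes how much the far future counts, which is exactly what the five ethical positions disagree about.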
Naive Bayes, Sentiment, and Harms of Classification
Course: From Language to Information
Students solve a technical problem to understand the math behind the Naive Bayes sentiment classifier. Then, they are prompted to think about how the underlying mathematical assumptions of the Naive Bayes algorithm can cause ethical problems, like bias.
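The math in question can be sketched as follows (a toy multinomial Naive Bayes with add-one smoothing; the training documents are made up):

```python
import math
from collections import Counter

def train_nb(docs, labels):
    """Multinomial Naive Bayes with add-one (Laplace) smoothing.
    Returns a predict function mapping a tokenized document to a class."""
    classes = set(labels)
    priors = {c: labels.count(c) / len(labels) for c in classes}
    counts = {c: Counter() for c in classes}
    vocab = set()
    for doc, c in zip(docs, labels):
        counts[c].update(doc)
        vocab.update(doc)

    def predict(doc):
        scores = {}
        for c in classes:
            total = sum(counts[c].values())
            score = math.log(priors[c])
            for w in doc:  # conditional independence assumption
                score += math.log((counts[c][w] + 1) / (total + len(vocab)))
            scores[c] = score
        return max(scores, key=scores.get)

    return predict

predict = train_nb(
    docs=[["great", "fun"], ["boring", "bad"], ["great", "plot"]],
    labels=["pos", "neg", "pos"],
)
print(predict(["great"]))  # pos
```

The classifier's core assumption, that word probabilities are independent given the class, is exactly what lets frequent but spurious word-class correlations drive its predictions.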
Needfinding
Course: Intro to HCI
With this assignment, students reflect on the group of users their project is intended to serve; their reasons for selecting these users; the notion of an “extreme user”; and why extreme users’ perspectives are valuable for the design process. It also asks them to reflect on what accommodations they make for their interviewees.
POV and Experience Prototypes
Course: Intro to HCI
With this assignment, students are prompted to reflect on how proposed solutions to the problems they identify may exclude members of certain communities.
Privacy and End to End Encryption
Course: Programming Methodology
This lecture briefly introduces students to encryption as a powerful tool for protecting digital privacy, while also introducing the limits of its ability to resolve certain privacy concerns.
Residency Hours Scheduling
Course: AI Principles
In this assignment, students explore constraint satisfaction problems (CSPs) and use backtracking search to solve them. Many real-world uses of constraint satisfaction involve assigning resources to entities, like assigning packages to different trucks to optimize delivery. When the agents are people, however, the issue of fair division arises. In this question, students consider the ethics of which constraints to remove from a CSP when the CSP is unsatisfiable.
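A minimal backtracking CSP solver, applied to a hypothetical two-resident toy instance (not the course's scheduling problem), looks like this:

```python
def backtrack(assignment, variables, domains, constraints):
    """Minimal backtracking search for a CSP. Each constraint takes a
    (possibly partial) assignment dict and returns False only when it
    is definitely violated."""
    if len(assignment) == len(variables):
        return assignment
    var = next(v for v in variables if v not in assignment)
    for value in domains[var]:
        assignment[var] = value
        if all(c(assignment) for c in constraints):
            result = backtrack(assignment, variables, domains, constraints)
            if result is not None:
                return result
        del assignment[var]
    return None

# Toy instance: assign two residents to shifts so they don't overlap.
variables = ["r1", "r2"]
domains = {"r1": ["day", "night"], "r2": ["day", "night"]}
constraints = [lambda a: "r1" not in a or "r2" not in a
               or a["r1"] != a["r2"]]
print(backtrack({}, variables, domains, constraints))
```

When no assignment satisfies every constraint, the solver returns None, and the assignment's ethical question begins: which constraint should be relaxed, and at whose expense?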
Responsible Disclosure & Partiality
Course: Intro to Systems
This assignment is about void * and generics. We added a case study about responsible disclosure and partiality. Students read a summary of researcher Dan Kaminsky’s discovery of a DNS vulnerability and answer questions about his decisions regarding disclosure of vulnerabilities as well as their own thoughts on partiality. The starter code is the full project provided to students.
Responsible Documentation
Course: Intro to Systems
When functions have assumptions, limitations or flaws, it is vital that the documentation makes those clear. Without documentation, developers don’t have the information they need to make good decisions when writing their programs. We added a documentation component to this C string assignment. Students write a manual page for the skan_token function they have implemented, learning responsible documentation practice as they go. The starter code is the full project provided to students.
Sentiment Classification and Maximum Group Loss
Course: AI Principles
Although the problems in the problem set build on one another, the ethics assignment itself begins with Problem 4: Toxicity Classification and Maximum Group Loss. Toxicity classifiers are designed to assist in moderating online forums by predicting whether an online comment is toxic or not so that comments predicted to be toxic can be flagged for humans to review. Unfortunately, such models have been observed to be biased: non-toxic comments mentioning demographic identities often get misclassified as toxic (e.g., “I am a [demographic identity]”). These biases arise because toxic comments often mention and attack demographic identities, and as a result, models learn to _spuriously correlate_ toxicity with the mention of these identities. Therefore, some groups are more likely to have comments incorrectly flagged for review: their group-level loss is higher than other groups’.
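The group-level quantity at issue can be computed directly (a toy sketch with made-up losses, not the problem set's code):

```python
def group_average_losses(losses, groups):
    """Average per-example loss within each demographic group."""
    return {g: sum(l for l, gg in zip(losses, groups) if gg == g) /
               sum(1 for gg in groups if gg == g)
            for g in set(groups)}

def max_group_loss(losses, groups):
    """The worst group's average loss: the objective a group-robust
    training procedure minimizes, instead of the overall average."""
    return max(group_average_losses(losses, groups).values())

losses = [0.1, 0.2, 0.9, 0.8]          # per-comment classifier loss
groups = ["a", "a", "b", "b"]          # demographic identity mentioned
print(sum(losses) / len(losses))       # overall average: ~0.5
print(max_group_loss(losses, groups))  # worst group ("b"): ~0.85
```

A model can look good on the overall average while one group's loss stays high; minimizing the maximum group loss instead targets exactly the disparity described above.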
Toxicity in Gaming
• 50 min
Course: Intro to Game Design
This lecture addresses toxicity in gaming: what it is, what causes and exacerbates it, and what designers can do about it. In answering these questions, we draw on important and familiar examples from contemporary gaming, from Gamergate to League of Legends.
Trust and Context
Course: OS Principles
This lecture and accompanying assignments consider how contextual factors in the deployment of operating systems affect trust relationships.
Trust and How It Manifests
Course: OS Principles
This lecture and accompanying assignments frame operating systems as the public infrastructure of computing, which thereby requires trust by system programmers, application developers, and technology users. We then introduce a framing of trust as an unquestioning attitude, identify ways trust manifests through assumption and inference, and provide examples of partly substituting for the need to trust through technical and socio-technical design.
Value Alignment
• 50 min (25 min each)
Course: Reinforcement Learning
We say we want AI to be value-aligned. But what exactly does this mean? An AI agent might act differently depending on whether it is aligned to its user's intentions, its user's preferences, its user's best interests, or overall moral rightness. This pair of lectures gets students to consider these differences and reflect on whether different targets of alignment might be more or less appropriate for different contexts (for example, LLM chatbots). It also introduces them to top-down and bottom-up strategies for value alignment, and considers the technical and philosophical problems facing these approaches.
Values in Design
• 40 min
Course: Intro to HCI
The lecture presents the concept of values in design. It introduces the distinction between intended and collateral values; discusses the importance of assumptions in the value-encoding process; and presents three strategies for addressing value conflicts that arise as a result of design decisions.