Reducing Societal-scale Risks from AI
The Center for AI Safety (CAIS — pronounced 'case') is a San Francisco-based research and field-building nonprofit. We believe that artificial intelligence (AI) has the potential to profoundly benefit the world, provided that we can develop and use it safely.
In contrast to the dramatic progress in AI, many basic problems in AI safety have yet to be solved. Our mission is to reduce societal-scale risks associated with AI by conducting safety research, building the field of AI safety researchers, and advocating for safety standards.
Featured CAIS Work
AI Safety Field-Building
AI Safety, Ethics, & Society
Now accepting applications for the November 3 to February 1 session
The course offers a comprehensive introduction to how current AI systems work, their societal-scale risks, and how to manage them.

ML Safety Infrastructure
Compute Cluster
Enabling ML safety research at scale
To support progress and innovation in AI safety, we offer researchers free access to our compute cluster, which can run and train large-scale AI systems.

AI Safety Field-Building
Philosophy Fellowship
Tackling conceptual issues in AI safety
The CAIS Philosophy Fellowship is a seven-month research program that investigates the societal implications and potential risks associated with advanced AI.

Dan Hendrycks
Director, Center for AI Safety
PhD Computer Science, UC Berkeley
"Preventing extreme risks from AI requires more than just technical work, so CAIS takes a multidisciplinary approach working across academic disciplines, public and private entities, and with the general public."
Risks from AI
Artificial intelligence (AI) has the potential to benefit and advance society. Like any other powerful technology, it also carries inherent risks, including some that are potentially catastrophic.
Current AI Systems
Current AI systems can already pass the bar exam, write code, fold proteins, and even explain humor.
AI Safety
As AI systems become more advanced and embedded in society, it becomes increasingly important to address and mitigate these risks. By prioritizing the development of safe and responsible AI practices, we can unlock the full potential of this technology for the benefit of humanity.

Our Research
We conduct impactful research aimed at improving the safety of AI systems.
Technical Research
At the Center for AI Safety, our research focuses exclusively on mitigating societal-scale risks posed by AI. As a technical research laboratory:
- We create foundational benchmarks and methods which lay the groundwork for the scientific community to address these technical challenges.
- We ensure our work is public and accessible. We publish in top ML conferences and always release our datasets and code.
Conceptual Research
In addition to our technical research, we also explore the less formalized aspects of AI safety.
- We pursue conceptual research that examines AI safety from a multidisciplinary perspective, incorporating insights from safety engineering, complex systems, international relations, philosophy, and other fields.
- Through our conceptual research, we create frameworks that aid in understanding the current technical challenges and publish papers which provide insight into the societal risks posed by future AI systems.
Learn more about CAIS
Frequently Asked Questions
We have compiled a list of frequently asked questions to help you find the answers you need quickly and easily.
What is CAIS’ mission?
CAIS’ mission is to reduce societal-scale risks from AI. We do this through research and field-building.
Where is CAIS located?
CAIS’ main offices are located in San Francisco, California.
What do you mean by field-building?
By field-building, we mean expanding the research field of AI safety by providing funding, research infrastructure, and educational resources. Our goal is to create a thriving research ecosystem that will drive progress towards safe AI. You can see examples of our projects on our field-building page.
How can I get involved?
CAIS is always looking for value-driven, talented individuals to join our team. You can also make a tax-deductible donation to CAIS to help us maintain our independent focus on AI safety.
What guides CAIS’ work?
Our work is driven by three main pillars: advancing safety research, building the safety research community, and promoting safety standards. We understand that technical work alone will not solve AI safety, and we prioritize having a real-world positive impact. You can see more on our mission page.
What kind of research does CAIS do?
As a technical research laboratory, CAIS develops foundational benchmarks and methods which concretize the problem and progress towards technical solutions. You can see examples of our work on our research page.
Want to help reduce risks from AI? Donate to support our mission.
Learn more about AI Frontiers. No technical background required.