Ought
Scale up good reasoning
Machine learning will transform any area of life that has abundant data and easily measurable objectives. But will it make us wiser?
Ought is a product-driven research lab that develops mechanisms for delegating high-quality reasoning to advanced machine learning systems.
Read our mission ->
Latest updates
AI Safety Needs Great Product Builders
We think that building products and testing out ideas is one of the best ways to have an impact on AI safety. This post explains why!
A Library and Tutorial for Factored Cognition with Language Models
Announcing the Interactive Composition Explorer (ICE) and the Factored Cognition Primer
How to use Elicit responsibly
How Elicit works and where it doesn't
Donate
We're a non-profit with 501(c) status. You can donate to support Ought's work on AI alignment and on scaling up good reasoning.
Donate ->