Effective Altruism Forum
Posts tagged community
Quick takes
I’d be keen for great people to apply to the Deputy Director role ($180-210k/y, remote) at the Mirror Biology Dialogues Fund. I spoke a bit about mirror bacteria on the 80k podcast, and James Smith also had a recent episode on it. I generally think this is among the most important roles in the biosecurity space; I’ve been working with the MBDF team for a while now and am impressed by what they’re getting done.
People might be surprised to hear that I put a ballpark 1% p(doom) on mirror bacteria alone at the start of 2024. That risk has been cut substantially by the scientific consensus that has since formed against building it, but there is some remaining risk that the boundaries will not be drawn far enough from the brink to keep it out of reach of bad actors. Having a great person in this role would help ensure a wider safety margin.
According to someone I chatted to at a party (not normally the optimal way to identify top new cause areas!), fungi might be a worrying new source of pandemics because of climate change.
Apparently this is because thermal barriers have historically prevented fungi from infecting humans, but as fungi adapt to higher temperatures, they are becoming better able to overcome those barriers. This article has a bit more on it:
https://theecologist.org/2026/jan/06/age-fungi
Purportedly, this is even scarier than a pathogen you can catch from other people, because you can catch it from the soil.
I suspect that if this were, in fact, the case, I would have heard about it sooner. I’m interested to hear comments from people who know more about this than I do, or who have more capacity than I do to read up on it.
Got sent a set of questions from ARBOx to handle async; thought I'd post my answers publicly:
* Can you explain more about mundane utility? How do you find these opportunities?
* Lots of projects need people and help! E.g., can you contribute to EleutherAI or close issues in Neuronpedia? Some more ideas:
* Contribute to the projects within SK’s GitHub follows and stars
* Make some contributions within Big list of lists of AI safety project ideas 2025
* Reach out to projects that you think are doing cool work and ask if you can help!
* BlueDot ideas for SWEs
* I'm an experienced software engineer. How can I contribute to AI safety?
* The software engineer’s guide to making your first AI safety contribution in <1 week
* From a non-coding perspective, you could, e.g.:
* Facilitate BlueDot courses
* Give people feedback on their research proposals, drafts, etc.
* Be accountability partners
* Offer to talk to people and share what you know with those who know less than you
* Check out these pieces from my colleagues:
* How to have an impact when the job market is not cooperating by Laura G Salmeron
* Your Goal Isn’t Really to Get a Job by Matt Beard
* What is your theory of change?
* As an 80k advisor, my ToC is “Try to help someone do something more impactful than they would have done had they not spoken to me.”
* Mainly, this means helping people get more familiar with, excited about, and working on things related to AI safety. It’s also about pointing them to resources and sometimes making warm introductions to people who can help them even more.
* Are there any particular pipelines / recommended programs for control research?
* Just the things you probably already know about – MATS and Astra are likely your best bets, but look through these papers to see if there is any low-hanging fruit for future work
* What are the most neglected areas of work in the AIS space?
* Hard question, with many opinions! I’m particularl
Dwarkesh (of the famed podcast) recently posted a call for new guest scouts. Given how influential his podcast is likely to be in shaping discourse around transformative AI (among other important things), this seems worth flagging and applying for (at least for students or early-career researchers in bio, AI, history, econ, math, or physics who have a few extra hours a week).
The role is remote, pays ~$100/hour, and expects ~5–10 hours/week. He’s looking for people who are deeply plugged into a field (e.g. grad students, postdocs, or practitioners) with high taste. Beyond scouting guests, the role also involves helping assemble curricula so he can rapidly get up to speed before interviews.
More details are in the blog post; link to apply (due Jan 23 at 11:59pm PST).
I’ve donated about $150,000 over the past couple years. Here are some of the many (what I believe to be) mistakes in my past giving:
1. Donating to multiple cause areas. When I first started getting into philosophy more seriously, I adopted a vegan lifestyle and started identifying as EA within only a few weeks of each other. Deciding on my donation allocations across cause areas was painful, as I assign positive moral weights to both humans and animals, and they might even be close in intrinsic value. I felt the urge to apologize to my vegan, non-AI-worrier friends for increasing my ratio of AI safety donations to animal welfare donations, while my non-vegan, non-EA friends and family thought that donating to animals over humans was crazy. Now my view is something like: donations to AI safety are probably orders of magnitude more effective than those to animal welfare or global health + development, so I should (and do) allocate 100% to AI safety.
2. Donating to multiple opportunities within the same cause area. Back in my early EA global health + development days, I found (and still find) the narrative that “some organizations are 100x more effective than others” pretty compelling, but I internally categorized orgs into just two buckets: high EV and low EV. I viewed GiveWell-recommended organizations as broadly 'high EV,' assuming that even if their point estimates differed, their credence intervals overlapped enough to make the choice between them negligible. That might even be true! However, I don’t believe it generalizes to animal welfare and AI safety. Now I’ve come full circle in a way: I believe that some things really are multiple times (or even orders of magnitude) higher EV than others, and I have chosen to shut up and multiply. If you are a smaller donor, it is unlikely that your donation will saturate a donation opportunity to the point that your nth dollar should go elsewhere (a toy calculation illustrating this is sketched after this list).
3. Donating to opportunities that major organizations recommend
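To make the “shut up and multiply” point in item 2 concrete, here is a minimal toy sketch. The cost-effectiveness figures below are invented purely for illustration (they are not the author’s estimates or anyone’s real numbers); the only point is that when marginal returns are roughly constant at a small donor’s scale, concentrating the whole budget on the highest-EV option beats splitting it across options.
```python
# Toy illustration with invented numbers: compare donation allocations
# when marginal cost-effectiveness is roughly constant (no saturation).

def expected_impact(allocation, impact_per_dollar):
    """Total expected impact of an allocation mapping option -> dollars."""
    return sum(dollars * impact_per_dollar[option]
               for option, dollars in allocation.items())

# Hypothetical "impact units per dollar" for three options (illustrative only).
impact_per_dollar = {"option_a": 10.0, "option_b": 1.0, "option_c": 0.5}

budget = 10_000  # a small donor's annual budget, in dollars

concentrated = {"option_a": budget}
split = {option: budget / 3 for option in impact_per_dollar}

print(expected_impact(concentrated, impact_per_dollar))  # 100000.0
print(expected_impact(split, impact_per_dollar))         # ~38333.3

# As long as a donation of this size can't saturate the top option
# (i.e. its marginal returns stay roughly flat), concentrating dominates.
```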
