We asked 10+ AI safety orgs about their hiring needs. What they need most: people who can hit the ground running.
We spoke with 10+ hiring managers from AI safety organisations engaged in technical research, policy work, and communications to understand their hiring needs.
The key challenge that surfaced was sourcing talent who can own and lead work independently, and who care deeply about AI risk.
These are people who can hit the ground running: they have a strong track record, built through years of professional experience or a few impressive projects.
Here are some examples of what this experience looks like:
Engineering (Lead / IC)
You have the technical depth to make good architectural calls on complex systems — databases, ML pipelines, large codebases.
You’ve shipped high-quality code in demanding environments (e.g. big tech or a high-velocity startup).
(Lead) You’ve led and grown engineering teams, and you can set a high bar for their work.
(IC) You can take vague requirements for a system and turn them into production-level code, even in unfamiliar contexts.
Research lead
You know what it takes to publish high-quality research end-to-end — scoping questions, choosing methodology, organising the work, and communicating findings to diverse stakeholders.
You’re a recognised expert in your domain, with the publications and collaborations to show for it.
You ask good questions that open up impactful research directions, and you lean on a strong professional network to get things done.
Communications lead
You have a track record of producing high-quality content tailored to specific audiences, and your public profile reflects that.
You’ve managed large project budgets and timelines, making evidence-based decisions to improve outcomes.
You’ve engaged seriously with AI safety — perhaps even producing content on it already.
Policy lead
You’ve worked directly with government and understand how to navigate political institutions.
You have a strong network in policy environments and know how to get things in front of the right people.
You have a track record of translating complex technical research into clear, actionable policy memos and briefings.
Note: these profiles are composites from the organisations we spoke to and are meant to be illustrative rather than specific.
You don’t need years of AI safety experience, but you do need to care
Given that relatively few people can claim 5+ years of direct AI safety experience, orgs are looking for the next best thing — people with a strong track record of excellence on similar projects outside the field.
Mission alignment helps bridge the gap.
Orgs are looking for people who’ve engaged seriously enough with AI risk to articulate why this work matters from their own perspective, and who have ideally made career choices that reflect that conviction.
These are small (under 50 people), mission-driven orgs whose salaries can’t compete with big tech. As a result, they’re looking for people whose motivation goes beyond compensation.
How to get started
The biggest barrier for capable talent entering the field is context.
Our AGI strategy course is designed to give you that context. You’ll develop your understanding of the risks AI poses and explore how you might contribute your expertise.
It’s a free, 30-hour course that AI safety orgs view as a strong signal of motivation, and it’s the first step most people take in their AI safety careers.
AI safety needs your expertise to help ensure a positive future with AI. Apply here to get started.

