I’ve reviewed ~1,000 applications for our Technical AI Safety course. I’ve found that most rejected applicants made at least one of these three mistakes:
misunderstanding the course’s purpose,
lacking technical readiness, or
not sufficiently demonstrating commitment to making AI safer.
Based on this, here’s some advice to help future applicants improve their chances of success!
Also, read our analysis of AI Alignment application mistakes. Almost all the advice there applies here too, especially:
Have a clear path to impact
Make your application easy to understand
Strike a balance with response length
Highlight impressive or relevant experience, even if it’s not a ‘formal’ qualification
Mistake #1: Misunderstanding the course’s purpose
The course focuses on technical approaches to preventing catastrophic risks from AI, such as power concentration, disempowerment, critical infrastructure collapse and bioengineered pandemics. It helps you understand current safety techniques, where the gaps are and how you can contribute to plugging them.
Strong applicants demonstrated they understood this by:
Articulating specific risks they’re concerned about (not just “making AI safer” broadly)
Recognising how transformative AI could be, for better or for worse
Showing they want to contribute to pushing the frontier of safety techniques
If you’re new to AI safety, we’d recommend our Future of AI course to start.
If you don’t have a good sense of what it means for “AI to go well”, we’d recommend completing our AGI Strategy course first to build a big-picture understanding of how you can contribute.
Mistake #2: Lacking technical readiness
Strong applicants demonstrated a background in ML, whether through work experience, formal education or personal projects.
They showed a sufficient understanding of how LLMs are trained and fine-tuned to keep up with technical discussions that build on this. It’s hard to critique technical proposals for training safer AI if you don’t understand the basics of how models are trained in the first place!
We’re looking for evidence that you’ve engaged deeply with the concepts, not just consumed content about them. Watching videos about AI safety and reading LessWrong is a start, but it doesn’t show us that you can work with these ideas.
Some strong signals from non-technical backgrounds include, but are not limited to:
Writing explainers of relevant technical concepts
Facilitating discussions that require you to explain technical concepts
Building and training your own simple neural networks, even if the code is messy! See the sketch below for the kind of toy project we mean
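To make that last signal concrete, here’s a minimal sketch of the kind of toy project we mean, assuming PyTorch (any framework, or even plain NumPy, is fine): a tiny network trained to learn XOR.

```python
# Toy example: train a tiny neural network to learn XOR.
# Assumes PyTorch; any framework (or plain NumPy) works just as well.
import torch
import torch.nn as nn

X = torch.tensor([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = torch.tensor([[0.], [1.], [1.], [0.]])

model = nn.Sequential(nn.Linear(2, 8), nn.ReLU(), nn.Linear(8, 1))
optimiser = torch.optim.Adam(model.parameters(), lr=0.05)
loss_fn = nn.BCEWithLogitsLoss()  # expects raw logits, so no sigmoid in the model

for step in range(2000):
    optimiser.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    optimiser.step()

print(torch.sigmoid(model(X)).round())  # should be close to [0, 1, 1, 0]
```

Even something this small shows you’ve worked through the basics of training: defining a model, a loss function and an optimisation loop.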
Not having a CS degree or a job in tech won’t count against you. We’ve had successful applicants from philosophy, policy, biology, and business backgrounds. What matters is that you’ve put in genuine effort to understand the technical foundations.
Mistake #3: Insufficient evidence of commitment
We need people who’ll act on what they learn, not just learn for learning’s sake.
Some signals from strong applications include, but are not limited to:
Identifying specific organisations and roles that match your concerns about AI
Organising discussions, reading groups, or events around AI safety topics
Setting aside a non-trivial amount of time and resources to transition into AI safety
Building prototypes or tools related to AI safety
The top 20% of applicants showed they’re ready to take bold action (e.g. founding new initiatives, making significant career pivots, or leveraging unique positions of influence). But these aren’t the only paths. We’re looking for evidence that you’ll act on what you learn, whatever form that takes in your context.
It’s not about having the perfect plan. It’s about showing you’re already moving toward action, even if you’re still figuring out the specifics.
Applying to our course
The last common mistake is not applying at all, or forgetting to do so by the deadline! Now that you know how to put your best foot forward, apply to our Technical AI Safety course today.

